“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: […] like a physician, who hath found out an infallible medicine, after the patient is dead.” —Jonathan Swift

  • That’s mainly why I’m curious to see specific examples: I’ve fixed hundreds if not thousands of typos and can’t remember this happening, even long before I had much experience editing. I’m long past the point where I’d be considered a new editor, so any results I’d get now would be bullshit anyway short of violating the rules and starting a smurf account.

    Regarding “in the clique”, people give a shit about who’s who a lot less than you’d think. Despite having 25,000 edits over 8 years, I’ve interacted with maybe three people in the top 100 by number of contributions (and I couldn’t even tell you who most of the rest are). I’m not a social butterfly on there, but I’ve weighed in on hundreds of discussions when needed. Not only do I almost never check who an editor is when I review their edit, but I know maybe 100 people total (orders of magnitude fewer than the pool of very active editors); even among the few people I’d consider acquaintances, I’ve had my edits reverted and reverted theirs.

    The only instance I’ve seen of someone trying to play king shit of fuck mountain and not immediately failing is in our article for San Francisco, where they insisted there was a strong consensus for using only one image in the infobox instead of the collage we use in 99.9% of major cities’ articles. The image was a picture of the Golden Gate Bridge in front of the San Francisco skyline – and it represented neither well. They’d been shutting down ideas for a collage for years, and when other editors found out about this, it turned into a request for comment (RfC). Despite their having 500,000 edits over about 18 years (which ought to put them in the alleged “clique”, even though I’d never heard of them before), the RfC swung wildly against them, to the point of being closed early, and the article now has a (I think really nice) collage.

    (TL;DR: the policy against trying to dictate the contents of an article isn’t just there so we can say “but c it’s agenst da rulez so it dusnt happin!!”; it’s there because the wider editing community fucking hates that shit and doesn’t put up with it.)



  • A good feature if you ever decide to edit again (on desktop, and probably mobile too) is the Show Preview button in the source editor. It renders the page as if you’d published your change. I said in another comment that almost 2% of my edits have been reverted in some way, and many of those are self-reverts. The only reason there are fewer immediate self-reverts these days isn’t that I’m making fewer mistakes; it’s that I’ve mostly replaced the “oh fuck go back” button with quickly identifying and fixing whatever I broke (unless what I’ve done is unsalvageable).

    The other day during a discussion, a few editors started joking about how many mistakes we make. Cullen328 (yes, the admin mentioned in this post) said: “One of my most common edit summaries is ‘Fixed typo’, which usually means that I fixed my own typo.” The Bushranger, another admin, replied: “I always spot mine just after hitting ‘Publish changes’…” And finally I said: “It feels like 50% of the edits I publish have the same energy as Peter watching Gwen Stacy fall to her death in slow motion in TASM 2.” Between the three of us are about 300,000 edits, two little icons with a mop, and over 30 years of editing experience. Not only will you fuck up at first, but you’ll continue to fuck up over and over again forever. It’s how you deal with it that counts, and you dealt with it well.



  • There’s fortunately no such thing as control of the page. Like I explained above, reversion is considered a normal but uncommon part of the editing process. Reverts are more common at the outset: new editors often have their initial edit reverted on policy/guideline grounds, then have a modified version of the edit let through with no issue. In order not to bite newcomers, experienced editors will often bite the bullet and take the time to fix policy/guideline violations themselves while telling the newcomer what they did wrong.

    If you go to discuss the reversion with the other editor on the talk page and it becomes clear this isn’t about policy or guideline violations (or they’re couching it in policy/guidelines through wikilawyering nonsense) but instead that they think they’re king shit of fuck mountain and own the article, ask an administrator. Administrators hate that shit.


  • That makes sense. “Probably over 20 years ago now” likely means there weren’t any solid guidelines or policies to revert based on, since it was only around 2006 that the community rapidly began developing formal standards. I’m betting a lot more reverts back then were “nuh uh”/“yuh huh” affairs than they are today. If you still remember the account name, I’m curious to see what bullshit transpired. If the watchlist even existed back then, someone probably saw a new edit, didn’t like it for whatever reason (I have no capacity to judge), and hit the “nuh uh” button. (Edit: I bet it was ‘Recent changes’, actually; probably more viable in an era of sub-100 edits per minute.)

    Something new editors get confused about (me especially; I was so pissed the first time) is that edits can be reverted by anyone for any reason. (By “can”, I don’t mean “may”; a pattern of bad-faith reversions will quickly get you blocked.) Almost 2% of my edits have been reverted in some way, and plenty of those have been by people with 1/100th the experience I have (some rightly so, some not so much). Reversion is actually considered a very normal if uncommon part of the editing process, and when done in good faith, it’s used to generate a healthy consensus on the talk page. But the pertinent point is that reversions can be done by anybody, just like additions can be done by anybody; it’s just another edit in “the free encyclopedia that anyone can edit™”. I remember once reverting an admin’s edit (normal editing, not administrative work), and we just had a normal conversation whose outcome I can’t remember. It happens to everyone.




  • This is an ad for a proofreading service, so nominally it’s meant for you to use in formal writing. In that context, only a small proportion of these words are “fancy”.

    That said, a thesaurus is best used for remembering words you already know, i.e. not like shown here. Careful use of a thesaurus to find new words can work provided you research them first – e.g. look them up on Wiktionary (bang !wt on DuckDuckGo) to see example sentences, etymologies, pronunciations, other possible meanings, usage context (e.g. slang, archaic, jargon), etc. – but if you’re already writing something, just stick to what you know unless it’s dire. You should make an effort to learn words over time as they come up in appropriate contexts rather than memorizing them as replacements for other words; this infographic offers a shortcut that’s probably harder and less accurate than actually learning.

    A one-night stand with a word you found in the thesaurus is going to alienate people who don’t know what it means and probably make you look like a jackass to those who do.





  • TheTechnician27@lemmy.world to Programming@programming.dev · Stack overflow is almost dead

    Dude, I’m sorry, I just don’t know how else to tell you “you don’t know what you’re talking about”. I’d refer you to Chapter 20 of Goodfellow et al.’s 2016 book Deep Learning, but 1) it tragically came out a year before transformer models, and 2) most of it will go over your head without a foundation from many previous chapters. What you’re describing – generative AI training on generative AI ad infinitum – is a death spiral. Literally the entire premise of adversarial training of generative AI is that for the classifier to get better, you need to keep funneling in real material alongside the fake material.

    You keep anthropomorphizing with “AI can already understand X”, but that betrays a fundamental misunderstanding of what a deep learning model is: it doesn’t “understand” shit about fuck; it’s an unfathomably complex nonlinear algebraic function that transforms inputs to outputs. To summarize in a word why you’re so wrong: overfitting. This is one of the first things you’ll learn about in an ML class, and it’s what happens when you let a model train on the same data over and over again forever. It’s especially bad for a classifier to be overfitted when it’s pitted against a generator, because a sufficiently complex generator will learn how to outsmart the overfitted classifier, settling into a cozy little local minimum that in reality works like dogshit but fools the classifier – which is its only job.
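
    To make “overfitting” concrete, here’s a minimal sketch in Python – toy sine-wave data I made up, nothing from any real model – of what happens when a model has way too much capacity for its data: it memorizes the noise instead of learning the signal.

    ```python
    # Minimal overfitting demo on toy data (all numbers here are made up).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 10)  # 10 noisy samples

    x_val = np.linspace(0, 1, 100)                      # held-out points
    y_val = np.sin(2 * np.pi * x_val)                   # the true signal

    for degree in (3, 9):
        coeffs = np.polyfit(x, y, degree)  # "train" on the same 10 points
        train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, val MSE {val_mse:.4f}")

    # The degree-9 polynomial nails its 10 training points (train MSE ~0) and
    # faceplants on everything else: it memorized; it didn't learn.
    ```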

    You really, really, really just fundamentally do not understand how a machine learning model works, and that’s okay – it’s a complex tool being presented to people who have no business knowing what a Hessian matrix or a DCT is – but please understand when you’re talking about it that these are extremely advanced and complex statistical models that work on mathematics, not vibes.


  • TheTechnician27@lemmy.world to Programming@programming.dev · Stack overflow is almost dead

    Your analogy simply does not hold here. If you’re having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it’s doing. Chess has the following:

    1. A very limited set of clearly defined, rigid rules.
    2. One single end objective: put the other king in checkmate before yours is or, if you can’t, go for a draw.
    3. Reasonable metrics for how you’re doing and an ability to reasonably predict how you’ll be doing later.

    Here’s where generative AI is different: when you’re doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier is given some amount of human-made material and some amount of generator-made material and tries to distinguish them. The classifier’s goal is to be correct, and the generator’s goal is to make the classifier pick completely randomly (i.e. its guesses become coin flips). As you train, you gradually get both to be very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.
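
    Here’s a minimal sketch of that loop in PyTorch – toy 1-D “real” data I made up, not anyone’s actual setup – where the thing to notice is that the classifier’s update needs a batch of real, human-made samples every single step:

    ```python
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # classifier
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, 1) * 0.5 + 2.0  # stand-in for human-made material
        fake = G(torch.randn(64, 8))           # generator-made material

        # Classifier's goal: be correct (real -> 1, fake -> 0). Without the
        # `real` batch, this loss can't teach it anything about what's genuine.
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator's goal: get its output labeled "real", i.e. drive the
        # classifier's verdict toward a coin flip.
        g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    ```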

    Imagine teaching a 2nd grader the difference between a horse and a zebra, having never shown them either before, by holding up pictures and asking whether each contains a horse or a zebra. Except the entire time, you just keep holding up pictures of zebras and expect the child to learn what a horse looks like. That’s what you’re describing for the classifier.



  • This is entirely correct, and it’s deeply troubling seeing the general public use LLMs for confirmation bias because they don’t understand anything about them. It’s not “accidentally confessing” like the other reply to your comment suggests. An LLM is just designed to process language, and because it’s trained on some of the largest datasets in history, there’s practically no way to know where any individual output came from unless you can directly verify it yourself.

    Information you prompt it with is tokenized, run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided), and then detokenized and printed to the screen. There’s no “thinking” involved here, but if we anthropomorphize it like that, then there could be any number of things: it “thinks” that’s what you want to hear; it “thinks” that based on the mountains of text data it’s been trained on calling Musk racist, etc. You’re talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no possible way to enforce quality beyond bare-bones manual constraints.
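
    To see how mechanical that pipeline is, here’s a minimal sketch using Hugging Face’s transformers library with the small public GPT-2 weights as a stand-in; whatever commercial model you’re prompting does the same dance at vastly larger scale:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Is Musk racist?", return_tensors="pt")  # text -> token IDs
    output_ids = model.generate(**inputs, max_new_tokens=20)    # IDs -> more IDs
    print(tokenizer.decode(output_ids[0]))                      # IDs -> text

    # Nothing in this pipeline checks whether the output is true;
    # it's numbers in, numbers out.
    ```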

    There are ways to exploit LLMs to reveal sensitive information, yes, but you then have to confirm that the sensitive information is true, because you’ve just sent data into a black box and gotten something out. You can get a GPT to solve a sudoku puzzle, but you can’t then parade the answer around before you’ve checked that the solution is actually correct. You cannot ever, under literally any circumstance, trust anything a generative AI creates for factual accuracy; at best, you can use it as a shortcut to an answer which you can then attempt to verify.
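
    That “attempt to verify” step is usually far cheaper than trusting blindly. Sticking with the sudoku example, here’s a sketch of a checker in Python (assuming `grid` is the claimed solution as nine rows of nine ints):

    ```python
    def is_valid_sudoku(grid: list[list[int]]) -> bool:
        """True if every row, column, and 3x3 box is a permutation of 1-9.
        (You'd also want to check that it matches the original clues.)"""
        units = list(grid)                              # 9 rows
        units += [list(col) for col in zip(*grid)]      # 9 columns
        units += [[grid[r][c]
                   for r in range(br, br + 3)
                   for c in range(bc, bc + 3)]
                  for br in (0, 3, 6) for bc in (0, 3, 6)]  # 9 boxes
        return all(sorted(u) == list(range(1, 10)) for u in units)
    ```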







  • Didn’t downvote you, and I do agree that “mixed economy” doesn’t technically have a concrete meaning. I could’ve said “welfare state” as well. Here, “mixed” is generally understood to mean somewhere near the middle of the spectrum, however we define that. As you note, the US and Cuba both lie on this spectrum, just far to either side of it. So even though “all economies are mixed”, the economies of the Nordic states are more mixed than most.

    In general, I believe we agree that Norway and Denmark aren’t “socialist”.