• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: July 8th, 2023



  • Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm performing better than random chance could also be used to solve a known intractable problem, which is impossible with all currently known methods. This means that any algorithm we can produce that works by training an AI would run in exponential time or worse.

    The paper's authors point out that this also has severe implications for current AI: since the AI-by-learning method that underpins all LLMs is fundamentally NP-hard and can't run in polynomial time, "the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n." They present a thought experiment of an AI that handles a 15-minute conversation, assuming 60 words are spoken per minute (keep in mind the average is roughly 160). The input size n this AI would have to handle is 60 * 15 = 900 words. The authors then conclude:

    “Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a miniscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n.”

    That’s why LLMs are a dead end.
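    The quoted arithmetic can be sanity-checked directly; here's a minimal Python sketch using only the figures from the thought experiment above (n = 900, and the paper's ~10^81 atoms and ~10^21 samples estimates):

    ```python
    import math

    # Thought experiment from the quoted paper: a 15-minute conversation
    # at 60 words per minute gives an input size of n = 900 words.
    n = 60 * 15

    # Proxy resource cost O(2^n): express 2^900 as a power of ten.
    exponent = math.floor(n * math.log10(2))  # log10(2^900) = 900 * log10(2)
    print(f"2^{n} ~ 10^{exponent}")           # 2^900 ~ 10^270

    # Orders-of-magnitude shortfall versus the quote's estimates:
    atoms_exp = 81      # ~10^81 atoms in the universe (paper's figure)
    big_data_exp = 21   # ~10^21 stored data samples (paper's figure)
    print(f"shortfall vs. 'Big Data': ~10^{exponent - big_data_exp} samples")
    ```

    Even granting the most generous data-collection estimate, the sample space outruns the available data by roughly 250 orders of magnitude, which is the paper's point.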




  • The problem is that there’s no incentive for employees to stay beyond a few years. Why spend months or years training someone if they leave after the second year?

    But then you have to question why employees aren’t loyal any longer, and that’s because pensions and benefits have eroded, and your pay doesn’t keep up as you stay longer at a company. Why stay at a company for 20, 30, or 40 years when you can come out way ahead financially by hopping jobs every 2-4 years?


  • It makes sense to judge how closely LLMs mimic human learning when people use that comparison as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.

    But when it’s pointed out that LLMs don’t learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn’t be judged by human standards? I don’t know if it’s intentional on your part, but that’s a pretty classic example of a motte-and-bailey fallacy. You can’t have it both ways.




  • Who even knows? For whatever reason the board decided to keep quiet, didn’t elaborate on its reasoning, let Altman and his allies control the narrative, and rolled over when the employees inevitably revolted. All we have is speculation and unnamed “sources close to the matter,” which you may or may not find credible.

    Even if the actual reasoning was absolutely justified–and knowing how much of a techbro Altman is (especially with his insanely creepy project to combine cryptocurrency with retina scans), I absolutely believe the speculation that the board felt Altman wasn’t trustworthy–they didn’t bother to actually tell anyone that reasoning, and clearly felt they could just weather the firestorm up until they realized it was too late and they’d already shot themselves in the foot.





  • And the admins (and myself, for that matter) want to exist without the risk of doing a perp walk because Little Timmy saw a peen.

    I’m on an NSFW Lemmy instance. I have multiple NSFW accounts spread over the various platforms, and my single biggest fear is that some shithead kid is going to ignore the giant “18+ only” warnings because they’re so MATURE for their age, they’re going to find adult content (or worse yet, try to message me and pretend they’re over 18 so I don’t block them), and one of their relatives finds out and calls the police. Intentionally done or not, I’ve seen exactly that scenario play out, ruining the lives of multiple people through no fault of their own.

    The Lemmy admins all have to worry about this exact same thing too, except they have to worry about every kid and every NSFW account/community, unless they decide to either play whack-a-mole with the various NSFW instances, or move to default-deny federation and only federate with known-SFW communities. And that’s on top of the existing CSAM spam concerns that they appear to have only recently gotten under control.

    I don’t give a single solitary flying fuck about whether children can express themselves equally. They’re NOT equal to an adult, because I don’t risk jail time by showing off my [REDACTED] to them.





  • Theoretically it can happen. In practical terms, 99% of those cases come down to one of three things:

    • A charade to get an angry customer to go away (pretending to fire an employee)

    • The last straw in a series of incidents that add up to justify firing the employee (e.g. the employee has repeatedly made the same mistake with no improvement over a long period of time)

    • Misconduct egregious enough to warrant firing them on the spot (for example, the employee punches a customer, or shows up to a job site blackout drunk)

    The remaining 1% of cases are truly shitty managers that are a nightmare to work for.