• 48 Posts
  • 928 Comments
Joined 1 year ago
Cake day: September 13th, 2024

  • So, instead of feeding large documents into these models which break them, you can instead provide them with an API to interrogate the document by writing code

    Kind of off topic, but this reminded me about something I really don’t like about the current paradigm of “intelligence” and “knowledge” being parts of a single monolithic model.

    Why aren’t we training models on how to search any generic dataset for information, find patterns, draw conclusions, etc., rather than baking the knowledge itself into the model? Eight or so GB of pure abstract reasoning strategies would probably be far more intelligent and efficient than even the much larger models we have now.

    Imagine if you could just give it an arbitrarily sized database whose content you control, which you can then fill with the highest quality, ethically obtained, human-expert-moderated data, complete with attributions to the original creators, and have it base all its decisions on that. It would even be able to cite what it used with identifiers in the database, which could then be manually verified. You get a concrete record of where it’s getting its information from, and you only need to load what it currently needs into memory, whereas right now you have to load all of the AI’s “knowledge,” relevant or not, into your precious and limited RAM.

    You would also be able to update the individual data separately from the model itself, and have it produce updated results from the new data. That would actually be what I consider an artificial “intelligence,” not a fancy statistical prediction mechanism.
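
    A minimal sketch of what that could look like, assuming a hypothetical reasoning_model that holds no knowledge of its own and a plain SQLite database as the user-controlled knowledge store; every answer has to cite the row IDs it drew from, so it can be checked by hand:

    ```python
    import sqlite3

    def answer(question: str, db_path: str, reasoning_model) -> str:
        """Answer a question using only records from a user-controlled database.

        reasoning_model is a hypothetical model with no baked-in world
        knowledge: it can only turn a question into search terms and
        summarize the records it is handed back.
        """
        con = sqlite3.connect(db_path)

        # 1. The model proposes search terms instead of answering from memory.
        terms = reasoning_model.plan_search(question)

        # 2. Fetch matching records (id, source attribution, text) from the DB,
        #    which the user curates and can update independently of the model.
        rows = con.execute(
            "SELECT id, source, body FROM documents WHERE body LIKE ?",
            (f"%{terms}%",),
        ).fetchall()

        # 3. The model reasons only over what was retrieved and must cite the
        #    record ids it used, so every claim is manually verifiable.
        return reasoning_model.summarize(question, rows, require_citations=True)
    ```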



  • Small models have gotten remarkably good. 1 to 8 billion parameters, tuned for specific tasks — and they run on hardware that organizations already own

    Hard disagree, as someone who does host their own AI. Go on Ollama and run some models; you’ll immediately realize that the smaller ones are basically useless. IMO 70B models are barely usable for the simplest tasks, and with the current RAM landscape those are no longer accessible to most people unless you already bought the RAM before the Altman deal.

    I suspect that’s why he made that deal despite not having an immediate need for that much RAM: to artificially limit the public’s ability to self-host their own AI and thereby mitigate the threat open source models pose to his business.
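
    For a rough sense of why, here is a back-of-the-envelope estimate of the memory needed just to hold the weights (ignoring context/KV cache and other runtime overhead), assuming typical bytes-per-parameter for common quantization levels:

    ```python
    # Rough, assumption-laden figures: approximate bytes per parameter.
    BYTES_PER_PARAM = {
        "fp16": 2.0,   # full 16-bit weights
        "q8":   1.0,   # ~8-bit quantization
        "q4":   0.5,   # ~4-bit quantization (what most people run locally)
    }

    def weights_size_gb(params_billions: float, quant: str) -> float:
        """Approximate size of the model weights alone, in GB."""
        return params_billions * BYTES_PER_PARAM[quant]

    for size in (8, 70):
        for quant in ("fp16", "q8", "q4"):
            print(f"{size}B @ {quant}: ~{weights_size_gb(size, quant):.0f} GB")

    # Even at ~4 bits, a 70B model needs on the order of 35 GB of RAM/VRAM
    # before any context, which is out of reach of most consumer machines.
    ```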









  • The Developer ID certificate is the digital signature macOS uses to verify legitimate software. The certificate that Logitech allowed to lapse was being used to secure inter-process communications, which resulted in the software not being able to start successfully, in some cases leading to an endless boot loop.

    This is 100% on Apple users for letting a company decide what their computer can and can’t run, and then bragging about its security as if it had some super special zero-trust architecture and weren’t just a walled garden with a single point of failure, dependent on opaque decision-making criteria for what code should be “allowed” to run on the system.

    A key- and signature-based security model doesn’t prove that something is safe, it proves that it’s approved. They’re not the same.

    Macs don’t get malware. Unless it’s malware Apple approves, those are called apps.
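
    You can see the distinction on a Mac itself. A minimal sketch (with a hypothetical app path) asking the two separate questions, “is the signature intact?” and “does Apple approve of this running?”; neither answer says anything about whether the code is actually safe:

    ```python
    import subprocess

    APP = "/Applications/SomeApp.app"  # hypothetical path, for illustration

    # codesign checks that the signature is present and the binary untampered.
    signature = subprocess.run(
        ["codesign", "--verify", "--deep", "--strict", APP],
        capture_output=True, text=True,
    )

    # spctl asks Gatekeeper whether the app is *approved* to run
    # (Developer ID / notarization) -- approval, not a safety audit.
    gatekeeper = subprocess.run(
        ["spctl", "--assess", "--verbose", APP],
        capture_output=True, text=True,
    )

    print("signature valid:", signature.returncode == 0)
    print("Apple approved: ", gatekeeper.returncode == 0)
    ```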




  • HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml · electron.jxl

    it was called CROSS PLATFORM APPS

    Absolutely not, unless it’s as sandboxed as the web (and even the web isn’t sandboxed that well).

    Working with software has only made me not trust software (that’s not open source).

    Why we’re giving any random piece of software full user-level access in 2026 is beyond me.
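
    A minimal sketch of the alternative, assuming bubblewrap (bwrap) is installed and using a hypothetical untrusted binary: run it with a read-only system, a throwaway home directory, and no network, instead of handing it the whole user account:

    ```python
    import subprocess

    UNTRUSTED = "/usr/bin/some-untrusted-app"  # hypothetical binary

    subprocess.run([
        "bwrap",
        "--ro-bind", "/usr", "/usr",   # system files visible but read-only
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/home",            # empty, disposable home directory
        "--tmpfs", "/tmp",
        "--unshare-all",               # no network, no view of other processes
        UNTRUSTED,
    ])
    ```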