• 1 Post
  • 672 Comments
Joined 4 years ago
Cake day: January 17th, 2022

  • There’s no getting around using AI for some of this, like subtitle generation

    Eh… yes there is: you can pay actual humans to do that. In fact, if you do “subtitle generation” (whatever that might mean) without any editing, you are taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up the main topic… well, good luck.

    Anyway, if you still do want to go down that road, you could try:

    • ffmpeg with whisper.cpp (but honestly I’m not convinced hardcoding subtitles is a good practice; why not package them as a soft track in e.g. an .mkv? Depends on context, obviously)
    • Kdenlive with Vosk
    • Kdenlive with whatever else, via the .srt, .ass, .vtt or .sbv formats
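    If you do go the whisper.cpp route, a minimal sketch could look like the following (file names and the model path are assumptions; depending on your whisper.cpp version the binary may be called `main` or `whisper-cli`):

    ```shell
    # whisper.cpp expects 16 kHz mono WAV input, so extract the audio first.
    ffmpeg -i talk.mkv -ar 16000 -ac 1 talk.wav

    # Transcribe to talk.srt (model file is an assumption, pick the one you downloaded).
    ./main -m models/ggml-base.en.bin -f talk.wav --output-srt --output-file talk

    # Mux the subtitles as a soft track instead of hardcoding them into the video.
    ffmpeg -i talk.mkv -i talk.srt -c copy -c:s srt talk-subbed.mkv
    ```

    The point of the soft track: viewers can toggle it off, and you can fix the inevitable transcription errors by editing talk.srt before muxing, which is exactly the human pass I’m arguing for above.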

  • Sad but unsurprising.

    I did read quite a lot on the topic, including “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass” (2019), and watched numerous documentaries, e.g. “Invisibles – Les travailleurs du clic” (2020).

    What I find interesting here is that the tasks seem to go beyond dataset annotation. In a way it is still annotation (as in: you take a piece of data, e.g. a photo, and you circle part of it to attach a label, e.g. “cat”), but here it seems to be second-order, i.e. identifying the blind spots in how the dataset itself is handled. That still doesn’t mean anything produced is more valuable, or that the expected outcome is feasible with solely larger datasets and more compute, yet maybe it does show a change in the quality of the tasks to be done.

  • Brand-new example: “Skills” by Anthropic, https://www.anthropic.com/news/skills. Even though the audience here is technical, it is still a marketing term. Why? Because the entire phrasing implies agency. There is no “one” getting new skills here. It’s as if I added bash scripts to my ~/bin directory but, instead of saying “the first script will use a regex to start the appropriate script”, I named my process “Theodore” and said I was “teaching” it new “abilities”. It would be literally the same thing: functionally equivalent, with an actually identical implementation… but users, specifically non-technical users, would assume there is more going on than just branching between options. They would also assume errors are just “it” in the process of “learning”.

    It’s really a brilliant marketing trick, but it’s nothing more.
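    To make the analogy concrete, here is a minimal sketch (the script names, the patterns, and “Theodore” himself are all hypothetical) of everything such a dispatcher amounts to:

    ```shell
    #!/bin/sh
    # "Theodore": a dispatcher that picks a "skill" script based on the request
    # text. It is pattern matching and branching, nothing more; no agency,
    # no learning, just whichever glob happens to match first.
    dispatch() {
      case "$1" in
        *resize*image*) echo "would run: ~/bin/resize-image.sh" ;;
        *convert*pdf*)  echo "would run: ~/bin/pdf-convert.sh" ;;
        *)              echo "no matching script" ;;
      esac
    }

    dispatch "please resize this image"
    ```

    Rename `dispatch` to “Theodore”, call the case branches “abilities”, and you have the marketing version of the exact same code.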

  • The word “hallucination” itself is a marketing term. The fact that it is frequently used in the technical literature does not make it unproblematic. It is used because it highlights a real problem (namely that some of the output of LLMs is not factually correct), but the name itself is wrong. Hallucination implies there is someone: a perceiver with a world model who, typically via heuristics (for efficient interfaces, as Donald Hoffman suggests), perceives incorrectly, leading to bad decisions regarding the current problem to solve.

    So… sure, “it” (trying not to use the term) is structural, but that is simply because LLMs have no notion of veracity or truth (or of anything else, to be clear). They have no simulation against which to verify whether the output they propose (the tokens out, the sentence the user gets) is correct or not; it is solely highly probable given their training data.
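    A toy sketch of that last point (the vocabulary and probabilities below are made up; this is in no way a real LLM): next-token sampling picks tokens by probability alone, and no step anywhere checks the claim against the world.

    ```python
    import random

    # Toy next-token table (invented numbers, not real model weights): all the
    # "model" knows is how often each token followed this context in training.
    next_token_probs = {
        "the capital of australia is": {"Canberra": 0.5, "Sydney": 0.4, "Melbourne": 0.1},
    }

    def next_token(context: str, rng: random.Random) -> str:
        """Sample the next token purely from the probability table.

        Nothing here consults a world model or checks veracity: a plausible
        but wrong continuation ("Sydney") is emitted with high probability.
        """
        dist = next_token_probs[context]
        tokens = list(dist)
        weights = list(dist.values())
        return rng.choices(tokens, weights=weights, k=1)[0]

    rng = random.Random()
    print(next_token("the capital of australia is", rng))
    ```

    The right answer comes out often, the wrong one comes out often too, and from the inside of the sampling loop the two are indistinguishable; that is the structural part.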