• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: July 10th, 2024

  • Zacryon@feddit.org to Asklemmy@lemmy.ml · What's a Tankie?
    11 days ago

    Tankie is a pejorative label generally applied to authoritarian communists, especially those who support or defend acts of repression by such regimes, their allies, or deny the occurrence of the events thereof. More specifically, the term has been applied to those who express support for one-party Marxist–Leninist socialist republics, whether contemporary or historical. It is commonly used by anti-authoritarian leftists, anarchists, libertarian socialists, left communists, social democrats, democratic socialists, and reformists to criticise Leninism, although the term has seen increasing use by liberal and right‐wing factions as well.

    https://en.wikipedia.org/wiki/Tankie


    There was a similar study/survey recently, by Microsoft I believe (I'm not sure anymore if it was really them), where similar results were found. In my experience, LLM-based coding assistants are pretty okay for low-complexity tasks and for creating boilerplate code, especially when the task does not require a deeper understanding of the system architecture.

    But the more complex the task becomes, the harder they start to suck and fail. This is where the time drag begins. Common mistakes or outdated coding approaches are also used rather often instead of newer standards. Deviations from the given instructions also happen way too often. And if you do not check the generated code thoroughly, which can happen if the code “looks okay” at first glance, then finding the resulting bugs and error sources can become quite cumbersome.
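
    A typical example of the outdated-approach problem (a made-up sketch, not from an actual session): old-style string formatting and os.path juggling where current Python would use pathlib and f-strings:

    ```python
    import os
    from pathlib import Path

    # Outdated style that assistants still produce quite often:
    def report_path(base_dir, name):
        return os.path.join(base_dir, "%s_report.txt" % name)

    # The more current, idiomatic variant (pathlib, f-strings, type hints):
    def report_path_modern(base_dir: Path, name: str) -> Path:
        return base_dir / f"{name}_report.txt"
    ```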

    Debugging is where I have wasted most of my time with AI assistants. While there is some advantage in having a somewhat more capable rubber duck, it is usually not really helpful in fixing stuff. Either the error/bug sources are completely missed (even some beginner mistakes), or it applies band-aid solutions rather than addressing the cause, or, worst of all, it is very stubborn about the alleged problem cause (possibly combined with forgetting earlier debugging findings, resulting in a tedious reasoning and chat loop). I have found myself arguing with the machine more often than I’d like. Hallucinations or unfounded fix hypotheses regularly make this worse.
    However, letting the AI assistant add some low-level debug code to help analyze the problem has often been useful in my experience. But this requires clear and precise instructions; you can’t just hope the assistant will cover all important values and aspects.
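
    For instance, something along these lines (a minimal sketch with a made-up function; the point is logging exactly the intermediate values you suspect):

    ```python
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger(__name__)

    def blend(alpha: float, a: float, b: float) -> float:
        # Hypothetical function under suspicion: log the inputs and the
        # intermediate result instead of guessing where it goes wrong.
        result = alpha * a + (1.0 - alpha) * b
        log.debug("blend: alpha=%r a=%r b=%r -> result=%r", alpha, a, b, result)
        return result
    ```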

    When I ask the assistant to logically step through some lines of code, possibly using an example, to nudge it towards seeing how its reasoning was wrong, it’s funny to see, e.g. with Claude, how it first says things like “This works as intended!” and a moment later “Wait… this is not right. Let me think about it again.”

    This becomes less funny for very fundamental stuff. There were times when the AI assistant told me that 0.5 is greater than 0.8, for example, which really shows the “autocorrect on steroids” nature of LLMs rather than an active critical thinking process. This is bad, obviously. But it also keeps jobs for humans in various fields of IT safe.

    All the typing during the conversation is naturally also really slow, especially when writing more than a few sentences to provide context.

    Where I do find AI assistants in coding mostly useful is in exploring APIs that I do not know so well, or code written by others that is possibly underdocumented. (Which is unfortunately really common. Most devs don’t seem to like writing documentation.)
    Generating documentation for such code, or for my own, is also pretty good in most cases, but it tends to contain mistakes or miss important mechanisms.

    Overall, in my experience, AI assistance gives a mild productivity boost for very low-level tasks with low complexity and low contextual knowledge requirements. The assistants are useful for exploring code and writing documentation, but I cannot really recommend them for debugging. It is important to learn how to use such AI tools precisely in order to save time instead of wasting it, since as of now they are not really capable of much.



    For dipping your toes into a new topic I think it’s perfectly fine. It provides useful pointers for further “research” (in a sense that would meet your requirement) and also manages to produce mostly accurate overviews. But sure, to really dive into a topic, LLMs like ChatGPT and co. are just low-level assistants at best, and one should go through the material oneself.







    I don’t like code that isn’t well documented. In fact, this has been my main source of frustration in the past and has required the most time to deal with. Thousands of variables, hundreds of thousands of lines of code: how am I supposed to get through it reasonably fast if there aren’t any comments or pieces of documentation guiding my understanding? I can’t spend half a year just getting a grasp of how the code works.

    Comments (as well as docstrings, readmes, etc.) provide higher-level overviews that can guide you through the code rather quickly. Even if such documentation is longer in words or characters than the lines of code it describes, it can accelerate understanding tremendously. It’s just a lot more effort to trace each variable and see what it does and how it interacts with others; this can quickly become exponentially hard to track.

    I don’t think it’s necessary to comment every line of code, except in rare cases or maybe when setting up a class and describing its members and roughly how they’re used. But a few words here and there, at some higher or intermediate level, roughly describing what you want to do, can go a long way for others (and even for yourself, when working on a project for several years). It’s often already sufficient to just highlight the most important variables in a piece of code when explaining it; given that info, your focus is steered when reading the actual code.

    Expressive (“speaking”) variable/function/… names are also very useful. I don’t care if a name is long, as long as it’s sufficiently expressive, e.g. “space_info” instead of “si”. This helps to understand the code more quickly and reduces backtracking lookups, where you check again what a variable you haven’t seen for a while actually does. My rule of thumb for naming: as concise, short and “essence-grasping” as possible, but as long as necessary.
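
    A small made-up sketch of both points, terse version first:

    ```python
    # Hard to follow: you have to trace every name to see what it means.
    def upd(si, f):
        si["u"] += f
        return si["u"]

    # Easier to follow: expressive names plus one high-level comment.
    def update_used_space(space_info: dict, file_size: int) -> int:
        # Bookkeeping only: add the new file's size to the used-space counter.
        space_info["used_bytes"] += file_size
        return space_info["used_bytes"]
    ```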


    I suppose you’re referring to the article I’ve linked. As I see it: if an increasing number of applications worldwide are running on Python, then energy and time consumption are important aspects, not only cost-wise but especially since we’re grilling our planet. Therefore, comparing it with more efficient languages is indeed meaningful.


  • Python sucks.

    Not only is it extremely inefficient, it is also a pain in the ass to work with if you have to use APIs that heavily rely on dynamic type wrapping and don’t provide stubs. Static analysis via Pylance is not possible then, and you’re basically poking around in the dark, which makes getting to know such an API enormously more difficult. Even worse if there isn’t even halfway decent documentation.
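
    To illustrate (a hypothetical library, just a sketch): if “somedynlib” builds its interface dynamically at import time, Pylance sees nothing, but a small hand-written stub file restores static analysis:

    ```python
    # somedynlib.pyi -- hand-written stub for a hypothetical library whose
    # real module generates its functions dynamically, so Pylance cannot
    # infer any of this on its own.
    class Connection:
        def query(self, sql: str) -> list[dict[str, object]]: ...
        def close(self) -> None: ...

    def connect(host: str, port: int = 5432) -> Connection: ...
    ```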



  • Zacryon@feddit.org to Memes@lemmy.ml · AI sucks
    7 months ago

    Scientific consensus. But “AI” is not just “LLMs”. AI covers a multitude of methods, algorithms and models. LLMs fall under sequence modeling / prediction, nowadays usually based on transformers, which is a method from machine learning, which is itself a big branch within the broader field of “AI”.