I’ve seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (this article was also hating on Ed Zitron, which makes sense given its angle).
Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn to code but can’t grasp it very well). But yes, a lot of the time it’s wrong.
if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs
LLMs aren’t the brain: they’re exactly what they are… a fancy autocomplete…
type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well-structured code should be) it usually gets it pretty right (see the sketch after this list): it’s somewhat of a substitute for libraries, but not for your own structure
let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten
fill lines of data based on other data structures: it can transform text quicker than you can write regex, and I’ve never had it fail at this
let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and, whilst not knowledge, does have a pretty good handle on synonyms, summarising, etc.
there’s loads of things LLMs are good for, but unless you’re just learning something new and you know your code will be garbage anyway, none of those things replace your brain: just the repetitive crap you probably hate to start with, the stuff you could explain to a non-programmer and they could carry out the task
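for the function-header trick, here’s roughly what I mean (a made-up example; the function and names are hypothetical, but it’s representative of what you type vs what it fills in):

```typescript
// you write the signature and the doc comment; the LLM fills in the body
/** Returns the given strings sorted case-insensitively, without mutating the input. */
function sortCaseInsensitive(items: string[]): string[] {
  // a typical completion: copy first, then compare lowercased values
  return [...items].sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase()));
}

console.log(sortCaseInsensitive(["banana", "Apple", "cherry"])); // [ 'Apple', 'banana', 'cherry' ]
```

small, descriptive, easy to eyeball: that’s the sweet spot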
I never said that, and a single review will already make a junior dev better right off the bat
I agree, but then you say…
…which says the other thing. Implementing a function isn’t for a “fancy autocomplete”, it’s for a brain to do. Unless all you do is reinvent the wheel, then yeah, it can generate a decent wheel for you.
Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.
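Here’s the failure mode as a made-up sketch (hypothetical function; the point is that the generated test asserts what the code does, not what it should do):

```typescript
// Supposed to count values in the inclusive range [lo, hi].
// Bug: `v < hi` excludes the upper bound; it should be `v <= hi`.
function countInRange(values: number[], lo: number, hi: number): number {
  return values.filter((v) => v >= lo && v < hi).length;
}

// A "generated" test that encodes the buggy behaviour as its expectation:
const got = countInRange([1, 2, 3, 4, 5], 2, 4);
if (got !== 2) throw new Error(`expected 2, got ${got}`);
console.log("test passed"); // green, even though the correct answer is 3
```

The suite goes green, you move on, and the bug ships.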
I’m not sure what you mean by that.
I agree with that, naming or even documenting is a good way to use an LLM. With supervision of course, but an imprecise name or documentation is not critical.
Not speaking for them, but I use LLMs for this. You have lines of repetitive code, and you realize you need to swap the order of things within each line. You could brute force it, or you could write a regex search/replace. Instead, you tell the LLM to do it and it saves a lot of time.
Swapping the order of things is just one example. It can change capitalization, insert values, or generate endless amounts of mock data.
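Concretely, something like this (a made-up example, with the regex you’d otherwise write by hand):

```typescript
// Repetitive lines where the two arguments need swapping:
const lines = [
  'lookup.set("alpha", 1);',
  'lookup.set("beta", 2);',
  'lookup.set("gamma", 3);',
];

// The hand-written regex version of the transformation:
const swapped = lines.map((line) =>
  line.replace(/set\((".*?"), (.*?)\)/, "set($2, $1)")
);

console.log(swapped); // [ 'lookup.set(1, "alpha");', ... ]
```

The LLM does the same transformation from a one-sentence instruction, which is why it wins for one-off jobs.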
Ah! That does seem useful indeed! Even just generating a bunch of dummy data.
I was tasked once with writing a front-end for an API that didn’t exist yet, but I had a model. I could have written a loop that generated “Person Man 1”, “Person Man 2”, etc. with all of the associated fields, but instead I gave the LLM my class definition and it spat out 50 people with unique names, phone numbers, emails, and everything. It made it easy to test the paging and especially the filtering. It also took like 30 seconds to ask for and receive.
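Roughly the shape of the exchange (hypothetical field names here, not my actual model):

```typescript
// You paste the type; it hands back records matching it.
interface Person {
  id: number;
  name: string;
  email: string;
  phone: string;
}

// Varied, realistic-looking values instead of "Person Man 1", "Person Man 2", ...
const mockPeople: Person[] = [
  { id: 1, name: "Dana Whitfield", email: "dana.whitfield@example.com", phone: "555-0142" },
  { id: 2, name: "Marcus Obi", email: "m.obi@example.com", phone: "555-0178" },
  // ...and 48 more in the same style
];
```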
I originally asked it to make punny names based on celebrities, and it said “I can’t do that.” ☹️
The junior developer can (hopefully) learn and improve.
LLMs are also improving though.
They’ll never be able to learn, though.
An LLM is merely a statistical model of its training material. Very well indexed, but extremely lossy, compression.
It will always be outdated. It can never become familiar with your codebase and coding practices. And it’ll always be extremely unreliable, because it’s just a text generator without any semblance of comprehension about what the texts it generates actually mean.
All it’ll ever be able to do is reproduce the standards as they were when its training data was captured.
If we are to compare it to a junior developer, it’d be someone who suffered a traumatic brain injury just after leaving college: one that prevents them from ever learning anything new, leaves them unaware that they can’t learn and incapable of realising when they don’t know something, makes them unable to reason about or comprehend what they’re saying, and causes them to suffer from verbal diarrhoea and excessive sycophancy.
Now, such a tragically brain-damaged individual might look like the ideal worker to the average CEO, but I definitely wouldn’t want them anywhere near my code.