Omg, the comments are so out of hand. I regularly do code reviews for colleagues who use AI to write code (some under protest, but still). The comments are usually the worst part.
The thing writes entire novels in the summary that do nothing but confuse and add cognitive load. It adds comments to super obvious things, describing what the code does instead of why. Yes AI, I can read code, I know that assigning a value to a variable is how shit works. And I still have PTSD from those kinds of comments from a legacy system I worked on for years that did the exact same thing, except the comments and the code didn't match up, so it was a constant guessing game which one was the intended one.
It also likes to put responses to the prompt in the comments. For example, when it assigned A to a variable and it was supposed to be B, and you point this out, it adds a comment saying something like: this is supposed to be B, not A. But when you read those comments after the fact, they make zero sense. Like, of course it should be B? Why would it ever be A?
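To be clear, this example is made up (the names and numbers are hypothetical, not from any real review), but it's the kind of thing I mean:

```python
# The style being complained about: comments that restate the code,
# plus a comment that only makes sense as a reply to the reviewer.

# Set the timeout to 30           <- describes *what*, which I can read myself
timeout = 30  # This is supposed to be 30, not 60   <- an answer to the prompt, baffling a month later

# What a useful comment looks like: the *why*.
# The upstream load balancer drops idle connections after 60 seconds, so stay well under that.
timeout = 30
```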
And it often generates a bunch of markdown docs which are plain drivel; luckily most devs just delete those before I see them.
My personal experience is that in about 30% of cases the AI is just plain wrong and the result is nonsense: delete that shit and try again. In the 70% that does produce some kind of answer, there is ALWAYS at least one big issue and usually multiple. It's 50/50 whether the code is workable with some kinks to iron out, or seriously flawed and needs a lot of work. For experienced devs it can be helpful when they have writer's block, giving them something to be angry about and showing them how they can do better. But for inexperienced devs it's just plain terrible: the code is shit and the dev doesn't even know it. And worse still, the dev doesn't learn. I try to sit down with them, explain the shortcomings and how to do better. But they don't learn; they just figure out what to put in the prompt so I don't get on their case. Or they'll say stuff like: but it works, right? Facepalm
The company I do work for also tried getting their sysadmins and devops people to use AI. Then one day there was a permissions issue, admittedly a pretty complicated one, which they ended up solving with AI. The team was happy, upper management was happy, high fives all around. Until the grumpy old sysadmin with 40 years of experience took a look and hit the big ol' red alarm button of doom. One full investigation later: the AI had fucked up and created a huge hole in the security. There was zero evidence it had been exploited, but that doesn't matter. All the work still needed to be done, all the paperwork filed, the proper agencies informed, because the security issue was there. Management eased up on AI usage for those people real fast.
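Purely as a hypothetical sketch (not what actually happened there), this is the general shape of the failure mode: the "fix" makes the permissions error go away and quietly takes the access control with it:

```python
import os

SHARED_DIR = "/srv/app/uploads"  # hypothetical path, not from the incident above

# The quick "it works now" fix: make everything world-readable, -writable and -executable.
# The original error is gone, and so is any meaningful access control.
for root, dirs, files in os.walk(SHARED_DIR):
    for name in dirs + files:
        os.chmod(os.path.join(root, name), 0o777)

# A narrower fix would repair ownership or group membership instead,
# e.g. chown the tree to the service account and use 0o750 / 0o640.
```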
It’s so weird how people in charge want to use AI, but aren’t even really sure of what it is and what it isn’t. And they don’t listen to what the people with actual knowledge have to say. In their minds we are probably all just covering our asses to not be out of a job.
But for real if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!
And it often generates a bunch of markdown docs which are plain drivel; luckily most devs just delete those before I see them.
My favorite is when it generates a tree of the files in a directory in a README and a description for each file. How the fuck is this useful? Files will be added and removed, so there’s now an additional task to update these docs whenever that happens. Nobody will remember to do so because no tool is going to enforce that and it’s stupid anyway.
Sure, document high-level directories. But do you really need all of that in the top-level README?
But for real if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!
Nothing to add. Just quoting this section because it needs to be highlighted lol.
Very well said. This is 100% my experience and could have been written by me. This is exactly what it is. We're going to be seeing a lot of low-quality code after 2024/5, sadly :(