A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer.
What kind of work do they do?
In my role as CEO of Carrington Labs, a provider of predictive-analytics risk models for lenders, my team has a sandbox where we create, deploy, and run AI-generated code without a human in the loop. We use it to extract useful features for model construction, a natural-selection approach to feature development.
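To make that concrete, here is a toy sketch of what such a natural-selection loop could look like. This is a guess at the general shape, not anything Carrington Labs has described in detail: candidate feature functions (hard-coded stand-ins here for LLM-generated code, with every name and number invented) get scored against data, and only the ones that clear a fitness bar survive.

```python
# Hypothetical sketch of a "natural-selection" feature loop: candidate feature
# transforms (stand-ins for LLM-generated code) are scored on data and only
# the ones that clear a fitness threshold are kept. All names are invented.
import random
import statistics

random.seed(0)

# Toy lending-style dataset: (income, debt, repaid) rows, purely illustrative.
rows = [(random.uniform(20, 120), random.uniform(0, 60)) for _ in range(200)]
rows = [(inc, debt, 1 if inc - 1.5 * debt + random.gauss(0, 10) > 30 else 0)
        for inc, debt in rows]

# Stand-ins for AI-generated candidate feature functions.
candidates = {
    "income": lambda inc, debt: inc,
    "debt": lambda inc, debt: debt,
    "income_minus_debt": lambda inc, debt: inc - debt,
    "debt_ratio": lambda inc, debt: debt / (inc + 1e-9),
}

def score(feature):
    """Crude fitness: absolute correlation between the feature and repayment."""
    xs = [feature(inc, debt) for inc, debt, _ in rows]
    ys = [repaid for _, _, repaid in rows]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

# "Selection" step: keep only candidates that clear the fitness bar.
kept = {name: f for name, f in candidates.items() if score(f) > 0.3}
for name in kept:
    print(f"kept feature {name} (|corr| = {score(candidates[name]):.2f})")
```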
I wonder what I should imagine this actually doing, and how. How do they interface with the no-human-in-the-loop setup?
Either way, they do seem to have a (small, narrow) systematic test case, and enough variance in the product for it to be useful at least anecdotally, as a sample case.
I have a feeling that their test case is also a bit flawed. Trying to get index_value instead of index value is something I can imagine happening, and asking an LLM to ‘fix this but give no explanation’ is asking for a bad solution.
I think they are still correct in their assumption that the output becomes worse, though.
It just emphasizes the importance of tests to me. The example should fail very obviously when you give it even the most basic test data.
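As a minimal sketch of the kind of basic test being described, assuming the bug is code that reads a column named index_value from data whose actual column is "index value" (the function and column names here are hypothetical):

```python
# Minimal sketch of a basic test that would surface the mismatch. Assumes the
# (hypothetical) AI-"fixed" code looks up "index_value" while the real data
# uses a column named "index value". All names are invented for illustration.
def lookup_index(record):
    # Silently returns a default when the key is missing, instead of failing loudly.
    return record.get("index_value", 0.0)

def test_lookup_index_uses_real_column_name():
    record = {"index value": 42.0}          # the data as it actually arrives
    assert lookup_index(record) == 42.0     # fails: the code read the wrong key

if __name__ == "__main__":
    test_lookup_index_uses_real_column_name()
```

Running this raises an AssertionError immediately, which is exactly the kind of very obvious failure even the most basic test data should produce.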
Yeah, if only QA were not the first ‘replaced’ by AI 😠
This isn’t even a QA-level thing. If you write any tests at all, which is basic software engineering practice, even if you had AI write the tests for you, the error should be very, very obvious. I guess we could go down the road of “well, what if the engineer doesn’t read the tests?”, but at that point the article is less about insidious AI and more about bad engineers. So then just blame bad engineers.
Yeah, I understand that this case doesn’t require QA, but in the wild, companies increasingly seem to think that developers are still necessary (for now) while QA surely are not.
It’s not even bad engineers; as I see it, it’s just squeezing productivity as dry as possible.