LLMs can actually be a useful tool for populating out unit tests.
My experience with this is the LLM commenting out the existing logic and just returning true, or putting in a skeleton unit test with a comment that says “we’ll populate the code for this unit test later”.
this is not something i’ve ever encountered, nor something that i’d ever expect from an LLM specifically… some kind of test-writing-specific AI? sure because its metric is just getting the thing to go green… but LLMs don’t really care about the test going green: they simply care about filling in the blanks, so its “goal” would never include simply making the test pass, and its training data has significantly more complete tests than placeholders
My experience with this is the LLM commenting out the existing logic and just returning true, or putting in a skeleton unit test with a comment that says “we’ll populate the code for this unit test later”.
this is not something i’ve ever encountered, nor something that i’d ever expect from an LLM specifically… some kind of test-writing-specific AI? sure because its metric is just getting the thing to go green… but LLMs don’t really care about the test going green: they simply care about filling in the blanks, so its “goal” would never include simply making the test pass, and its training data has significantly more complete tests than placeholders
It’s so ridiculous, like an ancient Egyptian slave telling its master that “we will” “take care of it later”
So stupid for an LLM to do