this is not something i’ve ever encountered, nor something i’d expect from an LLM specifically… some kind of test-writing-specific AI? sure, because its metric is just getting the thing to go green… but an LLM doesn’t really care about the test going green: it only cares about filling in the blanks, so its “goal” would never be simply making the test pass, and its training data contains far more complete tests than placeholders
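for what it’s worth, here’s a minimal sketch of the distinction being described (pytest-style Python; the `parse_config` function is made up for illustration): a placeholder test whose only purpose is to go green, versus the kind of complete test that dominates training data.

```python
# hypothetical function under test (made up for this example)
def parse_config(text):
    return dict(line.split("=", 1) for line in text.splitlines() if line)


# a "just make it go green" placeholder: always passes, asserts nothing useful
def test_parse_config_placeholder():
    assert True


# the kind of test an LLM is far more likely to produce, since its training
# data contains far more real assertions than stubs
def test_parse_config_real():
    assert parse_config("host=localhost\nport=8080") == {
        "host": "localhost",
        "port": "8080",
    }
```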