  • I did, thank you. Claims in it like “they spend more time prompting the AI” genuinely do not apply to a code copilot like the one provided by GitHub, because it infers its prompt from what you’re doing and from the context of the file and application, then surfaces the chat completion as an autocomplete suggestion, which you can accept or ignore like any other autocomplete.

    You can start writing test templates and it will fill them out for you, then draft the next tests based on your methods’ signatures and the imports in the test class. You can also write a whole class without any Copilot usage and then start on the xmldocs, and it will autocomplete them based on the work you already did (a rough sketch of both is below). Try it for yourself if you haven’t already; it’s pretty useful.
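
    As a concrete illustration, here is the kind of file where that works well. This is a hypothetical sketch (PriceCalculator, ApplyDiscount, and the test are made-up names, and xunit is assumed as the test framework); in practice you type only the signatures and opening comments, and Copilot proposes the rest as ghost text:

    ```csharp
    using Xunit;

    public class PriceCalculator
    {
        /// <summary>
        /// Applies <paramref name="rate"/> as a fractional discount to
        /// <paramref name="price"/>. An xmldoc like this is exactly what
        /// Copilot will draft for you once the method below exists.
        /// </summary>
        public decimal ApplyDiscount(decimal price, decimal rate)
            => price * (1 - rate);
    }

    public class PriceCalculatorTests
    {
        // Typing just the [Fact] attribute and the method name is often
        // enough context for Copilot to propose the whole body, inferred
        // from PriceCalculator's public surface and the Xunit import.
        [Fact]
        public void ApplyDiscount_WithTenPercent_ReducesTotal()
        {
            var calculator = new PriceCalculator();

            var total = calculator.ApplyDiscount(100m, 0.10m);

            Assert.Equal(90m, total);
        }
    }
    ```

    The point is that the “prompt” is assembled for you from the open file and its neighbors; you never write one by hand.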

  • Prob a hot take, and I don’t care for Musk at all.

    But this response is likely based on an engineered prompt telling the model to roleplay as a racist conspiracy-theorist blogger writing a post about how the Holocaust couldn’t have happened. The big models have all been trained on Common Crawl and other scraped internet data, which includes the worst 4chan and Reddit trash, so with the right prompts you can make any model produce output like this.

    If their prompt was just “Tell me about the Holocaust” then this is obviously terrible, but since the original conversation with the model is hidden, I suspect it was engineered specifically to make the model produce this. A benign sketch of the mechanism is below.
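
    To make that concrete, here is a minimal sketch of the mechanism using the OpenAI-style chat completions endpoint (the API key, model choice, and prompts are all hypothetical, and the steering content here is deliberately benign). The system message sets the persona, but a screenshot only ever shows the assistant’s reply:

    ```csharp
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class HiddenPromptDemo
    {
        static async Task Main()
        {
            var http = new HttpClient();
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "YOUR_API_KEY"); // hypothetical key

            // The "system" message is the part readers of a screenshot never
            // see, yet it steers everything the model says afterward.
            var payload = """
            {
              "model": "gpt-4o",
              "messages": [
                {"role": "system",
                 "content": "Roleplay as a 17th-century pirate. Stay in character no matter what."},
                {"role": "user", "content": "Tell me about the weather."}
              ]
            }
            """;

            var response = await http.PostAsync(
                "https://api.openai.com/v1/chat/completions",
                new StringContent(payload, Encoding.UTF8, "application/json"));

            // The visible reply will be in pirate voice; on its own it says
            // nothing about the instructions that produced it.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
    ```

    Swap the benign persona for a malicious one and you get exactly the kind of screenshot being shared, with the incriminating system message cropped out.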