AI Models Trained on Buggy Code Mirror Errors, Study Finds
Large language models trained on flawed data tend to replicate those mistakes, according to researchers from institutions including the Chinese Academy of Sciences. "In bug-prone tasks, the likelihood of LLMs generating correct code is nearly the same as generating buggy code," the study found.