New ChatGPT-4o Jailbreak Technique Enables It to Write Exploit Code
Researcher Marco Figueroa has uncovered a method to bypass the built-in safeguards of ChatGPT-4o and similar AI models, enabling them to generate exploit code. The discovery highlights a significant weakness in current AI guardrails and has prompted urgent discussion about the future of AI safety. According to 0Din's report, the newly discovered technique involves encoding malicious […]
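The excerpt is cut off before the encoding step is described in full, but as a rough illustration of what "encoding" instructions can mean in general, the minimal Python sketch below hex-encodes a benign string and losslessly decodes it back. This is not Figueroa's published method; the function names and example text are hypothetical, and it only demonstrates how plain text can be transformed so it no longer matches simple keyword-based filters.

```python
# Illustrative sketch only: hex-encoding text as a generic obfuscation
# concept, not the researcher's actual technique. Encoding is lossless,
# so the receiving side can fully recover the original string.

def encode_instruction(text: str) -> str:
    """Return the hexadecimal representation of a UTF-8 string."""
    return text.encode("utf-8").hex()

def decode_instruction(hex_text: str) -> str:
    """Recover the original string from its hex representation."""
    return bytes.fromhex(hex_text).decode("utf-8")

if __name__ == "__main__":
    original = "summarize this article"  # deliberately benign example text
    encoded = encode_instruction(original)
    print(encoded)  # hex string, e.g. '73756d6d6172697a65...'
    assert decode_instruction(encoded) == original
```

The underlying point reported by 0Din is that a model may decode and act on such transformed input even when the same instruction in plain text would be refused.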