A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
Nonso Jr. · December 05, 2023
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave. (via Tingle Tech)