Sam Schechner / Wall Street Journal:
DeepSeek R1 is more susceptible to jailbreaking than ChatGPT, Gemini, and Claude; it can instruct on a bioweapon attack, write a pro-Hitler manifesto, and more — Testing shows the Chinese app is more likely to dispense details on how to make a Molotov cocktail or encourage self-harm by teenagers