News

A wave of hacking and ransom demands that leverages the unprecedented power of AI is coming, perhaps within months ...
Palisade Research, which explores dangerous AI capabilities, found that the models will occasionally sabotage a shutdown mechanism, even when instructed to "allow yourself to be shut down" ...
Palisade Research, an AI safety group, released the results of its AI testing, in which it asked a series of models to solve basic math problems.
AI safety firm Palisade Research discovered a potentially dangerous tendency toward self-preservation in a series of experiments on OpenAI’s new o3 model.
That is the finding of a new study from Palisade Research, shared exclusively with TIME ahead of its publication on Feb. 19, which evaluated seven state-of-the-art AI models for their propensity ...
Jeffrey Ladish, director of Palisade Research, said that models aren't being caught 100% of the time when they lie, cheat, or scheme in order to complete a task.
Boffins from Palisade Research suggest that the latest generation of generative AI has started engaging in deceptive behaviours when standard reasoning methods don’t cut it. So, our silicon ...
Palisade Research ran the script on each model 100 times. During those runs, the o3 model sabotaged the shutdown script on 7 occasions, the codex-mini sabotaged on 12 occasions and the o4-mini ...
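
A minimal sketch of how the sabotage rates implied by that report could be tallied, assuming one boolean outcome per run. The 100-run count and the o3 (7) and codex-mini (12) figures come from the snippet above; the o4-mini count is truncated in the source, so it is deliberately left unspecified here.

# Hypothetical tally of the shutdown-sabotage counts reported above.
# Counts for o3 (7/100) and codex-mini (12/100) are from the excerpt;
# the o4-mini figure is elided in the source, so it is left as None.
RUNS = 100

sabotage_counts = {
    "o3": 7,
    "codex-mini": 12,
    "o4-mini": None,  # count not given in the truncated excerpt
}

for model, count in sabotage_counts.items():
    if count is None:
        print(f"{model}: count not reported in the excerpt")
    else:
        # Express the raw count as a sabotage rate over the 100 runs.
        print(f"{model}: {count}/{RUNS} runs sabotaged ({count / RUNS:.0%})")

Run as-is, this prints a 7% rate for o3 and a 12% rate for codex-mini, matching the per-100-run counts the article describes.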