It Takes Only 250 Documents to Poison Any AI Model

From Dark Reading, 22 October; indexed 23 October 2025, 4:01

Researchers find it takes far fewer poisoned documents to manipulate a large language model's (LLM) behavior than previously assumed.

