Home / news

 

Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code

from DarkReading 21 August indexed on 21 August 2024 16:01

CodeBreaker technique can create code samples that poison the output of code-completing LLMs, resulting in vulnerable — and undetectable — code suggestions.

Read more.

 

TOP