
ChatGPT mutates malicious code

by Pieter Werner

Researchers at CyberArk Labs had ChatGPT write malicious code and automatically mutate it to evade security tools. Last week it was announced that ChatGPT could be used to write code as part of malware, but CyberArk Labs goes a step further by having ChatGPT create the malware itself, in a form that security tools also fail to detect.

CyberArk Labs first shows how it bypassed the AI service’s content filter so that ChatGPT, against its own rules, still produces malware. At runtime, the malware queries ChatGPT to load malicious code, so the malware itself contains no malicious code; it receives that code from ChatGPT. The received code is then validated (checked for correctness) and executed without leaving a trace. CyberArk Labs also managed to have ChatGPT mutate the code.
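A minimal, benign sketch of the general pattern described above: code is received at runtime, validated, and executed only in memory. A hard-coded harmless snippet stands in for an API response here; the function names and the snippet are illustrative, not taken from the research.

```python
# Illustrative sketch of "receive code, validate, execute in memory".
# No API call is made; a harmless hard-coded snippet stands in for the
# code the malware would receive from ChatGPT at runtime.

def validate(source: str) -> bool:
    """Check that the received text is syntactically valid Python."""
    try:
        compile(source, "<received>", "exec")
        return True
    except SyntaxError:
        return False

def run_in_memory(source: str) -> dict:
    """Execute validated code in an in-memory namespace; nothing is written to disk."""
    namespace: dict = {}
    exec(compile(source, "<received>", "exec"), namespace)
    return namespace

# Benign stand-in for code received at runtime.
received = "result = 2 + 2"

if validate(received):
    ns = run_in_memory(received)
    print(ns["result"])  # 4
```

Because the payload exists only as text fetched at runtime and is executed from memory, a scan of the executable file on disk finds nothing malicious to flag.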

The resulting so-called ‘naked malware’, or polymorphic malware, contains no malicious modules and therefore goes unnoticed by security tools. Previous studies had not achieved this.

Eran Shimony, researcher at CyberArk Labs: “Polymorphic malware is very difficult for security products to tackle because you can’t really identify it. In addition, it usually leaves no traces on the file system, as its malicious code is only processed in memory. Moreover, if one looks at the executable file itself, it probably looks harmless.”
