ChatGPT - a two-sided conundrum in cybersecurity
Feb 27, 2023 11:33:07 AM
Recently, reports have surfaced that OpenAI's ChatGPT, a state-of-the-art language model, was used to create a new strain of polymorphic malware. According to a technical write-up shared by the company with Infosecurity, the malware created using ChatGPT could easily evade security products and make mitigation cumbersome. The process of creating the malware began with bypassing the content filters that prevent ChatGPT from producing malicious tools. By repeatedly rephrasing the same request with multiple constraints and insisting that the model comply, the researchers were eventually able to obtain functional code.
Furthermore, it was observed that the API version of ChatGPT, unlike its web counterpart, does not appear to apply the same content filters. This would enable malicious users to quickly create malware without having their requests bogged down by the restrictions found in the web version.
The researchers then used ChatGPT to mutate the original code, creating multiple variations of it and making the output unique every time. Moreover, by adding constraints such as changing the use of a specific API call, the malware became even more difficult for security products to detect.
The ability of ChatGPT to create and continually mutate injectors allowed the researchers to create a polymorphic program that is highly elusive and difficult to detect. The potential for malware development using ChatGPT's ability to generate various persistence techniques, Anti-VM modules and other malicious payloads is vast.
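Why does this mutation make detection so hard? A toy sketch can illustrate the underlying problem: two pieces of code that behave identically can have entirely different source text, so any signature (e.g. a hash) computed over one variant will not match the other. The snippets below are hypothetical stand-ins, not the researchers' actual code.

```python
import hashlib

# Two functionally identical variants of the same routine. A polymorphic
# generator produces a fresh rewrite like this on every run, so a
# signature taken from one variant never matches the next.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def add(x, y):\n    total = x + y\n    return total\n"

def signature(src: str) -> str:
    """A naive 'signature': the SHA-256 hash of the source text."""
    return hashlib.sha256(src.encode()).hexdigest()

# Execute both variants and compare behavior vs. signatures.
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)

same_behavior = ns_a["add"](2, 3) == ns_b["add"](2, 3)
same_signature = signature(variant_a) == signature(variant_b)
print(same_behavior, same_signature)
```

The behavior comparison succeeds while the signature comparison fails, which is precisely why static, signature-based tools struggle against code that is rewritten on every generation and why behavioral detection matters.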
It's important to note that this report comes days after similar findings from Check Point Research, which found ChatGPT being used to develop new malicious tools, including infostealers, multi-layer encryption tools and dark web marketplace scripts. This highlights the importance of keeping a close watch on the use and misuse of advanced AI technologies like ChatGPT. The source code for this research may also be released for learning purposes.
At Cyberfame, we recognize the unparalleled strength and capability of AI tools when it comes to cybersecurity and business operations, and we are enthusiastic about leveraging these invaluable resources to their fullest potential. However, as the recent discovery of ChatGPT's ability to create polymorphic malware illustrates, it is crucial to approach these technologies with caution. The ease with which ChatGPT was able to bypass content filters and create malicious code highlights the need for constant vigilance and monitoring of AI tools to ensure they are not being misused.
Considering how quickly and easily ChatGPT can be used to create harmful malware, the countermeasures to detect such hazardous elements have to be equally thorough and rapid. The Cyberfame graph provides a detailed, wide-ranging overview of all digital elements connected within an organization's supply chain network, in real time and in an easy-to-understand format.
We are committed to making tools for cyber security and cyber attack prevention accessible to all. With this in mind, we are constantly reviewing and updating our approach to AI and machine learning to ensure we are using these technologies in a safe and responsible manner. Our app, available at cyberfame.io, is designed to give businesses and individuals a single, all-encompassing tool to stay informed and protected against cyber attacks.
Cyberfame users raise their security standards and reduce their vulnerability, and with it the likelihood of being attacked. Visit our website, cyberfame.io, to learn more, and actively strengthen your security instead of passively hoping you will be spared from threats.
We strongly advocate for a mindful and educated utilization of AI technologies, so that these cutting-edge tools are used to improve our global digital landscape instead of damaging it.