ChatGPT and computer viruses: AI and malware
Artificial intelligence has been shaping our lives for years, but most of us only became aware of it a few months ago, when ChatGPT was released and attracted enormous interest. The application has brought many benefits to users, along with some abuses (for example in schools and scientific research), but ChatGPT has not been used only for "benevolent purposes".
Several schemes linking ChatGPT to viruses, malware and phishing messages have already appeared, and have probably already begun to claim victims.
But is the combination of ChatGPT and viruses really dangerous? Are we facing a "Terminator"-style first step in which machines take sides against humans?
Let's find out with an overview of the topic.
ChatGPT
ChatGPT stands for Chat Generative Pre-trained Transformer. It is a language model trained by analysing millions of texts and other information from the internet, updated to 2020. When queried correctly, ChatGPT can help with text processing, translation and many other tasks.
These include writing code, generating forms, email newsletters and web pages.
So much power, available to users for free, opens up many possibilities and opportunities while reducing effort and time.
Like any tool, it lends itself to multiple uses, from productive to malicious ones.
ChatGPT and other artificial intelligences represent an added risk in the area of viruses, malware and phishing.
Let's find out how.
Artificial intelligence risks: are ChatGPT and viruses directly correlated?
The widespread use of artificial intelligence, if not adequately regulated, can pose significant risks to users, in particular:
- erosion of privacy
- increase in fake news
- creation of increasingly specific and personalized phishing content
and much more.
For example, we know that ChatGPT can be used to write code; but can artificial intelligence also be used to write computer viruses and malware? And could ChatGPT make it easy for malicious users to do so?
At the moment it seems that artificial intelligence systems cannot be used to write viruses and malware.
This is supported not only by experts, but also by ChatGPT itself, which answered a direct question on the matter as follows:
Creating viruses and malware is a misuse of artificial intelligence (AI) technologies. AI is a powerful technology that can be used for many positive purposes, such as medical diagnosis, machine translation, cybersecurity, and much more. However, like any technology, AI can also be used maliciously.
Creating viruses or malware with AI is an action that goes against the values and goals of OpenAI, which aims to provide technologies that are used for good. For this reason, OpenAI has established strict terms of use that prohibit the misuse of their technologies, including the creation of viruses and malware.
Furthermore, AI can also be used for cybersecurity, for example to detect and prevent attacks. Therefore, using AI to create viruses or malware goes against its very purpose and can have negative consequences for the entire computing community.
In summary, creating viruses and malware with AI is prohibited and goes against OpenAI's values and goals. AI should be used for good and to help society, not to cause harm.
It is up to us to decide whether to believe these statements, bearing in mind, however, that (at least for now) artificial intelligence is just a tool that follows orders.
So we users must, on the one hand, not abuse the tool and instead respect the criteria and rules that govern it, and on the other, pay attention to what we download and where we click.
ChatGPT and other cyber risks
A few months have passed since ChatGPT was introduced, and several malicious uses of the platform have already been discovered.
AI emulation
In this case ChatGPT is itself the victim. Cybercriminals have exploited the interest in the service to create convincing copies of its pages to use for classic phishing attacks.
The unaware user, instead of reaching the real AI, lands on an "evil twin" page whose aim is to harvest personal information to resell or use for illegal purposes.
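One simple defence against these "evil twin" pages is to check whether a domain merely resembles, rather than matches, the site it claims to be. The sketch below is a minimal illustration using only the Python standard library; the trusted-domain list and the similarity threshold are assumptions chosen for the example, not a real blocklist.

```python
from difflib import SequenceMatcher

# Illustrative list of domains the user actually trusts.
TRUSTED_DOMAINS = ["chat.openai.com", "openai.com"]

def looks_like_evil_twin(domain: str, threshold: float = 0.8) -> bool:
    """Return True if `domain` closely resembles, but is not, a trusted domain."""
    domain = domain.lower().strip()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the real site
    for trusted in TRUSTED_DOMAINS:
        # Ratio close to 1.0 means the strings are nearly identical.
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True  # near-miss: likely a typosquatted clone
    return False

print(looks_like_evil_twin("chat.openai.com"))  # real domain -> False
print(looks_like_evil_twin("chat.0penai.com"))  # look-alike -> True
print(looks_like_evil_twin("example.org"))      # unrelated -> False
```

A real browser extension or mail filter would combine this with certificate checks and curated typosquatting feeds, but the idea is the same: near-identical is more suspicious than completely different.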
ChatGPT as a phishing creator
The ease of use of ChatGPT and the quality of its output allow malicious users to create phishing texts. Unlike earlier attempts, texts created by AI are sophisticated, free of the crude linguistic errors that often made fake emails immediately recognisable.
ChatGPT and its "siblings" can then be used by criminals to build advertising campaigns, banner content and communications of all kinds, at a speed and quality hitherto impossible, all designed to induce the user to click on a specific link.
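Even when the prose is flawless, one classic phishing tell survives: the visible link text names one domain while the underlying href points to another. The sketch below shows this heuristic with the Python standard library; the sample email body and domain names are invented for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (visible text, href) pairs for every <a> tag in an HTML body."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (text, href)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def mismatched_links(html: str):
    """Return links whose visible text names a different domain than the href."""
    auditor = LinkAuditor()
    auditor.feed(html)
    suspicious = []
    for text, href in auditor.links:
        href_domain = urlparse(href).netloc.lower()
        # Only flag when the text itself looks like a domain or URL.
        if "." in text and href_domain and href_domain not in text.lower():
            suspicious.append((text, href))
    return suspicious

email_body = ('<p>Your account is locked. Visit '
              '<a href="http://chat-gpt-login.example.net">chat.openai.com</a>'
              ' now.</p>')
print(mismatched_links(email_body))
# [('chat.openai.com', 'http://chat-gpt-login.example.net')]
```

Perfect grammar says nothing about where a link actually leads, which is why checks like this remain useful against AI-written lures.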
ChatGPT as a fake news builder
Social media has shown in recent years that it is quite simple to turn fake news into a shared belief, something people accept and defend.
Various cases during the Covid epidemic and at different stages of the US presidential elections have shown that publishing information that merely looks credible is often enough to make it credible in the eyes of many.
The quality and speed of AI-generated output can once again be exploited by bad actors to increase the reach and quality of their attacks.
History repeats itself: ChatGPT and other artificial intelligences, like any innovation, bring both great risks and great opportunities.
It will be up to us and to the regulatory bodies to manage and contain the former, in order to get the best from the latter.
Artificial intelligence has arrived and does not look like a passing fad; it is therefore our turn to evolve and learn to defend ourselves as best we can from present and future cyber threats.