ChatGPT is dangerous: here are the risks
robort - 2023-06-06 09:42:13
What is the hot topic in tech in early 2023? Without a doubt, ChatGPT. Although the OpenAI chatbot has so far been accessible only as a prototype, it has already demonstrated enormous potential.
As often happens when an innovation becomes available at scale, its many benefits are accompanied by downsides. In short, the system inevitably carries risks when it is put to malicious use.
The risks tied to the spread of ChatGPT
Panda Security researchers have identified three main dangers associated with the rise of this artificial intelligence. These are neither overly pessimistic hypotheses nor distant scenarios, as we have already had occasion to report several times.
The first is phishing. ChatGPT's conversational abilities can be exploited to write correct, convincing emails and messages, free of the spelling and grammar mistakes that often give away malicious campaigns of this kind, mistakes that typically appear because the campaigns target countries whose language differs from that of their organizers.
There is also the risk of the chatbot being used to write potentially devastating malware and malicious code capable of bypassing common, widely deployed security controls, helping cybercriminals reach their goals.
The third and final danger concerns social engineering. By asking the AI the right questions, an attacker can gather useful information to prepare a targeted attack, spear-phishing included, aimed at a single person or a specific employee and, from there, penetrate the systems of an entire organization or company.
On top of these comes the almost inevitable spread of fake mirror sites and fake social profiles through which criminals impersonate OpenAI to trick less attentive users into downloading malicious executables.
The advice is the same as always: rely exclusively on official resources (in this case openai.com) and raise your guard whenever an online activity or an incoming communication contains even a single suspicious detail. Relying on an effective cybersecurity solution also remains good practice.
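To make the "official resources only" advice concrete, here is a minimal sketch in Python of the kind of check a cautious user or tool could apply before trusting a link: it verifies that a URL's host really is openai.com or one of its subdomains, rather than a lookalike domain. The helper name and the sample URLs are purely illustrative assumptions, not part of any OpenAI or Panda Security tooling.

from urllib.parse import urlparse

OFFICIAL_DOMAIN = "openai.com"

def points_to_official_site(url: str) -> bool:
    """Return True only if the URL's host is openai.com or a real subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    # Exact match or a genuine subdomain; rejects lookalikes such as
    # "openai.com.evil.example" or "openai-setup.com".
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

# Hypothetical examples:
print(points_to_official_site("https://chat.openai.com/"))             # True
print(points_to_official_site("https://openai.com.download-now.io"))   # False
print(points_to_official_site("https://openai-setup.com/install.exe")) # False

A check like this only confirms the domain in the link itself; it does not replace attention to the other suspicious details mentioned above, such as unexpected attachments or requests for credentials.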