The New Risks ChatGPT Poses to Cybersecurity



The FBI’s 2021 Internet Crime Report found that phishing is the most common IT threat in America. From a hacker’s perspective, ChatGPT is a game changer, giving hackers around the world near-fluency in English to bolster their phishing campaigns. Bad actors can also trick the AI into generating hacking code. And, of course, there is the potential for ChatGPT itself to be hacked, spreading dangerous misinformation and political propaganda. This article examines these new risks, explores the training and tools cybersecurity professionals need to respond, and calls for government oversight to ensure that the use of AI does not undermine cybersecurity efforts.

When OpenAI launched its revolutionary AI language model ChatGPT in November, millions of users tested its capabilities. For many, however, curiosity quickly gave way to serious concern about the tool’s potential to advance the agendas of bad actors. In particular, ChatGPT opens up new ways for hackers to potentially breach advanced cybersecurity software. For a sector already reeling from a 38% increase in global data breaches in 2022, it is critical that leaders recognize the growing impact of AI and act accordingly.

Before we can come up with solutions, we need to recognize the significant threats that arise from the widespread use of ChatGPT. This article will examine these new risks, explore the training and tools needed for cybersecurity professionals to respond, and call for government oversight to ensure that the use of AI does not harm cybersecurity efforts.

AI-Generated Phishing Scams

While older versions of the language-based AI have been open source (or publicly available) for years, ChatGPT is far and away the most advanced iteration to date. In particular, ChatGPT’s ability to communicate seamlessly with users without spelling, grammar, and verb tense errors makes it seem like there is a real person on the other side of the chat window. From a hacker’s perspective, ChatGPT is a game changer.

The FBI’s 2021 Internet Crime Report found that phishing is the most common IT threat in America. However, most phishing scams are easy to recognize, as they are often filled with misspellings, bad grammar, and generally awkward phrasing, especially those originating from countries where English is not the bad actor’s first language. ChatGPT can give hackers all over the world near-fluency in English, strengthening their phishing campaigns.

For cybersecurity leaders, the rise of sophisticated phishing attacks requires immediate attention and actionable solutions. Leaders should equip their IT teams with tools that can determine whether text was generated by ChatGPT or by a human, specifically targeting incoming “cold” emails. Fortunately, “ChatGPT detector” technology is already available and is likely to develop alongside ChatGPT itself. Ideally, IT infrastructure would integrate AI-detection software, automatically screening and flagging AI-generated emails. In addition, all employees should be regularly trained and retrained in the latest cybersecurity awareness and prevention skills, with specific attention to AI-supported phishing scams. That said, the onus is on the sector and the wider public to continue advocating for advanced detection tools, rather than simply marveling at AI’s expanding capabilities.
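As a minimal sketch of what such automatic screening could look like, the snippet below wires a detector score into an inbound-email hook that flags likely AI-generated “cold” emails for human review. The detector itself (`score_fn`) is a hypothetical stand-in: a real deployment would call a trained AI-text classifier or detection service; here a trivial placeholder is used so the flow is runnable.

```python
# Sketch: route inbound emails through an AI-text detector and flag
# high-scoring messages for a human review queue. The names
# screen_email, ScreenResult, and dummy_score are illustrative, not
# part of any real product's API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ScreenResult:
    flagged: bool
    score: float
    reason: str


def screen_email(body: str,
                 score_fn: Callable[[str], float],
                 threshold: float = 0.8) -> ScreenResult:
    """Flag an email when the detector's AI-likelihood score
    meets or exceeds the threshold."""
    score = score_fn(body)
    if score >= threshold:
        return ScreenResult(True, score, "likely AI-generated; route to review queue")
    return ScreenResult(False, score, "passed automated screening")


def dummy_score(body: str) -> float:
    # Placeholder detector: a real pipeline would call a trained
    # AI-text classifier here instead of a keyword check.
    return 0.95 if "act now" in body.lower() else 0.1


result = screen_email("Please act now to verify your account.", dummy_score)
print(result.flagged)  # prints True
```

The key design point is that the detector is pluggable: as “ChatGPT detector” tools mature, `score_fn` can be swapped without changing the screening logic around it.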

How ChatGPT Writes Malicious Code

ChatGPT is adept at generating code and other computer programming tools, but the AI is programmed not to generate code it considers malicious or intended for hacking purposes. When asked for hacking code, ChatGPT informs the user that its purpose is to “assist in useful and ethical tasks while adhering to ethical guidelines and policies.”

However, manipulating ChatGPT is certainly possible, and with enough creative prompting and prodding, bad actors can trick the AI into generating hacking code. In fact, hackers are already attempting to use it for this purpose.

For example, the Israeli security company Check Point recently discovered a thread on a well-known underground hacking forum from a hacker who claimed to be testing the chatbot to recreate malware strains. When one such thread has been discovered, it is safe to assume there are many more out there on the global and “dark” webs. Cybersecurity professionals need proper training (i.e., continuous upskilling) and resources to address ever-growing threats, AI-generated or otherwise.

There is also the opportunity to equip cybersecurity professionals with their own AI technology to better detect and defend against AI-generated hacker code. While public discourse has been quick to lament the power that ChatGPT gives to bad actors, it is important to remember that this same power applies equally to good actors. In addition to trying to prevent ChatGPT-related threats, cybersecurity training should also include instruction on how ChatGPT can be an important tool in the arsenal of cybersecurity professionals. As this rapid technological evolution creates a new era of cybersecurity threats, we must explore these possibilities and develop new training to keep up. Additionally, software developers should look to create generative AI that is even more powerful than ChatGPT, designed specifically for human-staffed security operations centers (SOCs).
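To make the defensive side of this concrete, here is a hedged sketch of how an SOC might use a generative model to triage captured code samples. The `ask_model` function is a hypothetical stand-in for a call to a vetted LLM; it is stubbed with simple keyword heuristics here so the pipeline runs end to end.

```python
# Sketch: ask a (stubbed) generative model whether a captured code
# sample looks malicious, so analysts can prioritize review. The
# function names and markers are illustrative assumptions, not a
# real vendor API.

SUSPICIOUS_MARKERS = ("keylogger", "exfiltrate", "reverse shell", "ransom")


def ask_model(prompt: str) -> str:
    # Stub: a real SOC deployment would send `prompt` to an LLM and
    # parse its verdict; here we approximate with keyword matching.
    lowered = prompt.lower()
    if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
        return "MALICIOUS"
    return "BENIGN"


def triage_sample(code_sample: str) -> str:
    """Return 'escalate' for samples the model deems malicious,
    'archive' otherwise."""
    verdict = ask_model(
        "Classify the intent of this code as MALICIOUS or BENIGN:\n" + code_sample
    )
    return "escalate" if verdict == "MALICIOUS" else "archive"


print(triage_sample("import socket  # open reverse shell to c2"))  # prints escalate
```

The point of the sketch is the workflow, not the stub: the same triage loop works whether the verdict comes from keyword rules or from a purpose-built SOC model of the kind the paragraph above calls for.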

Regulating the Use and Capabilities of AI

While there is significant discussion about bad actors using AI to help hack external software, what is rarely discussed is the potential for ChatGPT itself to be hacked. From there, bad actors can spread misinformation from a source that is often seen as, and designed to be, impartial.

ChatGPT has reportedly taken steps to identify and avoid answering politically charged questions. However, if the AI is hacked and manipulated to provide information that appears objective but is in fact well-masked bias or a distorted view, it can become a dangerous propaganda machine. The ability of a compromised ChatGPT to spread misinformation is worrisome and may create a need for enhanced government oversight of advanced AI tools and companies like OpenAI.

The Biden administration released a “Blueprint for an AI Bill of Rights,” but the stakes are higher than ever with the launch of ChatGPT. To build on this, we need oversight to ensure that OpenAI and other companies launching generative AI products regularly review their security features to reduce the risk of their being hacked. In addition, new AI models should be required to meet a threshold of minimum security measures before being open-sourced. For example, Bing launched its own generative AI in early March, Meta is finalizing a powerful tool of its own, and more are expected from other tech giants.

As people marvel at the potential of ChatGPT and the emerging AI market, and cybersecurity pros worry about it, checks and balances are essential to ensure the technology does not spiral out of control. Beyond cybersecurity leaders retraining and re-equipping their workforces, and the government taking on a greater regulatory role, a general shift in our thinking about AI’s behavior and capabilities is required.

We need to rethink what the fundamental basis for AI – especially open-sourced examples like ChatGPT – looks like. Before making a tool available to the public, developers need to ask themselves if its capabilities are ethical. Does the new tool have a foundational “programmatic core” that truly prohibits manipulation? How do we establish the standards that require this, and how do we hold developers accountable for failing to meet those standards? Organizations have established agnostic standards to ensure that exchanges in various technologies – from edtech to blockchains and even digital wallets – are safe and ethical. It is important that we apply the same principles to generative AI.

The chatter around ChatGPT is not going away, and as the technology advances, it is imperative that technology leaders start thinking about what it means for their team, their company, and society as a whole. Otherwise, they will not only fall behind their competitors in adopting and deploying generative AI to improve business outcomes, they will also fail to anticipate and defend against the next generation of hackers who can already manipulate this technology for personal gain. With reputations and profits on the line, the industry needs to come together to put the right protections in place and make the ChatGPT revolution something to be welcomed, not feared.
