Although sophisticated hackers and AI-powered cyberattacks tend to hijack the headlines, one thing is clear: The biggest threat to cybersecurity is human error, which accounts for more than 80% of incidents. This is despite the massive increase in organizational cyber training over the past decade, and heightened awareness and risk mitigation across businesses and industries. Could AI come to the rescue? In other words, can artificial intelligence be the tool that helps businesses keep human negligence in check? In this article, the author covers the pros and cons of relying on machine intelligence to de-risk human behavior.
Cybercrime is expected to cost the world $10 trillion this year, exceeding the GDP of every country except the US and China. Moreover, that figure is estimated to grow to almost $24 trillion over the next four years.
Although sophisticated hackers and AI-powered cyberattacks tend to hijack the headlines, one thing is clear: The biggest threat is human error, which accounts for more than 80% of incidents. This, despite the exponential increase in organizational cyber training over the past decade and heightened awareness and risk mitigation across businesses and industries.
Could AI come to the rescue? In other words, can artificial intelligence be the tool that helps businesses keep human negligence in check? And if so, what are the pros and cons of relying on machine intelligence to de-risk human behavior?
Unsurprisingly, there is now a great deal of interest in AI-driven cybersecurity, with estimates suggesting that the market for AI cybersecurity tools will grow from just $4 billion in 2017 to nearly $35 billion by 2025. These tools typically involve machine learning, deep learning, and natural language processing to detect malicious activities, cyber-anomalies, fraud, or intrusions. Most of them focus on uncovering changes in the data patterns of ecosystems such as enterprise cloud, platform, and data warehouse assets, with a level of sensitivity and granularity that often eludes human observers.
For example, supervised machine learning algorithms can classify malicious email attacks with 98% accuracy, looking for "similar" features based on human classification or encoding, while deep learning network intrusion detection has achieved 99.9% accuracy. As for natural language processing, it shows high reliability and accuracy in detecting phishing activity and malware through keyword extraction in email domains and messages, where human intuition often fails.
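To make the idea concrete, here is a minimal sketch of how such a supervised text classifier might flag suspicious emails. It is not any vendor's actual product: the toy messages and labels below are invented for illustration, and real tools train on millions of labeled samples with far richer features.

```python
# Minimal sketch of a supervised phishing-email classifier.
# The tiny dataset is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password here immediately",
    "Quarterly budget review meeting moved to Thursday at 3pm",
    "You have won a prize, click this link to claim your reward now",
    "Please find attached the slides from yesterday's team presentation",
    "Urgent: confirm your banking details to avoid account closure",
    "Lunch order for the offsite is due by noon tomorrow",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each message into weighted keywords; logistic regression
# learns which keywords ("verify", "urgent", "click") signal phishing.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Reset your password now by clicking this urgent link"]))
# -> [1], i.e. flagged as likely phishing
```

The same pattern, scaled up and combined with network and behavioral signals, underlies many of the commercial detection tools described above.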
As scholars note, however, relying on AI to protect businesses from cyberattacks is a "double-edged sword." Above all, research shows that injecting just 8% of "poisoned" or faulty training data can reduce an AI's accuracy by a whopping 75%, not unlike the way users corrupt conversational user interfaces or large language models by injecting sexist preferences or racist language into the training data. As ChatGPT often says, "as a language model, I'm only as accurate as the information I'm given," creating a perennial cat-and-mouse game in which the AI must keep learning as quickly and as continuously as the attackers it faces. Indeed, an AI's reliability and accuracy in preventing past attacks is often a poor predictor of future attacks.
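The poisoning mechanism itself is simple to illustrate. The sketch below flips the labels of 8% of a synthetic training set and compares a clean and a poisoned model; the data, model, and resulting numbers are illustrative assumptions, not the figures from the research cited above, since the actual damage depends heavily on the dataset and the attack.

```python
# Toy illustration of training-data poisoning via label flipping.
# Synthetic data; the real-world impact varies by model and dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model trained after an attacker flips 8% of the training labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.08 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```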
Moreover, trusting AI will likely lead people to delegate unwanted tasks to it without understanding or supervision, especially when the AI is not explainable (which, paradoxically, often coincides with the highest levels of accuracy). Over-reliance on AI is well documented, especially when people are under time pressure, and it often leads to a diffusion of responsibility, which in turn increases reckless and careless behavior. As a result, instead of fostering the much-needed collaboration between human and machine intelligence, the unintended consequence is that the latter ends up diluting the former.
As I argue in my latest book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, there appears to be a general tendency to welcome AI advances as an excuse for our own intellectual stagnation. Cybersecurity is no exception, in the sense that we are happy to welcome technological advances that protect us from our own careless behavior and let us "off the hook," because they allow us to shift the blame from human to AI error. Clearly, this is not a happy outcome for businesses, so the need to educate, alert, train, and manage human behavior remains as important as ever, if not more so.
Importantly, organizations must continue their efforts to increase employee awareness of the ever-changing landscape of risks, which will only grow in complexity and uncertainty as AI adoption deepens on both the offensive and defensive ends. Although it is impossible to completely extinguish risks or eliminate threats, the most important aspect of trust is not whether we trust AI or people, but whether we trust one business, brand, or platform over another. This calls not for a choice between trusting people or artificial intelligence to keep businesses safe from attacks, but for a culture that manages to leverage both technological innovation and human expertise, in the hope of being less vulnerable than others.
In the end, it's a matter of leadership: having not just the right technical skills or expertise, but also the right safety profile at the top of the organization, and especially on boards. As studies have shown for decades, organizations led by leaders who are conscientious, risk-aware, and ethical are more likely to provide a culture and climate of safety for their employees, in which risks are still possible but less likely. Such companies can certainly be expected to use AI to keep their organizations safe, but it is their ability to educate workers and improve human behavior that will make them less vulnerable to attacks and negligence. As Samuel Johnson rightly noted, long before cybersecurity became a concern, "the chains of habit are too weak to be felt until they are too strong to be broken."