In 2022 alone, a total of 4,100 publicly disclosed data breaches occurred, exposing some 22 billion records. All this despite the fact that organizations worldwide spent a record-breaking $150 billion on cybersecurity in 2021. The software itself is changing, too. The rise of artificial intelligence in general, and generative AI in particular, is fundamentally changing the way companies use software. The increasing use of AI, in turn, makes software attacks more complex and the software itself more vulnerable. How, then, do companies secure their software and data? What companies seek to achieve from their security programs must evolve, just as the way companies use data and software has evolved. It's time for their cybersecurity efforts to change. This article covers three such changes that companies can make to adapt to the growing uncertainty of the digital world.
What is the point of cybersecurity?
The question may seem basic, but it touches on one of the most important issues facing companies around the world. In fact, this question is particularly critical because, despite repeated attempts to secure digital systems over the last few decades, cybersecurity risks remain widespread.
In 2022 alone, a total of 4,100 publicly disclosed data breaches occurred, exposing about 22 billion records. All this despite the fact that organizations around the world spent a record-breaking $150 billion on cybersecurity in 2021.
The software itself is also changing. The rise of artificial intelligence in general, and generative AI in particular, is fundamentally changing the way companies use software. The increasing use of AI, in turn, makes the software attack surface more complex and the software itself more vulnerable.
How, then, do companies secure their software and data?
The answer is not that cybersecurity is a futile endeavor – far from it. Instead, what companies seek to achieve from their security programs must evolve, just as the way companies use data and software has evolved. It’s time for their cybersecurity efforts to change as well.
More specifically, companies can adapt to the growing uncertainty of the digital world by making three changes to the ways they support their software:
3 Ways Companies Can Improve Their Cybersecurity
First, avoiding failure should no longer be the primary goal of cybersecurity programs.
Software systems, AI, and the data they rely on are all so complex and fragile that failure is truly a feature of these systems, not a bug. Because AI systems are inherently probabilistic, for example, they are guaranteed to make mistakes sometimes (ideally, though, less often than humans do). The same is true of software systems, not because they are probabilistic, but because as their complexity increases, so do their weaknesses. For this reason, cybersecurity programs must shift their focus from preventing incidents toward recognizing and responding to failures when they inevitably occur.
The adoption of so-called zero trust architectures, which assume that any system can be compromised by adversaries, is one of many ways to identify and respond to these risks. Even the US government has a zero trust strategy, which it is implementing across departments and agencies. But adopting zero trust architectures is only one of many changes that must occur on the road to accepting failures in software systems. Companies should also invest more in their incident response programs, red team their software and AI against more types of failures by simulating potential attacks, strengthen in-house incident response planning for both traditional software and AI systems, and more.
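The core idea of zero trust can be made concrete with a short sketch. In the hypothetical example below (the token store, permission table, and function names are illustrative, not a reference implementation), every request is authenticated and authorized on its own merits, and the fact that a request originates from inside the corporate network is deliberately ignored:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token: str
    resource: str
    from_internal_network: bool  # deliberately never consulted below

# Hypothetical credential and permission stores, for illustration only.
VALID_TOKENS = {"alice": "tok-123"}
PERMISSIONS = {"alice": {"billing-db"}}

def authorize(req: Request) -> bool:
    """Zero trust: verify identity and permission on every request,
    granting no implicit trust to requests from 'inside' the network."""
    if VALID_TOKENS.get(req.user) != req.token:
        return False  # unauthenticated, regardless of network location
    return req.resource in PERMISSIONS.get(req.user, set())
```

The point of the sketch is the omission: `from_internal_network` plays no role in the decision, which is precisely what distinguishes zero trust from perimeter-based security.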
Second, companies must also expand their definition of “failure” for software and data systems to encompass more than just security risks.
Digital failures are no longer just related to security, but now involve many other potential harms, from performance errors to privacy issues, discrimination, and more. In fact, with the rapid adoption of AI, the definition of a security incident itself is no longer clear.
The weights (the trained “knowledge” stored in a model) of Meta’s generative AI model LLaMA, for example, were leaked to the public in March, giving any user the ability to run a multibillion-parameter model on their laptop. The leak may have started as a security incident, but it also raised new intellectual property concerns about who has the right to use the model (IP theft) and created privacy risks for the data the model was trained on (because a model’s parameters can help recreate its training data). And now that it is freely accessible, the model can be used more widely to create and spread disinformation. Simply put, an adversary does not need to compromise the integrity or availability of a software system for it to fail; changing data, complex interdependencies, and unintended uses of AI systems can cause failures on their own.
Cybersecurity programs cannot be relegated to focusing only on security failures; this will, in practice, make information security teams less effective over time as the scope of software failures grows. Instead, cybersecurity programs should be part of broader efforts focused on overall risk management – assessing how failures occur and managing them, whether the failure is caused by an adversary or not.
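One way to operationalize this broader definition of failure is to triage incidents by the type of harm they cause rather than assuming every failure belongs to the security team. The sketch below is a hypothetical illustration (the harm categories and team names are assumptions, not a standard taxonomy):

```python
from dataclasses import dataclass

# Assumed harm categories, mirroring the article's examples:
# security, privacy, performance, and discrimination.
HARM_TYPES = {"security", "privacy", "performance", "discrimination"}

# Hypothetical routing table: which team owns which class of failure.
OWNERS = {
    "security": "infosec",
    "privacy": "privacy-legal",
    "performance": "sre",
    "discrimination": "responsible-ai",
}

@dataclass
class Failure:
    system: str
    harm_type: str
    description: str

def triage(failure: Failure) -> str:
    """Route any digital failure -- not just security incidents --
    to the team responsible for that class of harm."""
    if failure.harm_type not in HARM_TYPES:
        raise ValueError(f"unknown harm type: {failure.harm_type}")
    return OWNERS[failure.harm_type]
```

A biased lending model, for instance, would be routed to a responsible-AI team rather than information security, even though both kinds of failure flow through the same risk-management process.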
This, in turn, means that information security and risk management teams must include personnel with a wide range of skills beyond just security. Privacy experts, lawyers, data engineers, and others all play an important role in protecting software and data from new and evolving threats.
Third, monitoring failures should be one of the highest priority efforts for all cybersecurity teams.
This, unfortunately, is not currently the case. Last year, for example, companies took an average of 277 days, or roughly nine months, to identify and contain a breach. And it is very common for organizations to learn about breaches and vulnerabilities in their systems not from their own security programs but from third parties. Today’s reliance on outsiders for detection is itself a tacit admission that companies are not doing all they need to do to understand when and how their software fails.
What this means in practice is that every software system and every database needs a corresponding monitoring plan and metrics for potential failures. In fact, this approach is already gaining traction in the world of risk management for AI systems. The National Institute of Standards and Technology (NIST), for example, released its AI Risk Management Framework (AI RMF) earlier this year, which clearly recommends that organizations map the potential harms AI systems can create and develop a corresponding plan to measure and manage each harm. (Full disclosure: I received a grant from NIST to support the development of the AI RMF.) Applying this best practice to software systems and databases writ large is a direct way to prepare for real-world failures.
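A monitoring plan of this kind can be as simple as a table mapping each system to the metrics that signal its potential failures, each with a threshold and a way to measure it. The sketch below is a minimal illustration under assumed names and thresholds (the system, metric names, and measured values are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Metric:
    name: str
    threshold: float
    measure: Callable[[], float]  # returns the current observed value

# Hypothetical plan: each system maps its potential harms to concrete,
# measurable metrics, in the spirit of the AI RMF's map/measure/manage steps.
monitoring_plan = {
    "credit-model": [
        Metric("false_positive_rate", 0.05, lambda: 0.08),  # over threshold
        Metric("training_data_drift", 0.10, lambda: 0.02),  # within bounds
    ],
}

def failing_metrics(system: str) -> list[str]:
    """Return the names of metrics whose measured value exceeds the
    threshold set in the system's monitoring plan."""
    return [m.name for m in monitoring_plan.get(system, [])
            if m.measure() > m.threshold]
```

The value of writing the plan down in this form is that detection becomes a routine internal check rather than something a company learns about from outsiders nine months after the fact.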
This does not mean, however, that third parties cannot play an important role in identifying incidents. Quite the opposite: Third parties play an important part in detecting failures. Activities such as “bug bounties,” where rewards are offered in exchange for identifying risks, are a proven way to incentivize risk detection, as are clear channels for consumers or users to report failures when they occur. In general, however, third parties cannot continue to play the primary role in identifying digital failures.
. . .
Are the above recommendations enough? Certainly not.
For cybersecurity programs to keep up with the growing risks posed by software systems, more work needs to be done. More resources, for example, are needed at all stages of the data and software life cycle, from monitoring data integrity over time to ensuring that security is not an afterthought through practices such as DevSecOps, an approach that integrates security throughout the development life cycle, and more. As the use of AI grows, data science programs must also invest more resources in risk management.
For now, however, failure is an increasingly central feature of all digital systems, as companies continue to learn the hard way. Cybersecurity programs must recognize this fact in practice, if for no other reason than that it is already true.