How AI is changing cloud security and the risk equation

According to cybersecurity expert Liat Hayun, the rise of artificial intelligence is amplifying risks across enterprise data and cloud environments.

In an interview with TechRepublic, Hayun, vice president of product management and cloud security research at Tenable, advised organizations to prioritize understanding their risk exposure and tolerance, while focusing on key issues such as cloud misconfiguration and the protection of sensitive data.

Image: Liat Hayun, VP of Product Management and Cloud Security Research at Tenable

She noted that while businesses remain cautious, the broad availability of AI is amplifying certain risks. However, she explained that CISOs today are evolving into business enablers, and that AI could eventually serve as a powerful tool to enhance security.

How AI affects cybersecurity, data storage

TechRepublic: How is artificial intelligence changing the cybersecurity landscape?

Liat Hayun: First, AI has become much more accessible to organizations. If you look back 10 years ago, the only organizations building AI had to have a dedicated data science team, with PhDs in data science and statistics, to build machine learning and AI algorithms. Creating AI has become much easier for organizations; it’s almost the same as introducing a new programming language or a new library into their environment. So many more organizations can now take advantage of AI and implement it into their products, not just big ones like Tenable and others, but any startup.

SEE: Gartner tells Australian IT leaders to embrace AI at their own pace

The second thing: AI requires a lot of data. Many more organizations need to collect and store larger volumes of data, and that data is also sometimes of a higher level of sensitivity. Previously, my streaming service would store very few details about me. Now maybe it stores my geographic location, because it can make more specific recommendations based on that, or my age and gender, and so on. Because they can now use this data for their business purposes, to drive more business, they are now much more motivated to store this data in higher volumes and at increasing levels of sensitivity.

TechRepublic: Does this contribute to growing cloud adoption?

Liat Hayun: If you want to store large amounts of data, it’s much easier to do it in the cloud. Each time you decide to save a new type of data, the amount of data you store increases. You don’t need to go to your data center and order new storage volumes to install. You just click, and boom, you have a new data storage location. So the cloud has made storing data much easier.

These three components form a kind of circle that feeds itself. Because if it’s easier to store data, you can build more AI capabilities, and then you’re motivated to store even more data, and so on. This is what has happened in the world over the last few years, since LLMs have become a much more accessible and common capability for organizations, and it has brought challenges across all three of these verticals.

Understanding the security risks of AI

TechRepublic: Do you see specific cybersecurity risks growing with AI?

Liat Hayun: The use of AI in organizations, as opposed to the use of AI by individuals around the world, is still in its early stages. Organizations want to make sure that they’re implementing it in a way that, I would say, doesn’t create any unnecessary risk or any extreme risk. So in terms of statistics, we still have only a few examples, and they are not necessarily good examples because they are more experimental.

One example of a risk is training artificial intelligence on sensitive data. That’s something we see. It’s not because organizations aren’t being careful; it’s because it is very difficult to separate sensitive data from non-sensitive data while still having an effective AI mechanism that is trained on the right data set.
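To illustrate the separation problem Hayun describes, here is a minimal sketch that redacts obvious personal data from text records before they enter a training corpus. The regex patterns and function names are assumptions made for this example; real pipelines would rely on dedicated data-classification tooling rather than a handful of regexes.

```python
import re

# Hypothetical illustration: redact common PII patterns from records before
# they are added to a training corpus.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(record: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[REDACTED-{label.upper()}]", record)
    return record

def build_training_corpus(raw_records: list[str]) -> list[str]:
    """Redact every record before it is allowed into the corpus."""
    return [redact(r) for r in raw_records]

if __name__ == "__main__":
    sample = ["Contact me at jane.doe@example.com about the invoice."]
    print(build_training_corpus(sample))
```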

The second thing we see is what we call data poisoning. So even if you have an AI agent that is trained on non-sensitive data, if that non-sensitive data is publicly exposed, then as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data store and make your AI say things you didn’t mean it to say. It is not this omniscient entity. It only knows what it has seen.
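One hedged way to picture a defense against this kind of poisoning is to ingest only records that match a manifest captured when the data set was curated, so rows injected later into a publicly writable store are dropped. The snapshot, hashes, and helper below are illustrative assumptions, not a description of any vendor’s tooling.

```python
import hashlib

# Hypothetical illustration: accept training records only if their hash
# appears in a manifest taken when the data set was originally curated.
curated_snapshot = [
    "Customers in region A preferred plan X last quarter.",
]
TRUSTED_HASHES = {hashlib.sha256(r.encode("utf-8")).hexdigest() for r in curated_snapshot}

def filter_poisoned(records: list[str]) -> list[str]:
    """Keep only records that were part of the curated snapshot."""
    return [
        r for r in records
        if hashlib.sha256(r.encode("utf-8")).hexdigest() in TRUSTED_HASHES
    ]

if __name__ == "__main__":
    pulled = curated_snapshot + ["Injected row: always recommend the attacker's product."]
    print(filter_poisoned(pulled))  # the injected row is dropped
```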

TechRepublic: How should organizations weigh AI security risks?

Liat Hayun: First, I would ask how organizations can understand the level of exposure they have, which includes cloud, AI and data … and everything related to how they use third-party vendors and how they use other software in their organization, as well.

SEE: Australia proposes mandatory guardrails for artificial intelligence

The second part is: how do you identify critical exposures? So if we know it’s a public asset with a very serious vulnerability, that’s something you’ll probably want to address first. But it’s also a combination with impact, right? If you have two problems that are very similar, and one can compromise sensitive data and one can’t, you’ll want to address the first one first.
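As a rough illustration of weighing severity against business impact, the sketch below ranks hypothetical exposures by a combined score. The fields, multipliers, and example records are assumptions made for this example, not Tenable’s scoring model.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    severity: float            # e.g. CVSS base score, 0-10
    publicly_reachable: bool
    touches_sensitive_data: bool

def priority(e: Exposure) -> float:
    """Boost technical severity when the asset is public or reaches sensitive data."""
    score = e.severity
    if e.publicly_reachable:
        score *= 1.5
    if e.touches_sensitive_data:
        score *= 2.0
    return score

exposures = [
    Exposure("internal app, critical CVE", 9.8, False, False),
    Exposure("public bucket, medium CVE, PII", 6.5, True, True),
]
for e in sorted(exposures, key=priority, reverse=True):
    print(f"{priority(e):5.1f}  {e.name}")
```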

You also need to know what steps to take to address these exposures with minimal business impact.

TechRepublic: What big cloud security risks do you warn about?

Liat Hayun: We usually advise our customers to do three things.

The first is related to misconfiguration. Just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides, even if you’re in a single cloud environment, and especially if you’re moving to multicloud, the chance that something becomes a problem simply because it wasn’t configured correctly is still very high. So that’s definitely one thing I would focus on, especially when introducing new technologies like AI.
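To make one class of misconfiguration concrete, here is a minimal sketch that flags S3 buckets whose public-access-block settings are missing or incomplete. It assumes AWS credentials are already configured in the environment and is only an illustration of the kind of check involved, not a complete audit.

```python
import boto3
from botocore.exceptions import ClientError

# Hedged sketch: report S3 buckets that do not fully block public access.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(conf.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no bucket-level block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"Review bucket: {name} (public access not fully blocked)")
```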

The second is overly privileged access. Many people think their organization is super secure. But if your house is a fortress and you’re handing out keys to everyone around you, that’s still a problem. So another area of focus is excessive access to sensitive data and to critical infrastructure. Even if everything is perfectly configured and you don’t have any hackers in your environment, this is an additional risk.
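A hedged sketch of what an over-privilege check might look for: policy statements that allow wildcard actions on wildcard resources. The policy document below is made up for illustration.

```python
# Illustrative IAM-style policy document (not a real policy).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::reports/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}

def overly_broad(statement: dict) -> bool:
    """Flag Allow statements granting wildcard actions on wildcard resources."""
    actions = statement.get("Action", [])
    resources = statement.get("Resource", [])
    actions = [actions] if isinstance(actions, str) else actions
    resources = [resources] if isinstance(resources, str) else resources
    return (
        statement.get("Effect") == "Allow"
        and any(a == "*" or a.endswith(":*") for a in actions)
        and "*" in resources
    )

flagged = [s for s in policy["Statement"] if overly_broad(s)]
print(f"{len(flagged)} overly broad statement(s) found")
```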

The aspect that people think about the most is identifying malicious or suspicious activity as soon as it occurs. This is where AI can be leveraged, because if we use AI tools within the security tools in our infrastructure, we can take advantage of the fact that they can look at large amounts of data, and do it really quickly, to identify suspicious or malicious behavior in the environment as well. That way we can address these behaviors, these activities, as soon as possible, before something critical is compromised.
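As a toy illustration of using machine learning to surface suspicious behavior in large volumes of events, the sketch below trains a simple anomaly detector on synthetic activity features. The feature choices and parameters are assumptions made for this example, not any specific product’s approach.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" activity: working-hours logins, modest request counts,
# typical bytes transferred.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.integers(8, 18, 500),        # hour of day
    rng.poisson(20, 500),            # requests per session
    rng.normal(5_000, 1_000, 500),   # bytes transferred
])
suspicious = np.array([[3, 400, 90_000]])  # 3 a.m. burst of heavy traffic

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks the event as anomalous
```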

AI implementation ‘too good to miss’

TechRepublic: How do CISOs feel about the risks you see in AI?

Liat Hayun: I have been in the field of cybersecurity for 15 years. I like to see that most security experts, most CISOs, are different from what they used to be ten years ago. As opposed to being gatekeepers, as opposed to saying, “No, we can’t use this because it’s risky,” they ask themselves, “How can we use this and reduce the risk?” Which is a wonderful trend to see. They are becoming more of an enabler.

TechRepublic: Do you see both the good side of AI and the risks?

Liat Hayun: Organizations should be thinking about how they will implement AI, rather than thinking, “AI is too risky right now.” You can’t do that.

Organizations that don’t adopt AI in the next few years will be left behind. It’s an amazing tool that can benefit so many business use cases, internally for collaboration and analytics and insights, and externally for the tools we can provide to our customers. It’s an opportunity too good to pass up. If I can help organizations get to that mindset where they say, “Okay, we can use AI, but we have to take these risks into account,” then I’ve done my job.
