Topics in this article

Advances in machine learning, natural language processing and computing capacity paved the way for the era of AI. Now, as the technology keeps rapidly evolving, it is changing the way we work — with the potential to revolutionize entire industries.

A Gartner Research report, Tech CEO Insight: Adoption Rates for AI and GenAI Across Verticals, shares highlights from the 2024 Gartner CIO and Tech Executive Survey, and states: “Seventy-three percent of respondents to the same survey indicated they will increase funding for AI in 2024. (Twenty-six percent said funding would stay at 2023 levels; just 1% will decrease spending.)”*

These are the highest levels of funding and rates of adoption that we have seen for AI until now.

At the same time, GenAI has surged in popularity thanks to the advent of consumer-friendly large language models (LLMs) like ChatGPT, adding to the wave of AI innovation. NTT DATA’s landmark Global GenAI Report reveals that 99% of organizations are now planning further investment in GenAI. 

Understanding AI risks

As more and more organizations adopt AI and GenAI solutions, it’s becoming clear that we are just scratching the surface of the immense value that these technologies can deliver. However, the rapid adoption of the technology has brought with it a significant increase in security vulnerabilities and AI-enabled threats.

Cybercriminals are exploiting weak security controls to manipulate AI models and compromise the data integrity and reliability of AI-enabled solutions.

According to a Gartner press release, “Artificial intelligence (AI)-enhanced malicious attacks are the top emerging risk for enterprises in the third quarter of 2024, according to Gartner, Inc. It’s the third consecutive quarter with these attacks being the top of emerging risk.”** 

Understanding these security risks is essential to unlocking the true potential of AI. We need the right policies and controls to deal proactively with the risks associated with AI models, protect the huge volumes of data being generated and secure the supporting infrastructure. 

Be aware of how AI risks manifest

Cybercriminals are always probing for vulnerabilities that would allow them to compromise the data and algorithms of AI systems. Risks can manifest as data poisoning (introducing biased or corrupted data) or adversarial attacks such as prompt injection or jailbreaks that manipulate algorithms.

Recent jailbreaking incidents involving the Chinese AI startup DeepSeek and the AI coding assistant GitHub Copilot, among others, serve as grim reminders of how quickly things can go wrong.

AI systems and AI-powered applications are as vulnerable to cyberattacks as any other technology. This is a growing problem as these systems become more advanced and autonomous. Breaches could have catastrophic consequences both for organizations and for society at large.

We must also be careful about using unrestricted or unvetted AI agents, such as note-taking AI assistants that can join company calls, or high-resolution photo-enhancing applications. These agents add value by making business processes such as recruiting and procurement more efficient, but only if they adhere to proper security controls and data-governance frameworks. It’s therefore critical to have a well-defined organizational policy in place regarding AI usage, data governance and privacy. 

Manage AI risks proactively across your organization

Securing AI systems in your organization requires a proactive approach that covers both technical and organizational measures.

Here’s how you can build a strong foundation:

  • Have a clear business case for how AI is creating value in your organization, and understand the essential elements you’ll need to implement AI — whether for broader applications of AI or for context-dependent GenAI.
  • Create a strong structure for AI security governance. Conduct continuous risk assessments to identify vulnerabilities that may affect AI applications and LLMs, or the components of the underlying technological infrastructure.
  • Consider frameworks and tools such as the AI Risk Management Framework of the US National Institute of Standards and Technology (NIST) and specific threat-modeling tools such as the MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (MITRE ATLAS), a globally accessible knowledge base of adversary tactics and techniques against Al-enabled systems.
  • Be aware of regulations that apply to your organization and your value chain, and keep up with the (rapid) changes in local, regional and global regulations. Refine your governance, risk and compliance approach by working with experts who have a deep understanding of these frameworks.

7 crucial aspects of AI security

Taking responsibility for security should be the collective responsibility of business leaders, developers and industrial and sectoral regulators. Let’s look at some of the most important areas that deserve your attention in this regard.

1. Security by design

Security, privacy and trust are integral elements when you’re designing AI systems. When you’re capturing data to feed into the design, ask your business leaders key questions such as why they need specific data, when and how they will use it, and whether there are any alternatives that can be used. 

2. Data privacy and protection

The data used by your AI systems must be accurate, secure and tamper-proof. Security controls should focus on both the AI models themselves and the data used to train them, with measures such as access control, encryption and data anonymization in place to safeguard sensitive information. 

3. Model validation and testing

Invest in thorough testing processes before deployment to rectify any vulnerabilities at an early stage, and continue to test regularly. To make your AI models more resistant to attacks, use techniques like adversarial training — training a model on normal, benign data and on data that has been intentionally modified to mimic the tactics of attackers.

4. Continuous monitoring and threat detection

Implementing advanced monitoring tools and techniques allows you to quickly identify suspicious activities or anomalies in your AI systems. By detecting and responding to threats in real time, you can prevent potential breaches before they cause major harm. 

5. Incident-response planning

Develop and maintain an incident-response plan specifically for AI-related incidents. The plan should include procedures for detecting, containing and recovering from AI-specific attacks. 

6. Collaboration and education

Encourage your security teams and AI developers to integrate security into the AI development lifecycle. Educate your employees about the risks associated with AI and the importance of adhering to best practices in cybersecurity. When you foster a culture of security awareness, your employees become your first line of defense. 

7. Risk management and compliance

AI governance is an evolving field, and some governments and regulatory bodies have made more progress than others. In Singapore, for example, governance regarding the safe and secure use of AI has been in place for some time. 

Thoroughly study and follow frameworks such as the NIST framework mentioned earlier, the ISO/IEC 42001:2023 standard that provides guidelines for managing AI systems in organizations, and the Open Web Application Security Project’s Top 10 for Large Language Model Applications. When you’re comparing frameworks, consider your organization’s specific AI use cases and your business goals, as well as industry regulations. 

Let’s discuss your AI journey

As you adopt AI in your organization, you need to adapt your security strategy to address the unique risks associated with this technology. Traditional security controls are no longer sufficient to protect AI systems from sophisticated AI-enabled attacks.

You can also learn more about how AI is transforming security operations centers (SOCs) in my recent blog about AI in SOCs and the future of cost-efficient cybersecurity.

Contact us to discuss how you can prepare your organization to manage your AI risks efficiently and effectively, no matter where you are on your AI journey.

WHAT TO DO NEXT
Read more about NTT DATA’s cybersecurity solutions to see how we keep your organization’s valuable data and applications safe.

Topics in this article

* Gartner Research, Tech CEO Insight: Adoption Rates for AI and GenAI Across Verticals, Whit Andrews, 11 March 2024

** Gartner, Gartner Survey Shows AI Enhanced Malicious Attacks as Top Emerging Risk for Enterprises for Third Consecutive Quarter, 1 November 2024