-
Featured services
2026 Global AI Report: A Playbook for AI Leaders
Why AI strategy is your business strategy: The acceleration toward an AI-native state. Explore executive insights from AI leaders.
Access the playbook -
Services
Alle Services und Produkte anzeigenNutzen Sie unsere Fähigkeiten, um die Transformation Ihres Unternehmens zu beschleunigen.
-
Services
Network-Services
Beliebte Produkte
-
Services
Cloud
Beliebte Produkte
-
Services
Consulting
-
Edge as a Service
-
Services
Data und Artificial Intelligence
- KI und intelligente Lösungen
- Daten-/KI-Strategie und -Programm
- Data Engineering und Plattformen
- Daten-Governance und -management
- Datenvisualisierung und Entscheidungsfindung
- $name
- GenAI Platforms
- GenAI Industry Services
- GenAI Infrastructure Services
- GenAI Value Transformation
- Data und Artificial Intelligence
-
-
Services
Global Data Centers
-
Beliebte Produkte
-
Services
Application Services
-
-
Services
Digital Workplace
-
Services
Business Process Services
-
Services
Generative AI
-
Services
Cybersecurity
-
Services
Enterprise Application Platforms
IDC MarketScape: Anbieterbewertung für Rechenzentrumsservices weltweit 2023
Wir glauben, dass Marktführer zu sein eine weitere Bestätigung unseres umfassenden Angebotes im Bereich Rechenzentren ist.
Holen Sie sich den IDC MarketScape -
-
Erkenntnisse
Einblicke und RessourcenErfahren Sie, wie die Technologie Unternehmen, die Industrie und die Gesellschaft prägt.
-
Erkenntnisse
Ausgewählte Einblicke
-
Die Zukunft des Networking
-
Using the cloud to cut costs needs the right approach
When organizations focus on transformation, a move to the cloud can deliver cost savings – but they often need expert advice to help them along their journey
-
So funktioniert Zero-Trust-Sicherheit für Ihr Unternehmen
Sorgen Sie dafür, dass Zero-Trust-Sicherheit für Ihr Unternehmen in hybriden Arbeitsumgebungen funktioniert.
-
-
Erkenntnisse
Copilot für Microsoft 365
Jeder kann mit einem leistungsstarken KI-Tool für die tägliche Arbeit intelligenter arbeiten.
Copilot noch heute entdecken -
-
Erfahren Sie, wie wir Ihre Geschäftstransformation beschleunigen können
-
Über uns
Neueste Kundenberichte
-
Liantis
Im Laufe der Zeit hatte Liantis, ein etabliertes HR-Unternehmen in Belgien, Dateninseln und isolierte Lösungen als Teil seines Legacysystems aufgebaut.
-
Randstad
We ensured that Randstad’s migration to Genesys Cloud CX had no impact on availability, ensuring an exceptional user experience for clients and talent.
-
-
NTT DATA und HEINEKEN
HEINEKEN revolutioniert die Mitarbeitererfahrung und die Zusammenarbeit mit einem hybriden Arbeitsplatzmodell.
Lesen Sie die Geschichte von HEINEKEN -
- Karriere
Topics in this article
Enterprise digitalization arrived steadily but relatively quietly. There was no single moment of disruption. Without much fanfare, it embedded itself into every corner of the business.
Of course, it came with a set of threats that no one had imagined. But organizations adapted. They built cybersecurity models to protect their connected systems and processes, and they continued working in these sheltered environments.
But AI is different. It is disrupting industries and introducing a fundamentally different set of attack vectors, trust challenges and governance gaps that outpace the security programs meant to manage them.
In a recent webinar with Prakash Narayanamoorthy, Global Capacity Leader: Emerging Technology (Cyber) at NTT DATA, and Peter Bailey, Senior Vice President and General Manager: Security at Cisco, we discussed what happens when AI adoption happens faster than security can keep up. Let’s explore some of the key themes that emerged from our conversation.
The gap between ambition and readiness
As AI fuels growth and innovation worldwide, the imperative to adopt it — and keep pace with competitors — has become undeniable.
What’s less clear is how to do this securely.
Findings in NTT DATA’s guide The AI security balancing act: From risk to innovation — based on conversations with more than 2,300 GenAI decision-makers across 34 countries and 12 industries — paint a consistent picture: While everyone understands the need for AI, few have a clear strategy for implementing and using it safely.
Leaders are walking a tightrope. They must balance the need to adopt AI quickly and demonstrate value with the requirement to foster trust and transparency and prevent misuse of the technology. Yet many find it difficult to put these guardrails in place.
This is where many organizations find themselves today: Their AI ambitions are accelerating faster than the structures needed to manage risk, trust and accountability.
As Peter says: “Cisco has also seen a striking gap today between AI ambition and AI readiness. Our goal is to turn security from a potential roadblock into a business enabler, giving organizations the foundation they need to innovate at the speed of the technology itself.”
Begin where you want to end
Most organizations don’t start their AI journey by thinking about risk but by seeing the opportunity. They know they need to start small with a new use case, pilot or proof of concept to test value and build momentum. But governance and security tend to be an afterthought — something to consider once the benefits are clear.
Security as an afterthought is never recommended, but it is particular risky because AI is so dynamic
AI systems don’t remain static. They learn, adapt, and begin to influence decisions and workflows in ways that can be difficult to roll back. By the time risks surface, AI is often already embedded throughout the business. That’s why it can’t be treated as a downstream concern.
If you want to scale AI responsibly, start by defining what trust, governance and accountability should look like once you’ve fully deployed your AI system.
You can then define acceptable use before rolling out new tools, establish ownership before deploying models, and design controls with the assumption that these systems will evolve.
Shadow AI: What you can see can’t hurt you
Shadow AI isn’t malicious. More often than not, it starts with good intentions. Maybe a team member downloads an AI tool that summarizes documents to speed up busy work. Great in principle, but they’ve unwittingly introduced AI into your IT environment without guardrails or governance.
If this becomes commonplace across your organization, your security team won’t have a clear view of which AI tools are in use, who shared what data and how the misuse of this data could influence decisions, recommendations or automated actions across the business.
This is why visibility is so important. It gives you a way to understand where and how AI is operating in your organization. Without it, governance remains theoretical and control becomes inconsistent.
AI security is an ongoing discipline
The challenge is that AI doesn’t exist in one place. It’s found in data, models, integrations and applications — increasingly in the form of autonomous agents.
Because each type of AI behaves differently, changes over time and introduces unique risks, visibility can’t stop at discovery. Understanding AI usage is only the first step. You also need to know how its behavior changes as models update, data shifts and usage evolves. And this is where traditional security approaches fall short.
One-time checks don’t pass muster in an environment where systems are always learning and adapting.
These systems need to be repeatedly tested and questioned to understand where they might fall short or be misused and whether existing controls still apply. Security, in this context, is an ongoing discipline, and it starts with visibility as a foundation for everything that follows.
What works today may not work tomorrow
AI security can’t rely on a single control. You need to build it in layers, covering the full AI lifecycle, because data, models, applications and integrations all introduce different risks. Each needs to be secured in context as AI systems become more interconnected and autonomous.
Because these layers aren’t static, controls that appear effective at launch may fail as models adapt and usage changes. This is where red teaming — deliberately stress-testing AI systems — becomes critical.
It helps you understand how AI behaves under pressure, where guardrails are breaking down, and how misuse or unintended outcomes can emerge as systems evolve. It’s about putting AI through its paces to validate that the controls in place always do what you expect them to do.
Zero trust principles also play a key role. In an AI-driven environment, you can’t assume trust. Whether the interaction comes from a person, a system or an autonomous agent, you need to verify every request, action or decision.
Scaling AI: There’s the fast way and there’s the right way
AI adoption isn’t slowing down, which means threat surfaces are also expanding. Reactive, one-off solutions won’t address this risk. You need continuous visibility, testing and a more flexible approach to security.
NTT DATA has partnered with Cisco to help you put these principles into practice. Together, we focus on securing the full AI lifecycle — from governance and visibility to layered protection, continuous testing and zero trust principles — so your security evolves in line with your technology.
AI is reshaping how organizations operate, and will continue to do so. But its impact is only as strong as the guardrails you put in place to control it. Let us help you build AI you can trust at scale.