When I talk to financial leaders, there’s a quiet hesitation that sits beneath the excitement about AI: We’re moving fast, but we’re not always sure the ground beneath us is solid.
Across the industry, discussions still gravitate toward performance — speed, scale, automation and efficiency. But at the 2025 FT Global Banking Summit, my focus was deliberately different. The most urgent question in banking is not how fast we can deploy AI, but how responsibly we can embed it into the fabric of our institutions.
At NTT DATA, our approach is grounded in a simple idea: AI must be designed for humans, around humans and with humans in the loop. Rather than a substitution strategy, it’s an augmentation philosophy. We call this human AI, a design principle that places human values, oversight and societal impact at the center of every system we build.
This approach is both ethical and essential. Banking is a high-risk, high-trust industry, and AI cannot succeed unless people trust it, regulators understand it and institutions can explain it. The decisions we make in these early years will lay the foundation for how AI shapes financial services for decades.
Before building anything, ask the hard questions
The industry is flooded with new models and breakthroughs, many deployed faster than we have time to understand their implications. But responsible AI begins long before deployment, with the fundamental questions that determine what we build and why we build it.
Before any financial institution introduces an intelligent system, it should ask:
- What problem are we truly solving, and who benefits?
- What are the ethical, social and economic consequences?
- Where does human oversight begin and end?
- How will we explain decisions to customers, regulators or even ourselves?
- What data is driving this model, and is it fair, secure and representative?
Without clarity at this stage, even the most advanced systems can become liabilities rather than assets. These early decisions influence every outcome that follows, from fairness to accountability.
Bias: The challenge we must stop tiptoeing around
Bias is often treated as a technical defect — something to patch, mitigate or manage. But it is inherently human. It lives in our histories, systems, datasets and assumptions. The real danger is not that AI reflects bias; it’s that AI can amplify it at machine scale. In banking, where decisions carry direct financial and social consequences, this amplification can be profound.
At NTT DATA, we take an engineering-first approach. Bias is a core design challenge, and we address it head-on by:
- Designing mitigation strategies from the earliest stages of development
- Using diverse, representative data rather than convenient data
- Adopting explainable AI tools that uncover the why, not just the what
- Embedding human-in-the-loop oversight for every sensitive decision
- Auditing real-world performance for drift, disparity and unintended impacts
Bias does not enter a system only through data. It also enters through assumptions, shortcuts and every seemingly small decision made during development. Understanding this reality — and engineering around it — is essential if financial AI is to be a force for inclusion rather than exclusion.
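The auditing step above, checking real-world outcomes for disparity, can be sketched with a disparate-impact ratio, the "four-fifths rule" heuristic commonly used in fairness audits. This is a minimal illustration, not NTT DATA's actual tooling; the 0.8 threshold and the group framing are illustrative assumptions.

```python
def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g., loan approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected_outcomes, reference_outcomes, threshold=0.8):
    """Flag potential disparity when the protected group's approval rate
    falls below `threshold` times the reference group's rate
    (the common 'four-fifths rule' heuristic)."""
    ratio = selection_rate(protected_outcomes) / selection_rate(reference_outcomes)
    return ratio, ratio < threshold

# Example: 1 = approved, 0 = declined
ratio, flagged = disparate_impact([1, 0, 0, 0], [1, 1, 1, 0])
# ratio ≈ 0.33, well under 0.8, so the model is flagged for review
```

A check like this is cheap to run continuously, which is what makes auditing for drift practical rather than a one-time compliance exercise.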
The governance vacuum: Using AI without the guardrails
Governance is another area where early decisions determine long-term outcomes. Across institutions, leaders are encouraging teams to “lean into AI” but not providing the guardrails, clarity or accountability structures they need to use it safely.
In practice, this creates unnecessary risks, including:
- Misuse of sensitive or restricted data
- Overautomation of judgment-driven decisions
- Unexplainable or untraceable outputs
- Unclear or uneven responsibility
We don’t need more AI. We need better governance. Good governance is the steering wheel that allows innovation to move faster with confidence. It should feel empowering, not restrictive. This means developing clear playbooks outlining where AI may or may not be used, setting thresholds for human intervention, defining transparent accountability models and building governance structures that evolve as quickly as the technology itself.
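A playbook of this kind only works if it is executable rather than a PDF on a shelf. One way to sketch it, as a hypothetical illustration (the use cases and tiers here are invented examples, not a prescribed taxonomy), is a policy table that every AI request must pass through:

```python
# Hypothetical policy table: each AI use case maps to an oversight tier.
PLAYBOOK = {
    "marketing_copy_draft": "allowed",       # low risk; humans edit the output anyway
    "internal_code_assist": "allowed",
    "credit_decision": "human_required",     # judgment-driven and regulated
    "fraud_alert_triage": "human_required",
    "customer_pii_training": "forbidden",    # restricted data must never feed a model
}

def check_use(use_case):
    """Return the governance tier for a use case. Unknown uses default
    to human review rather than silent approval."""
    return PLAYBOOK.get(use_case, "human_required")
```

The design choice that matters is the default: anything the playbook has not yet classified escalates to a human, so governance keeps pace with new uses instead of being bypassed by them.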
Trust is not created by algorithms but by governance that makes those algorithms dependable.
Automation versus human judgment: Finding the right balance
AI is exceptional at scale, pattern recognition and repeatability, but humans excel in the gray areas where context, consequence, ethics and empathy matter just as much as data. Every bank must determine not only where AI should operate but also where it should stop.
For high-risk or sensitive decisions, human oversight must be designed into the workflow from the start. One of the most effective mechanisms is the use of dynamic thresholds — systems in which AI triggers human review automatically when confidence falls, risk rises or circumstances deviate from the norm.
AI should tell us when it is uncertain. Humans should decide what to do with that uncertainty, and accountability must remain with the people who design, deploy and supervise these systems, not the systems themselves.
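A dynamic threshold of this kind can be expressed in a few lines. The sketch below is hypothetical; the confidence floor and risk ceiling are illustrative parameters that a real institution would calibrate per decision type:

```python
def route_decision(confidence, risk_score, conf_floor=0.90, risk_ceiling=0.70):
    """Automate only when the model is confident AND the stakes are low.
    Any shortfall in confidence, or any spike in risk, escalates the
    decision to a human reviewer instead of acting on it."""
    if confidence < conf_floor or risk_score > risk_ceiling:
        return "human_review"
    return "auto_approve"
```

The point of the rule is that automation is earned per decision, not granted per system: the model states its uncertainty, and the workflow routes that uncertainty to a person who remains accountable.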
Legacy banks and fintechs: Two journeys, one destination
Working with organizations across the spectrum, from centuries-old institutions to fast-scaling fintechs, one lesson is consistent: The future of AI in financial services will be determined not by capability but by responsibility.
Legacy banks contribute maturity, regulatory experience and deep customer trust while navigating the challenge of integrating AI into complex, long-standing ecosystems. Fintechs bring speed, cloud-native design and a culture of rapid experimentation; as they scale, they face the natural task of expanding the governance frameworks that accompany growth.
Both will ultimately need to borrow strengths from the other: the prudence of the legacy bank and the agility of the fintech. And both will need AI that is transparent, fair, explainable, governable and aligned with clear human oversight.
Communicating the purpose of AI to employees
One of the most overlooked components of responsible AI is how we talk to the people who use it.
If employees believe AI exists to replace them, they will resist it. If they understand AI exists to empower them — taking care of low-value tasks, improving decision-making and reducing risk — they will embrace it.
Transparency is everything. AI cannot succeed if humans distrust its motives. That’s why human AI puts employees at the center, equipping them with skills, clarity and confidence to work alongside intelligent systems, not compete with them.
What “AI done right” will mean in five years
If I were to simplify AI regulation to its essentials, five principles matter above all: mandatory explainability for high-stakes decisions; proactive bias detection, auditing and remediation; human oversight embedded by design; data ethics and privacy as the starting point, not an afterthought; and governance that adapts as quickly as the technology and society around it. This is not about slowing progress but about enabling lasting innovation.
If we get this right, the next era of banking will not be defined by automation alone. It will be shaped by trustworthy hyperpersonalization that anticipates customer needs without compromising ethics; by resilient, predictive financial systems that identify risks before they emerge; by decision-making that is equitable and consistent no matter who or where the customer is; by AI ecosystems that are transparent and accountable end to end; and by seamless human–AI collaboration that elevates the role of every employee.
Together, these elements will define what “AI done right” truly looks like in financial services.
Thriving institutions will treat AI not as a shortcut but as a long-term commitment to responsible innovation. And the path to that future begins with one fundamental belief: AI must remain human at its core. That is the condition that makes real innovation possible.