As a person with a visual disability, I have spent most of my life with technology as a quiet companion, speaking to me through the steady rhythm of a screen reader. This voice has turned flat pixels into meaning, navigation into possibility and digital spaces into something I can access and contribute to meaningfully.

But when vision features emerged in GenAI, I began experiencing something entirely new. Images that were once blank and silent shapes became detailed scenes. Complex charts became structured meaning. Context that used to slip through my fingers became information I could act on.

To be clear, the technology is not yet perfect. The experience is often a disjointed dance of high-friction steps: take a screenshot, switch applications, upload, enter a prompt and wait. Yet, even within this fragmented workflow, the sheer potential of “seeing” through an algorithm still fills me with wonder.

Enter the era of agentic AI

But just as we are beginning to grasp this capability, a deeper evolution is knocking at our door.

Agentic AI promises to take this experience to the next level, moving beyond isolated descriptions to integrated action. It introduces systems that can understand my intent, coordinate tools, make decisions within boundaries I define and complete multistep tasks on my behalf.

The shift from responding to my commands to collaborating with me on outcomes is the frontier that could redefine what independence looks like for more than a billion people living with disabilities worldwide.

According to the World Health Organization, an estimated 1.3 billion people globally live with a disability. That figure demonstrates the scale of unmet needs around accessibility and inclusion. Agentic AI could be the catalyst that finally closes long-standing participation gaps, but only if we guide its development with clear ethical guardrails and the lived expertise of disabled communities.

Understanding agentic AI in human terms

GenAI lets us ask questions and receive fluent answers. Agentic AI shifts from answers to actions. It demonstrates goal-oriented behavior by planning tasks, choosing tools, adapting when conditions change and working toward outcomes instead of following isolated instructions.

We can describe agentic AI as building on generative models with three extra ingredients:

  • It can break a goal into steps.
  • It can decide which tools or services to call at each step.
  • It can observe the results, then adjust its plan accordingly.

In ordinary language, this means the system doesn’t just respond but also helps you get things done across multiple applications and contexts.
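The three ingredients above can be sketched as a toy loop: plan, pick a tool, act, observe, adjust. Everything here, from the tool names to the hard-coded planning logic, is a hypothetical illustration rather than any specific product's implementation; a real system would delegate planning to a language model.

```python
# Toy agentic loop. All tool names and the planning logic are
# hypothetical stand-ins for illustration only.

def plan(goal):
    # Ingredient 1: break a goal into steps.
    # (A real agent would generate this plan with an LLM.)
    if goal == "prepare meeting brief":
        return ["search_sources", "summarize"]
    return []

TOOLS = {
    # Ingredient 2: a registry of tools the agent can choose from.
    "search_sources": lambda state: state + ["found 3 relevant documents"],
    "summarize": lambda state: state + ["accessible summary ready"],
}

def run_agent(goal):
    state = []
    for step in plan(goal):
        tool = TOOLS[step]   # decide which tool to call at this step
        state = tool(state)  # act
        # Ingredient 3: observe the result and adjust; here the
        # "adjustment" is simply stopping if a step produced nothing.
        if not state:
            break
    return state

print(run_agent("prepare meeting brief"))
```

The point of the sketch is the shape of the loop, not the contents: the same plan-act-observe cycle underlies far more capable systems.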

In everyday life, it could look like this: Instead of asking, “Summarize this report,” I might say, “Prepare me for tomorrow’s client meeting.”

An agentic system could then search through emails, internal documents and chat threads, pull the most relevant updates, extract insights from slides and generate accessible summaries in formats I prefer. Instead of merely describing an image, it could find related information, proactively remove accessibility barriers and convert visual data into structured tables I could navigate with a screen reader.

For someone who is blind or has low vision, this represents a move toward genuine digital autonomy.

How agentic AI expands human autonomy

Agentic AI has the potential to reshape experiences across three essential domains of daily life:

1. Information navigation: The cognitive ally

Digital information overload affects everyone, yet blind users often face added friction when interfaces are inconsistent or content is not structured for screen readers. GenAI already helps by summarizing messages and simplifying dense documents.

Agentic AI could go further, acting as a cognitive ally. It could filter information according to my priorities, identify urgent items, group related content, translate visual elements into accessible formats and build personalized knowledge briefs across applications. It would remove the extra burden of wrestling with cluttered or partially accessible systems and replace it with insight that is accessible by design.

2. Physical and digital orientation: Guided, not guessed

Orientation and mobility are complex, even with existing tools. Many of us combine mental maps, GPS data and memories of routes with clues from the environment.

Agentic AI could merge building data, sensor inputs, maps, computer vision and real-time environmental information to plan accessible routes through airports, offices and public spaces. Instead of simply describing what is around me, it could anticipate obstructions, reroute proactively, interact with building systems such as elevators or digital signage through their interfaces, and coordinate multiple data sources so that movement feels guided, not guessed. This is navigation as an intelligent partnership.

3. Work and economic participation: From barriers to equity

GenAI and related technologies could automate a large share of language and reasoning tasks, reshaping work across industries. For disabled professionals, this is about more than productivity. It is also about equity.

Agentic AI could operate as an accessibility engine in the background by converting inaccessible dashboards into narrative summaries, automating repetitive reporting, surfacing insights buried in knowledge bases and flagging accessibility problems in team workflows. It would help people collaborate effectively even when tools are not fully compliant. Work becomes less about overcoming barriers and more about applying skills and creativity.

We face an ethical crossroads

Agentic AI’s autonomy also introduces ethical challenges that run deeper than those of earlier AI systems. Global frameworks such as the OECD AI Principles and UNESCO’s Recommendation on the Ethics of AI emphasize dignity, human rights and meaningful human oversight.

Applying those values to disability requires us to confront specific risks:

Who controls the goals?

A system that optimizes for safety or efficiency might quietly restrict options that it believes are too risky for me, without telling me what was removed. It could hide certain routes when navigating, filter opportunities when matching jobs or steer me toward “simpler” choices. But autonomy shrinks when optimization happens in the dark. Disabled users must be able to define goals, set limits and see when the agent has made trade-offs on their behalf.

What data shapes the agent’s decisions?

If training data does not adequately represent disabled people, agentic systems will make the wrong assumptions. They may misinterpret patterns of interaction, mark normal variations as anomalies or encode ableist norms present in historical data. When agents start to automate decisions in hiring, service delivery or support allocation, these biases turn into structural barriers at scale.

How much context is too much?

To help me in the most effective way, agentic systems may request access to my calendar, communication history, documents and even behavioral patterns. Disabled people already experience higher levels of monitoring in some workplaces and service systems. There is a real risk that an “assistive” AI becomes another layer of surveillance. Any deployment must strictly limit what is collected, who can see it and how it is used beyond direct user benefit.

What happens when agents fail?

Errors from reactive systems are usually localized. A misread image or a clumsy summary is annoying yet contained. Errors from agentic systems, however, can cascade. A misunderstanding in an email drafted on my behalf might harm a relationship. A navigation error might put me in a genuinely unsafe situation. Reliability, clear logs of what the agent did and practical ways to roll back actions become essential.
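The call for clear logs and practical rollback can be illustrated with a toy action log; the actions and undo functions here are hypothetical, but the pattern (record every action together with a way to reverse it) is the essential point.

```python
# Illustrative sketch of an auditable action log with rollback.
# The actions and undo callables are hypothetical.

class ActionLog:
    def __init__(self):
        self.entries = []  # each entry: (description, undo callable)

    def record(self, description, undo):
        self.entries.append((description, undo))

    def history(self):
        # A transparent record of everything the agent did.
        return [desc for desc, _ in self.entries]

    def roll_back_last(self):
        # A practical way to reverse the most recent action.
        if self.entries:
            desc, undo = self.entries.pop()
            undo()
            return desc
        return None

sent = []
log = ActionLog()
sent.append("draft email")                          # the agent acts
log.record("sent draft email", undo=sent.pop)       # and logs the action
log.roll_back_last()                                # the user reverses it
print(sent)  # prints []
```

In a real deployment the log would be persistent and user-readable, and some actions (a sent email, a booked ticket) cannot be cleanly undone, which is exactly why high-stakes steps need a human in the loop before they execute.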

A disability-centered framework: The 4 A’s

To turn potential into progress, we need a disability-centered framework that leaders, designers and engineers can apply in practice. Four principles form a solid foundation.

  1. Autonomy (choice and control): The primary outcome of agentic AI should be expanded choice for the user. Goals must be set by the person, not inferred in opaque ways. Users must be able to override the agent easily, inspect its plans in understandable language and opt out of specific types of action.
  2. Accessibility (inclusive and assistive-ready): Agentic AI must integrate smoothly with assistive technologies, not compete with them. It should respect screen-reader conventions, support keyboard navigation, produce alt-text and structured outputs by default and offer multiple modalities (audio, text, tactile). Accessibility is not a feature; it is the delivery mechanism.
  3. Accountability (transparent and responsible): Responsibility for agent behavior belongs with organizations and developers, not with disabled users who rely on these tools. Systems must keep transparent records of what the agent did, which data it used and which decisions it took automatically. Users need clear channels to contest outcomes and humans in the loop when stakes are high.
  4. Agency (co-create with persons with disabilities): Disabled people must be co-designers and co-decision-makers in the development and governance of agentic AI. Early testing with disabled users, advisory roles, participatory design methods and leadership opportunities for persons with disabilities inside AI programs are not “nice to have.” They are the difference between tools that truly empower and tools that accidentally exclude.

The era of collaborative autonomy

The boundary between “assistive technology” and “mainstream technology” is blurring. This gives us a rare chance to bake accessibility into the core of how AI systems are built, turning them from reactive tools into intelligent partners.

However, this future is not guaranteed. We know the cost of being an afterthought. Agentic AI can be a force multiplier for persons with disabilities by removing the friction they have managed for years — but only if we shape it.

The technology is ready to collaborate. The question is: Are the developers ready to listen? It starts with including disabled voices in the coding, not just in the testing. If we embed the right values now, agentic AI will not just assist us but also help build a world designed for everyone.

WHAT TO DO NEXT
Read more about NTT DATA’s agentic AI services to see how we help organizations globally optimize their operations.
Get in touch now