New Delhi, March 13 -- As enterprises move from experimenting with generative AI to deploying agentic AI systems capable of executing end-to-end workflows, the role of global capability centres (GCCs) is poised for a major shift. Industry projections underline the scale of the transition. Research firm Gartner estimates that by 2029, agentic AI could resolve up to 80% of customer service interactions and reduce operational costs by 30%. That level of automation raises an important question for global companies: what happens to the large offshore delivery hubs that currently power many enterprise processes?

According to consulting and technology firm ZS Associates, GCCs are unlikely to become redundant. Instead, they could evolve from delivery centres focused on process efficiency into strategic hubs that orchestrate AI deployment, data readiness and enterprise-wide execution. In an interview with TechCircle, Karan Dhundia, principal at ZS, explains why GCCs may become the operational backbone for scaling agentic AI, how their mandates are expanding beyond delivery, and what skills will define the next generation of GCC talent. Edited excerpts.

As AI agents scale across service operations, how will the role of GCCs change?

Yes, it changes the equation quite significantly. Historically, GCCs have been measured on efficiency metrics: cost per employee, productivity improvements and SLA adherence. Their value was largely tied to scaling human execution.

In an AI-led environment, that model no longer holds. As AI agents take on a larger share of operational work, GCCs will increasingly evolve into AI command centres responsible for designing, deploying and governing intelligent systems at scale.

Performance metrics will shift as well. Instead of measuring the number of people executing processes, organisations will track autonomous task completion rates, the speed of AI deployment and the effectiveness of human-AI collaboration.

Talent structures will also change. Rather than scaling headcount, enterprises will scale intelligence through roles such as AI orchestration engineers, governance specialists and data architects who manage intelligent systems.

Where are GCCs falling short today in preparing for this shift: data, integration or accountability?

The reality is that the gaps exist across all three. Many enterprises still operate with legacy systems and fragmented data environments that were originally designed for human reporting rather than autonomous AI systems. GCCs are often tasked with managing data but do not always have full ownership of data governance or quality.

Integration is another persistent challenge. Different functions connect to enterprise systems using different standards and architectures, which makes it difficult to build the unified platforms that AI systems require.

But the most critical issue is ownership and accountability. Strategy is often defined at headquarters while GCCs focus on execution. When AI-driven outcomes fall short, responsibility becomes diffused.

For GCCs to become true AI command hubs, they need clear end-to-end mandates covering data, platforms and business outcomes.

How are GCC mandates evolving as they take on a larger role in enterprise AI?

Traditionally, GCCs operated within tightly defined execution frameworks. Their focus was on following global policies, managing SLAs and ensuring operational discipline. With AI becoming embedded in enterprise workflows, their responsibilities are expanding significantly.

GCCs are now expected to help define standards for model development, determine what AI agents are authorised to do, and establish governance frameworks around them. This includes managing risks that did not previously exist at scale, such as model bias, explainability and reputational risk.

In regulated sectors, especially, GCCs may also need to interpret evolving AI regulations and respond to regulatory enquiries. That means the role is shifting from operational delivery to strategic governance.

How can GCCs balance rapid AI deployment with regulatory oversight in sectors like BFSI or healthcare?

Leading organisations are adopting a tiered approach to AI deployment based on risk levels.

Low-risk use cases, such as appointment reminders or basic service interactions, can be deployed quickly with relatively light governance. Medium-risk processes require more structured deployment and oversight. But in high-risk areas that affect health outcomes, financial decisions or regulatory compliance, strict controls must be built in from the start. These include explainability mechanisms, auditable decision trails and strong oversight.

The most important principle is that compliance must be embedded in the architecture, not added after deployment. When governance, reporting and traceability are built directly into AI systems, organisations can scale automation without compromising accountability.

Will agentic AI eliminate jobs within GCCs, or mainly reshape them?

Both dynamics will occur, but the larger story is role redesign. AI will reduce some categories of work, particularly repetitive and transactional tasks. That shift is already visible.

However, the roles that remain will look fundamentally different. They will require stronger cognitive capabilities: problem solving, contextual judgement and the ability to collaborate with intelligent systems. Technical fluency will also become critical. Professionals will need to understand AI systems, data ecosystems and workflow orchestration, in addition to domain expertise.

One interesting implication is that AI capability may accelerate career trajectories. Someone with two years of experience and strong AI fluency could outperform someone with significantly more traditional experience.

Are global firms ready to let GCCs drive core AI decisions rather than just execution?

Most organisations are not fully there yet. There is still a legacy mindset that views GCCs primarily as offshore execution centres. Allowing them to lead core AI decisions requires a shift in trust, governance structures and leadership models. Companies that succeed will likely adopt joint accountability frameworks, where strategy, execution and outcomes are shared across global headquarters and GCC teams. When GCCs are given end-to-end ownership, and when they consistently demonstrate impact, they can evolve from support engines into strategic co-creators of enterprise AI capability.

What will distinguish GCCs that successfully scale responsible AI from those that struggle?

Three factors will matter, but they must develop in the right sequence. First is data maturity. Without strong data governance and quality foundations, AI initiatives will fail quickly or create unintended risks. Second is technology architecture. Fragmented tools may deliver pilots, but scalable impact requires interoperable platforms that connect data, workflows and analytics. Only after those foundations are in place does organisational culture become the differentiator. Responsible AI requires cross-functional collaboration, leadership commitment and governance models that prioritise transparency without slowing innovation.

The GCCs that succeed will combine strong data foundations, scalable platforms and a culture of accountability, turning responsible AI into a long-term competitive advantage.

Published by HT Digital Content Services with permission from TechCircle.