Combine & Conquer In The Agentic AI Space - Part III - Measuring Disruption and Resilience
Series Recap
In Part I of this Combine & Conquer series, we saw a new reality emerge: frontline LLM providers like OpenAI, Google, and Anthropic are no longer content with just models and APIs - they are moving up the stack and into the agent space. In Part II, we examined the “vibe coding” category as a warning of how feasible it is for these frontline LLM providers to rapidly absorb a massive agentic solution category, even one with several well-regarded domain-expert incumbents.
In Part III, we build on this theme of disruption in the Agentic AI space - but this time, we lay out a framework for gauging the level of disruption risk that companies in this space face, based on their current products, services, and offerings.
Now the key question every Agentic AI startup must answer is no longer “What can we build?” It’s “What can we defend?”
To answer that, we need a way to cut through demos, hype, and press releases and judge companies based on structural resilience. In this post, we share the exact disruption rating criteria we use at The India Portfolio to evaluate Agentic AI startups.
The Core Conflict: The "OS vs. App" Dynamic
The Agentic AI space is now headed into a classic platform war: OS vs. App. The "OS" (Operating System) players - OpenAI, Google, and Anthropic - are already building out their agentic "App" layers, such as:
- OpenAI - AgentKit, Agents SDK, Codex, Project Mercury
- Google Gemini - Gemini Enterprise, Gemini Code Assist, Jules
- Anthropic - Claude Code, Claude for Financial Services, Claude Skills
(You can also include agentic browsers like Perplexity's Comet and OpenAI's Atlas, along with other browser and computer-use agents.)
The goal of these frontline LLM providers is straightforward - absorb the horizontal use cases sitting right above their current stack and expertise, then leverage their strengths in model performance and distribution to build, scale, and dominate those use cases.
What this means for the "Apps", i.e. the pure-play Agentic AI companies, is that they have to shift from asking “What can we build?” to “What can we defend?”
That brings us to our Three-Tier Agent Hierarchy.
The Three Tiers Of AI Agents
The level of disruption risk is directly tied to the level of human expertise the agents look to emulate or replace. We classify agents into three tiers based on the real job they perform inside a business.
Tier I: Operational Agents (The "Script Follower")
- Analogy: Customer Support Rep, Data Entry Clerk
- Role: Executes high-volume, repetitive, scripted tasks
- Examples: Answer FAQs, Summarize this document, Book a meeting, etc.
- Risk: High - this is exactly what frontline LLMs already do out of the box. Any company selling Tier I agents is standing on a trapdoor.
Tier II: Tactical Agents (The "Specialist")
- Analogy: SDR, Claims Adjuster, Collections Agent, Loan Processor
- Role: Owns and executes a full workflow end-to-end, not just a single task
- Examples: Qualify a lead → follow up → book a meeting; underwrite a loan; manage an accounts payable cycle
- Risk: Moderate to Low - real defensibility starts here. These agents encode a repeatable playbook that reflects real business processes. Harder to copy, harder to replace.
Tier III: Strategic Agents (The "Thinker")
- Analogy: Business Analyst, FP&A Analyst, COO
- Role: Identifies patterns, reasons across messy data, and makes judgment calls
- Examples: Why did sales drop this quarter? Where are we leaking margin? How do we evaluate disruption risk for Agentic AI companies? :D
- Risk: Low - strategic agents sit closest to real business value and require organizational context and proprietary data.
However, agent depth alone is not enough to judge defensibility. A company might build a Tactical (Tier II) or even Strategic (Tier III) agent, but still be exposed if:
- It sells horizontally into too many industries without real specialization
- It doesn’t sit close enough to customer data or workflows
- It has a weak go-to-market motion or can be displaced cheaply
- Or worst of all, it overlaps too much with OpenAI/Gemini’s roadmap
This is why we add a second layer of scoring: the Five Disruption Drivers. These measure the business durability of a company — things that agent tier alone can’t capture, like integration depth, switching costs, and platform pressure.
The Five Disruption Drivers
These five dimensions tell us whether a company has a real moat or is just surviving on demo energy.
| Dimension | What We're Really Asking |
|---|---|
| Workflow Depth & Autonomy | Does it automate full workflows (Tier II/Tier III) or just single tasks (Tier I)? |
| Domain Specialization | Is it embedded in a complex/regulated vertical or is it generic horizontal AI? |
| Data & Integration Moat | Does it sit close to customer data and legacy systems, making it hard to replace? |
| Distribution & Switching Costs | Does the GTM motion create stickiness (enterprise, high ACV) or churn risk (PLG, easy to replace)? |
| Exposure to LLM Platform Overlap | How much of the product now overlaps with what OpenAI/Gemini are giving away? |
Putting It All Together: The Disruption Rating Framework
When we combine the Agent Tier Classification (Tiers I/II/III) with the Five Disruption Drivers, we get a clear picture of how exposed a company really is. This gives us the Disruption Rating — a simple 4-point score that tells us how likely a company is to get disrupted over the next 24 months.
| Rating | Label | What It Means |
|---|---|---|
| 1/4 | Low Risk | Strong moat. Deep Tier II/III workflow ownership in a complex vertical. Hard to displace. |
| 2/4 | Low-Medium Risk | Defensible niche with workflow depth or integrations, but some exposure. |
| 3/4 | Medium-High Risk | Horizontal positioning or shallow workflows make it vulnerable to platform pressure. |
| 4/4 | High Risk | Mostly Tier I automation. A feature waiting to be replaced by OpenAI/Gemini. |
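For readers who like to see the rubric mechanically, here is a minimal Python sketch of how the Agent Tier and the Five Disruption Drivers could be rolled up into the 4-point Disruption Rating. To be clear, this is an illustration rather than our actual process: the 1-to-5 driver scores, the equal weighting, and the cut-offs are assumptions made up for the example; in practice the rating is a judgment call informed by these dimensions, not a formula.

```python
# Illustrative sketch only: scores, weights, and cut-offs below are assumptions,
# not The India Portfolio's actual scoring methodology.

from dataclasses import dataclass


@dataclass
class CompanyProfile:
    agent_tier: int                    # 1 = Operational, 2 = Tactical, 3 = Strategic
    workflow_depth: int                # 1 (single tasks) .. 5 (full workflow ownership)
    domain_specialization: int         # 1 (generic horizontal) .. 5 (complex/regulated vertical)
    data_integration_moat: int         # 1 (easily swapped out) .. 5 (deep in customer data/legacy systems)
    distribution_switching_costs: int  # 1 (PLG, easy churn) .. 5 (enterprise, high ACV, sticky)
    platform_overlap: int              # 1 (heavy overlap with OpenAI/Gemini) .. 5 (little overlap)


def disruption_rating(p: CompanyProfile) -> int:
    """Map a profile to the 4-point Disruption Rating (1 = Low Risk, 4 = High Risk)."""
    # Average the five drivers; each is scored so that higher = more defensible.
    driver_score = (
        p.workflow_depth
        + p.domain_specialization
        + p.data_integration_moat
        + p.distribution_switching_costs
        + p.platform_overlap
    ) / 5

    # Rescale the agent tier (1..3) onto the same 1..5 range and blend it in
    # as a sixth, equally weighted signal.
    tier_score = p.agent_tier * 5 / 3
    resilience = (driver_score * 5 + tier_score) / 6

    # Hypothetical cut-offs: higher resilience -> lower disruption risk.
    if resilience >= 4.0:
        return 1  # Low Risk
    if resilience >= 3.0:
        return 2  # Low-Medium Risk
    if resilience >= 2.0:
        return 3  # Medium-High Risk
    return 4      # High Risk


# Example: a Tier II agent deep in a regulated vertical with strong integrations.
print(disruption_rating(CompanyProfile(2, 4, 5, 4, 4, 3)))  # -> 2 (Low-Medium Risk) under these cut-offs
```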
This framework gives us a clear and structured way to separate companies that are strategically durable from those that are easily replaceable. It forces a shift away from surface-level product descriptions and toward a deeper evaluation of where value is created, how defensible it is, and how exposed the company is to platform pressure from OpenAI, Google and Anthropic.
Next, we will apply this framework to a set of leading India-based Agentic AI companies. The goal is simple: identify which companies are positioned to survive and thrive, and which business models are already on a path to disruption. Stay tuned for Part IV.
Related Reading
Who is winning in the Vibe Coding space and are there lessons in it for Agentic AI companies?
Combine & Conquer In The Agentic AI Space - Part II - Who Is Winning In The Vibe Coding Space? →