Brandon Harris

Cloud + Data Engineering + Analytics



TLDR:

Bottom Line: Two new protocols—Anthropic’s Model Context Protocol (MCP) and Google’s Agent-to-Agent (A2A)—are solving enterprise AI’s biggest challenge: getting AI agents to work with your existing systems and collaborate with each other, rather than building expensive, disconnected point solutions.

What This Means for Your Organization:

  • MCP gives AI agents secure access to your databases, documents, and business applications (think ERP, CRM, inventory systems)
  • A2A enables different AI agents to collaborate on complex, cross-functional problems
  • Together, they create an “AI Fabric” that scales intelligently as you add new capabilities

The Strategic Shift: Stop asking “Which AI platform should we choose?” Start asking “How can we create an ecosystem where specialized AI agents work together effectively?”

Organizations building connected AI ecosystems will adapt faster to new requirements, leverage best-of-breed models for different tasks, and scale AI initiatives without exponential complexity.


FULL TEXT:

I’ve been working at the intersection of data platforms and AI/advanced analytics long enough to see plenty of “next big things” come and go (raise your virtual hand if you remember Pig on Hadoop!). Every once in a while, though, something emerges beyond the level of “shiny new thing” and makes you sit up and take notice. These tend to be paradigm shifts in how we operate in the data space: platforms or frameworks that solve real problems that have plagued enterprises for years (Apache Spark, cloud compute, and data transformation frameworks like dbt/coalesce.io all come to mind). The combination of Anthropic’s Model Context Protocol (MCP) and Google’s Agent-to-Agent (A2A) protocol has created what I like to call an “AI Fabric”, and it is exactly that kind of fundamental development.

Over the past few months I’ve been having a lot of cross-industry conversations with other data leaders, and a couple of refrains come up consistently: How do we connect our AI to our existing data and tools in a meaningful way? And more importantly, how do we stop building one-off point solutions and move toward AI-powered solutions that work together effectively? On the surface these may sound like technology challenges, but underneath them are tangible business problems that directly impact decision making at the highest levels.

The good news is that we’re finally seeing solutions emerge that address these challenges in a standardized, scalable manner. MCP and A2A represent two complementary approaches that, when combined, create something truly powerful. Think of it as building both the nervous system (MCP) and the communication network (A2A) for enterprise AI.

The Tale of Two Protocols: Understanding the Problem Space

Let me describe a scenario that might feel familiar. You’re leading a data organization for an e-commerce or manufacturing company running JDE or SAP for an ERP, Salesforce for CRM, a custom inventory management system built 15 years ago, and now you want to add AI agents to help with everything from supply chain optimization to warranty work or customer service. Or maybe you’re with a law firm that has decades of case documents in various systems, specialized legal research tools, and now wants AI to help with document review and case strategy.

The traditional approach goes something like this: organizations start by building custom interfaces (the ubiquitous “chatbot”) backed by RAG pipelines specific to each data set, then iframe or embed that interface into the application through extensions or customization. The pattern gets repeated for each app, until one day someone decides to tie them together and cross those application borders. Now we’re writing custom integration code for every source system and connection, consolidating on a limited set of front-end interfaces, and figuring out how to handle crossing security borders and contexts. It’s a nightmare for both development and long-term support and maintenance.

This is where our story takes a new (and positive) twist in our “AI Age” with MCP and A2A.

MCP: The Universal Adapter for AI

Model Context Protocol (MCP) is what I’d classify as “vertical integration”: it’s all about connecting AI agents to the tools and data they need to do their jobs. Anthropic describes it as the “USB-C for LLMs,” but I always think about it more in terms of the old-school term “middleware”.

Think of MCP as solving the problem of giving your AI agents hands and eyes. Without it, even the smartest AI is like a consultant (hopefully a good one) locked in a room with no access to your company’s data or tools. With MCP, suddenly that consultant can query databases, read documents, interact with APIs, and perform actions in your systems.

The beauty of MCP is its simplicity. It uses a client-server architecture where:

  • The AI application (like Claude Desktop or your custom agent) runs an MCP client
  • Each tool or data source has an MCP server that exposes its capabilities
  • Communication happens via JSON-RPC, making it language-agnostic and easy to implement
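To make that JSON-RPC point concrete, here’s what a single exchange looks like on the wire. The `tools/call` method comes from the MCP spec; the tool name `query_inventory` and its arguments are invented for illustration.

```python
import json

# An MCP client asking a server to invoke a tool, as a JSON-RPC 2.0 request.
# "tools/call" is the MCP method for tool invocation; the tool name and
# arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_inventory",
        "arguments": {"sku": "WIDGET-42", "warehouse": "east"},
    },
}

# A successful response echoes the request id and carries the tool's output.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "on_hand: 1280"}]},
}

wire = json.dumps(request)
print(wire)
```

Because it’s plain JSON-RPC, any language that can serialize JSON and talk over stdio or HTTP can implement either side of the conversation.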

Here’s what makes MCP particularly powerful for enterprises:

Resources: These are your read-only data sources – documents, database records, API responses. In a legal context, this might be case files, precedents, or client records. For manufacturing, think inventory levels, production schedules, or quality metrics.

Tools: These are actions your AI can take – running queries, creating documents, updating systems. A legal AI might draft contracts or file documents, while a manufacturing AI could adjust production schedules or trigger reorder points.

Prompts: Standardized templates for common tasks. This is huge for maintaining consistency across your organization. Imagine having approved templates for “analyze this contract for risk factors” or “optimize this production schedule for cost.”
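A toy dispatcher shows how one server can surface all three primitives. This is a stdlib-only sketch of the idea, not the official MCP SDK: the resource URIs, tool, and prompt below are all hypothetical, and a real server would speak JSON-RPC over stdio or HTTP.

```python
# Stdlib-only sketch of the three MCP primitives (resources, tools, prompts).
# All data and handlers here are invented for illustration.

RESOURCES = {
    # Read-only data the agent can look at.
    "erp://inventory/WIDGET-42": {"on_hand": 1280, "reorder_point": 500},
}

PROMPTS = {
    # Approved, reusable task templates.
    "contract_risk_review": "Analyze the following contract for risk factors: {text}",
}

def reorder(sku: str, qty: int) -> str:
    """A tool: an action the agent can take (here, just simulated)."""
    return f"purchase order created for {qty} x {sku}"

TOOLS = {"reorder": reorder}

def handle(method: str, params: dict):
    """Dispatch a simplified request to the right primitive."""
    if method == "resources/read":
        return RESOURCES[params["uri"]]
    if method == "tools/call":
        return TOOLS[params["name"]](**params["arguments"])
    if method == "prompts/get":
        return PROMPTS[params["name"]].format(**params["arguments"])
    raise ValueError(f"unknown method: {method}")

print(handle("resources/read", {"uri": "erp://inventory/WIDGET-42"}))
print(handle("tools/call", {"name": "reorder",
                            "arguments": {"sku": "WIDGET-42", "qty": 1000}}))
```

The point of the sketch is the separation of concerns: reads, actions, and templates are distinct surfaces, which makes governance and auditing much easier than a single free-form endpoint.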

A2A: When Agents Need to Talk

Now, MCP is fantastic for connecting agents to tools and data, but what happens when you need agents to talk to each other? This is where Google’s Agent-to-Agent (A2A) protocol comes in.

A2A is the “horizontal integration” to MCP’s vertical integration. It’s all about enabling different AI agents to collaborate, delegate tasks, and share information. It’s the difference between having a bunch of smart individuals working in isolation versus having a coordinated team.

The protocol uses a similar approach to MCP (JSON-RPC over HTTP with Server-Sent Events), but it’s designed specifically for agent-to-agent communication. Key concepts include:

Agent Cards: Think of these as business cards for AI agents. Each agent advertises its capabilities, making it easy for other agents to know who can do what.

Task Negotiation: Agents can negotiate task parameters, expected outputs, and timelines before starting work.

Artifact Exchange: Agents can pass complex data structures, documents, or analysis results between each other in a standardized way.

Stateful Management: Unlike simple API calls, A2A supports long-running tasks that might take hours or days to complete.
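Here’s a rough sketch of what an Agent Card might contain. The field names follow the general shape of the A2A spec, but the agent, endpoint URL, and skills are invented for illustration.

```python
import json

# Sketch of an A2A "Agent Card" -- the metadata an agent publishes so peers
# can discover what it does. The agent, URL, and skills are hypothetical.
agent_card = {
    "name": "Demand Forecast Agent",
    "description": "Forecasts product demand from CRM and market data",
    "url": "https://agents.example.com/demand-forecast",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "forecast_demand",
            "name": "Forecast demand",
            "description": "Project demand for a SKU over a horizon in days",
        }
    ],
}

# An agent typically serves its card at a well-known discovery URL so other
# agents can fetch it before negotiating a task.
print(json.dumps(agent_card, indent=2))
```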

The Magic Happens When They Work Together

Hopefully the picture is getting clearer and the potential applications are starting to take shape. MCP and A2A aren’t competing standards; they’re complementary technologies that create a complete ecosystem for enterprise AI. Let’s review some examples of what this might look like in practice.

Manufacturing Example: Supply Chain Optimization

Imagine a manufacturing company dealing with supply chain disruptions due to some completely hypothetical and very unlikely change in international tariffs. Here’s how MCP and A2A work together:

  1. The Planning Agent receives a request to optimize next month’s production schedule
  2. Using A2A, it delegates data gathering to specialized agents:
    • The Inventory Agent uses MCP to query the warehouse management system and a Databricks Lakehouse / Fabric Data Warehouse
    • The Supplier Agent uses MCP to check supplier APIs and delivery schedules
    • The Demand Forecast Agent uses MCP to analyze sales data from Salesforce and market trends
  3. Each specialist agent returns its findings via A2A
  4. The Planning Agent synthesizes this information and uses MCP to:
    • Update the production schedule in the ERP system
    • Trigger purchase orders for materials running low
    • Send alerts to floor managers about schedule changes

The beauty of this approach? Each agent is specialized and manageable, but together they solve complex, cross-functional problems. And here’s the really powerful notion: if you want to add weather prediction to your supply chain analysis, you just add a Weather Agent to the mix! No need to rewrite the entire system. As your agents grow in number, the “AI Fabric” that MCP and A2A create allows the entire system to scale and become more intelligent and capable.
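The whole flow can be simulated in a few lines. Everything here is stubbed: the specialist agents are plain functions standing in for real A2A peers, and the invented numbers just exist so the sketch runs. What matters is the shape of the orchestration: delegate, collect, synthesize, act.

```python
# Toy simulation of the planning flow above. Each "agent" is a stub function
# standing in for a real A2A peer; the SKUs and quantities are invented.

def inventory_agent() -> dict:
    return {"WIDGET-42": {"on_hand": 1280, "reorder_point": 500}}

def supplier_agent() -> dict:
    return {"WIDGET-42": {"lead_time_days": 21}}

def demand_forecast_agent() -> dict:
    return {"WIDGET-42": {"next_30_days": 1500}}

def planning_agent() -> list:
    """Delegate via 'A2A', synthesize, then act via 'MCP' (all simulated)."""
    # Steps 1-2: delegate data gathering to the specialists.
    inventory = inventory_agent()
    suppliers = supplier_agent()
    demand = demand_forecast_agent()

    # Steps 3-4: synthesize findings and emit actions.
    actions = []
    for sku, stock in inventory.items():
        shortfall = demand[sku]["next_30_days"] - stock["on_hand"]
        if shortfall > 0:
            lead = suppliers[sku]["lead_time_days"]
            actions.append(
                f"order {shortfall} x {sku} (supplier lead time {lead} days)"
            )
    return actions

for action in planning_agent():
    print(action)
```

Adding a Weather Agent to this picture really is just another stub function and one more line in the synthesis step, which is the scaling property the “AI Fabric” framing is getting at.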

Legal Example: M&A Due Diligence

For our investment banking or legal industry friends, consider a firm handling a complex M&A deal:

  1. The Deal Coordinator Agent manages the overall process
  2. Using A2A, it coordinates with:
    • The Document Review Agent, which uses MCP to access and analyze contracts from the document management system
    • The Precedent Research Agent, which uses MCP to search through historical cases in LexisNexis and internal data lake repositories
    • The Risk Assessment Agent, which uses MCP to run analysis models and check regulatory databases
    • The Client Communication Agent, which uses MCP to draft updates and schedule meetings
  3. When the Document Review Agent finds a concerning clause, it uses A2A to:
    • Alert the Risk Assessment Agent for deeper analysis
    • Request the Precedent Research Agent to find similar cases
    • Notify the Deal Coordinator to flag for human review

This creates a collaborative AI workforce that mirrors how human teams actually work in this space, but with the ability to process vastly more information in parallel.

Integrating with Your Data Platform: The Databricks Connection

So what might some of this look like architecturally? I think Databricks may currently have one of the more compelling platform stories here due to Unity Catalog and their tight integration between services, so we’ll walk through this scenario with them in mind. One could also argue that you could actually abstract a lot of this to work with many different cloud data platforms, and indeed, some interesting work is underway on exactly that. SQLDBm announced an MCP Server release along these lines, and AtScale is doing work to provide just such an abstraction layer.

For MCP: You can create MCP servers that expose:

  • Unity Catalog tables as resources for agents to query
  • Delta Lake historical data for trend analysis
  • ML models hosted in Databricks as tools agents can invoke (risk profiling, document classification, etc.)
  • Streaming data from Delta Live Tables for real-time insights
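Here’s a rough sketch of what the Unity Catalog piece might look like. The `run_query` function is a stand-in for whatever connector you actually use (a Databricks SQL client, for example), and the catalog, schema, and table names are invented; this is illustrative shape, not a real Databricks API.

```python
# Illustrative sketch: exposing a Unity Catalog table as an MCP-style
# read-only resource. run_query is a stub standing in for a real warehouse
# connector; it returns canned rows so the sketch runs anywhere.

def run_query(sql: str) -> list:
    # Stand-in for an actual Databricks SQL call.
    return [{"sku": "WIDGET-42", "on_hand": 1280}]

CATALOG_RESOURCES = {
    # Resource URI -> the governed table it maps to. Names are hypothetical.
    "uc://main/supply_chain/inventory": "main.supply_chain.inventory",
}

def read_resource(uri: str) -> list:
    """Serve a read-only resource by querying the governed table behind it."""
    table = CATALOG_RESOURCES[uri]
    # Unity Catalog enforces permissions at the table level, so the MCP
    # server's job reduces to mapping URIs onto governed objects.
    return run_query(f"SELECT * FROM {table} LIMIT 100")

print(read_resource("uc://main/supply_chain/inventory"))
```

The design choice worth noting: because governance lives in the catalog rather than in the MCP server, adding a new resource is a one-line mapping, not a new security review.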

For A2A: Databricks can host:

  • Specialist agents that run complex analytics
  • Orchestration agents that coordinate data pipelines
  • Model monitoring agents that ensure AI quality

Here’s a practical example: A manufacturing quality control system where:

  1. IoT sensors stream data to Databricks Delta Live Tables
  2. An MCP server exposes this streaming data to a Quality Monitoring Agent
  3. When anomalies are detected, the agent uses A2A to notify:
    • A Root Cause Analysis Agent (which uses MCP to query historical data in Databricks)
    • A Maintenance Scheduling Agent (which uses MCP to check equipment records)
    • A Supply Chain Agent (to verify if material quality might be an issue)

The result? A predictive maintenance use case that actually works, because all your agents have access to all your data and can collaborate to solve problems.

Implementation Considerations: Making It Real

Now, I know what you’re thinking: “This sounds great, Brandon, but how do we actually implement this?” Let me share some practical advice from the trenches.

Start Small, Think Big

Don’t try to do all the things. Pick a specific use case with clear value:

  • For manufacturing: Start with inventory optimization or quality control
  • For finance/legal: Begin with contract review or research automation

Build your first MCP integration with your most critical data source, then add A2A when you need your second agent. It’s much easier to get buy-in when you can show concrete results.

Security First (Not Second)

Both protocols support security, but you need to implement it properly:

  • MCP servers should authenticate clients and authorize access to resources
  • A2A agent communications need encryption and proper identity management
  • Consider running agents in isolated environments with limited permissions
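At minimum, an MCP server should refuse unauthenticated requests and scope what each client can do. Here’s a bare-bones sketch of that gate; the token table and scope names are invented, and a real deployment would delegate all of this to your identity provider (OAuth/OIDC, mTLS, and so on).

```python
import hmac

# Minimal sketch of authenticating MCP clients and authorizing access.
# The tokens and scopes below are invented for illustration; in production
# this layer belongs to your identity provider, not a hand-rolled dict.

TOKENS = {
    "tok-inventory-agent": {"scopes": {"resources:read"}},
    "tok-planning-agent": {"scopes": {"resources:read", "tools:call"}},
}

def authorize(token: str, required_scope: str) -> bool:
    """Constant-time token comparison, then a scope check."""
    for known, grant in TOKENS.items():
        # hmac.compare_digest avoids leaking token contents via timing.
        if hmac.compare_digest(token, known):
            return required_scope in grant["scopes"]
    return False

print(authorize("tok-planning-agent", "tools:call"))   # planner may act
print(authorize("tok-inventory-agent", "tools:call"))  # reader may not
```

Note the asymmetry: the read-only Inventory Agent can’t invoke tools at all, which is exactly the least-privilege posture you want before agents start taking actions in production systems.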

This is especially critical in legal environments where confidentiality is paramount, but it’s important everywhere (bonus tip: how about an Ethical Wall agent?).

Mind the Operational Overhead

Running multiple agents and MCP servers means more moving parts:

  • Plan for monitoring and alerting (How do you know when an agent is down?)
  • Think about resource management (Agents can consume significant compute)
  • Design for failure (What happens when an agent or connection fails?)
  • Think about AI for AI! Don’t build all of this by hand; how about AI agents that proactively monitor the entire flow?

Embrace the Ecosystem

One of the best things about both MCP and A2A being open standards is the growing ecosystem around them: official SDKs, a rapidly expanding catalog of community-built MCP servers for common tools, and vendors adding native support to their platforms.

The Future Is Connected

Looking ahead, I see these protocols fundamentally changing how we think about enterprise AI. We’re moving from monolithic AI applications to ecosystems of specialized agents that work together. It’s similar to the shift from mainframes to microservices, but for AI.

The organizations that embrace this connected approach will have a significant advantage. They’ll be able to:

  • Adapt quickly to new requirements by adding new agents
  • Leverage best-of-breed AI models for different tasks
  • Maintain security and governance while enabling innovation
  • Scale their AI initiatives without exponential complexity

For CIOs and CDOs, this means rethinking your AI strategy. Instead of asking “Which AI platform should we choose?” start asking “How can we create an ecosystem where different AI agents work together effectively?”

For data engineers, this is your chance to be at the forefront of a major architectural shift. Understanding MCP and A2A now positions you as a key player in your organization’s AI transformation.

Wrapping Up: Your Next Steps

If you’re intrigued by the possibilities (and you should be!), here’s what I recommend:

  1. Experiment with MCP: Download Claude Desktop and try connecting it to a simple data source using MCP. Get a feel for how it works.
  2. Explore A2A: While it’s newer, start thinking about which processes in your organization could benefit from multi-agent collaboration.
  3. Audit Your Systems: Identify which tools and data sources would benefit most from AI access. Prioritize based on business value.
  4. Build a Pilot: Choose a bounded problem and build a proof of concept using both protocols. Measure the results.
  5. Plan for Scale: Think about governance, monitoring, and management before you have 50 agents running in production.

The combination of MCP and A2A represents a fundamental shift in how we build and deploy AI systems. It’s not just about making AI smarter; it’s about making it more connected, more capable, and more aligned with how our organizations actually work.

The future of enterprise AI isn’t a single, all-knowing system. It’s a network of specialized agents, each excellent at their job, working together seamlessly. MCP and A2A are the protocols that make this future possible. The question isn’t whether to adopt them, but how quickly your organization can start benefiting from them.

Time to start building that future.