A Clear Intro to MCP (Model Context Protocol) with Code Examples


As the push to move AI agents from prototype to production heats up, the need for a standardized way for agents to call tools across different providers is pressing. This transition to a standardized approach to agent tool calling is similar to what we saw with REST APIs. Before they existed, developers had to deal with a mess of proprietary protocols just to pull data from different services. REST brought order to chaos, enabling systems to talk to each other in a consistent way. MCP (Model Context Protocol) aims to do exactly what its name suggests: provide context to AI models in a standard way. Without it, we’re headed toward tool-calling mayhem, where multiple incompatible versions of “standardized” tool calls crop up simply because there’s no shared way for agents to organize, share, and invoke tools. MCP gives us a shared language and democratizes tool calling.

One thing I’m personally excited about is how tool-calling standards like MCP can actually make AI systems safer. With easier access to well-tested tools, more companies can avoid reinventing the wheel, which reduces security risks and minimizes the chance of malicious code. As AI systems scale in 2025, these are valid concerns.

As I dove into MCP, I realized there’s a huge gap in the documentation. There’s plenty of high-level “what does it do” content, but when you actually want to understand how it works, the resources fall short, especially for readers who aren’t career developers. It’s either high-level explainers or the source code itself.

In this piece, I’m going to break MCP down for a broader audience, making the concepts and functionality clear and digestible. If you’re able, follow along in the coding section; if not, it will be well explained in natural language above the code snippets.

An Analogy to Understand MCP: The Restaurant

Let’s imagine the concept of MCP as a restaurant where we have:

The Host = The restaurant building (the environment where the agent runs)

The Server = The kitchen (where tools live)

The Client = The waiter (who sends tool requests)

The Agent = The customer (who decides what tool to use)

The Tools = The recipes (the code that gets executed)

The Components of MCP

Host
This is where the agent operates. In our analogy, it’s the restaurant building; in MCP, it’s wherever your agents or LLMs actually run. If you’re using Ollama locally, you’re the host. If you’re using Claude or GPT, then Anthropic or OpenAI is the host.

Client

This is the environment that sends tool call requests from the agent. Think of it as the waiter who takes your order and delivers it to the kitchen. In practical terms, it’s the application or interface where your agent runs. The client passes tool call requests to the Server using MCP.

Server

This is the kitchen where recipes, or tools, are housed. It centralizes tools so agents can access them easily. Servers can be local (spun up by users) or remote (hosted by companies offering tools). Tools on a server are typically either grouped by function or integration. For instance, all Slack-related tools can be on a “Slack server,” or all messaging tools can be grouped together on a “messaging server”. That decision is based on architectural and developer preferences.

Agent

The “brains” of the operation. Powered by an LLM, it decides which tools to call to complete a task. When it determines a tool is needed, it initiates a request to the server. The agent doesn’t need to understand MCP natively, because it learns how to use each tool from the metadata associated with it; this metadata tells the agent the protocol for calling the tool and the execution method. It is important to note, however, that the platform or agent framework must support MCP for tool calls to be handled automatically. Otherwise, the developer has to write the translation logic themselves: parsing the metadata from the schema, forming tool call requests in MCP format, mapping each request to the correct function, executing the code, and returning the result to the agent in MCP-compliant format.
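To make the "standardized request" idea concrete, here is a sketch of the messages involved. MCP messages are JSON-RPC 2.0, and the method names shown (`tools/list`, `tools/call`) follow the MCP specification; the ids and arguments are illustrative, not a complete wire exchange.

```python
import json

# The agent (via the client) asks the server which tools exist:
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...and later invokes one, with arguments matching the tool's input schema:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "brave_web_search",
        "arguments": {"query": "best tacos in SF", "count": 5},
    },
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks this same shape, the agent never needs to know how `brave_web_search` is implemented, only its name and input schema.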

Tools

These are the functions, such as API calls or custom code, that do the work. Tools live on servers and can be:

  • Custom tools you create and host on a local server.
  • Premade tools hosted by others on a remote server.
  • Premade code created by others but hosted by you on a local server.

How the components fit together

  1. Server Registers Tools
    Each tool is defined with a name, description, input/output schemas, and a function handler (the code that runs), then registered with the server. This usually involves calling a method or API to tell the server “hey, here’s a new tool and this is how you use it”.
  2. Server Exposes Metadata
    When the server starts or an agent connects, it exposes the tool metadata (schemas, descriptions) via MCP.
  3. Agent Discovers Tools
    The agent queries the server (using MCP) to see what tools are available. It understands how to use each tool from the tool metadata. This typically happens on startup or when tools are added.
  4. Agent Plans Tool Use
    When the agent determines a tool is needed (based on user input or task context), it forms a tool call request in a standardized MCP JSON format which includes tool name, input parameters that match the tool’s input schema, and any other metadata. The client acts as the transport layer and sends the MCP formatted request to the server over HTTP.
  5. Translation Layer Executes
    The translation layer takes the agent’s standardized tool call (via MCP), maps the request to the corresponding function on the server, executes the function, formats the result back to MCP, and sends it back to the agent. A framework that abstracts MCP for you does all of this without the developer needing to write the translation layer logic (which sounds like a headache).
Image by Sandi Besen
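The five steps above can be sketched with a toy, in-memory tool registry. To be clear, this is not the MCP SDK; it is plain Python with made-up names, illustrating register → expose metadata → discover → call → execute.

```python
import json
from typing import Any, Callable

class ToyToolServer:
  """A made-up, in-memory stand-in for an MCP server (illustration only)."""

  def __init__(self) -> None:
      self._tools: dict[str, dict[str, Any]] = {}

  # Step 1: register a tool with its metadata and handler
  def register(self, name: str, description: str,
               input_schema: dict[str, Any], handler: Callable[..., Any]) -> None:
      self._tools[name] = {
          "description": description,
          "inputSchema": input_schema,
          "handler": handler,
      }

  # Steps 2-3: expose metadata so an agent can discover tools
  def list_tools(self) -> list[dict[str, Any]]:
      return [
          {"name": n, "description": t["description"], "inputSchema": t["inputSchema"]}
          for n, t in self._tools.items()
      ]

  # Steps 4-5: map a standardized call to the right function and execute it
  def call_tool(self, name: str, arguments: dict[str, Any]) -> Any:
      return self._tools[name]["handler"](**arguments)

server = ToyToolServer()
server.register(
  name="add",
  description="Add two integers.",
  input_schema={"type": "object",
                "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}},
  handler=lambda a, b: a + b,
)

print(json.dumps(server.list_tools(), indent=2))  # what the agent "discovers"
print(server.call_tool("add", {"a": 2, "b": 3}))  # executes the handler
```

A real MCP framework does the same mapping for you, plus transport (stdio or HTTP) and schema validation.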

Code Example of a ReAct Agent Using the MCP Brave Search Server

In order to understand what MCP looks like when applied, let’s use the beeAI framework from IBM, which natively supports MCP and handles the translation logic for us.

If you plan on running this code, you will need to:

  1. Clone the beeai-framework repo to gain access to the helper classes used in this code.
  2. Create a free Brave developer account and get your API key. Free subscriptions are available (credit card required).
  3. Create an OpenAI developer account and create an API key.
  4. Add your Brave API key and OpenAI API key to the .env file at the python folder level of the repo.
  5. Ensure you have npm installed and have set your path correctly.

Sample .env file

BRAVE_API_KEY="<Your API Key Here>"
BEEAI_LOG_LEVEL=INFO
OPENAI_API_KEY="<Your API Key Here>"

Sample mcp_agent.ipynb

1. Import the necessary libraries

import asyncio
import logging
import os
import sys
import traceback
from typing import Any
from beeai_framework.agents.react.runners.default.prompts import SystemPromptTemplate
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from beeai_framework import Tool
from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.agents.types import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel, ChatModelParameters
from beeai_framework.emitter.emitter import Emitter, EventMeta
from beeai_framework.errors import FrameworkError
from beeai_framework.logger import Logger
from beeai_framework.memory.token_memory import TokenMemory
from beeai_framework.tools.mcp_tools import MCPTool
from pathlib import Path
from beeai_framework.adapters.openai.backend.chat import OpenAIChatModel
from beeai_framework.backend.message import SystemMessage

2. Load the environment variables and set the system path (if needed)

import os
from dotenv import load_dotenv

# Absolute path to your .env file
# sometimes the system can have trouble locating the .env file
env_path = "<Your path to your .env file>"
# Load it
load_dotenv(dotenv_path=env_path)

# Get current working directory
path = "<Your path to your current python directory>" #...beeai-framework/python
# Append to sys.path
sys.path.append(path)

3. Configure the logger

# Configure logging - using DEBUG instead of trace
logger = Logger("app", level=logging.DEBUG)

4. Load helper functions like process_agent_events, observer, and create an instance of ConsoleReader

  • process_agent_events: Handles agent events and logs messages to the console based on the event type (e.g., error, retry, update). It ensures meaningful output for each event to help track agent activity.
  • observer: Listens for all events from an emitter and routes them to process_agent_events for processing and display.
  • ConsoleReader: Manages console input/output, allowing user interaction and formatted message display with color-coded roles.
# Load the console reader
from examples.helpers.io import ConsoleReader
# This is a helper class that makes the assistant chat easier to read
reader = ConsoleReader()

def process_agent_events(data: dict[str, Any], event: EventMeta) -> None:
  """Process agent events and log appropriately"""

  if event.name == "error":
      reader.write("Agent 🤖 : ", FrameworkError.ensure(data["error"]).explain())
  elif event.name == "retry":
      reader.write("Agent 🤖 : ", "retrying the action...")
  elif event.name == "update":
      reader.write(f"Agent({data['update']['key']}) 🤖 : ", data["update"]["parsedValue"])
  elif event.name == "start":
      reader.write("Agent 🤖 : ", "starting new iteration")
  elif event.name == "success":
      reader.write("Agent 🤖 : ", "success")
  else:
      print(event.path)

def observer(emitter: Emitter) -> None:
  emitter.on("*.*", process_agent_events)

5. Set the Brave API Key and server parameters.

Anthropic has a list of MCP servers here.

brave_api_key = os.environ["BRAVE_API_KEY"]

brave_server_params = StdioServerParameters(
  command="/opt/homebrew/bin/npx",  # Full path to be safe
  args=[
      "-y",
      "@modelcontextprotocol/server-brave-search"
  ],
  env={
      "BRAVE_API_KEY": brave_api_key,
      "x-subscription-token": brave_api_key
  },
)

6. Create the brave tool that initiates the connection to the MCP server, discovers tools, and returns the discovered tools to the agent so it can decide which tool is appropriate to call for a given task.

In this case, two tools are discoverable on the Brave MCP server:

  • brave_web_search: Execute web searches with pagination and filtering
  • brave_local_search: Search for local businesses and services

async def brave_tool() -> MCPTool:
  brave_env = os.environ.copy()
  brave_server_params = StdioServerParameters(
      command="/opt/homebrew/bin/npx",
      args=["-y", "@modelcontextprotocol/server-brave-search"],
      env=brave_env
  )

  print("Starting MCP client...")
  try:
      async with stdio_client(brave_server_params) as (read, write), ClientSession(read, write) as session:
          print("Client connected, initializing...")

          await asyncio.wait_for(session.initialize(), timeout=10)
          print("Initialized! Discovering tools...")

          bravetools = await asyncio.wait_for(
              MCPTool.from_client(session, brave_server_params),
              timeout=10
          )
          print("Tools discovered!")
          return bravetools
  except asyncio.TimeoutError as e:
      print("❌ Timeout occurred during session initialization or tool discovery.")
  except Exception as e:
      print("❌ Exception occurred:", e)
      traceback.print_exc()

(Optional) Check the connection to the MCP server and ensure it returns all the available tools before providing it to the agent.

tools = await brave_tool()
print("Discovered tools:", tools)

for tool in tools:
  print(f"Tool Name: {tool.name}")
  print(f"Description: {getattr(tool, 'description', 'No description available')}")
  print("-" * 30)

OUTPUT:

Starting MCP client...

Client connected, initializing...

Initialized! Discovering tools...

Tools discovered!

Discovered tools: [<beeai_framework.tools.mcp_tools.MCPTool object at 0x119aa6c00>, <beeai_framework.tools.mcp_tools.MCPTool object at 0x10fee3e60>]

Tool Name: brave_web_search

Description: Performs a web search using the Brave Search API, ideal for general queries, news, articles, and online content. Use this for broad information gathering, recent events, or when you need diverse web sources. Supports pagination, content filtering, and freshness controls. Maximum 20 results per request, with offset for pagination. 

------------------------------

Tool Name: brave_local_search

Description: Searches for local businesses and places using Brave's Local Search API. Best for queries related to physical locations, businesses, restaurants, services, etc. Returns detailed information including:

- Business names and addresses

- Ratings and review counts

- Phone numbers and opening hours

Use this when the query implies 'near me' or mentions specific locations. Automatically falls back to web search if no local results are found.

7. Write the function that creates the agent:

  • assign an LLM
  • create an instance of the brave_tool() function and assign it to a tools variable
  • create a ReAct agent and assign it the chosen LLM, tools, and memory (so it can have a continuous conversation)
  • add a system prompt to the ReAct agent

Note: You might notice that I added a sentence to the system prompt that reads “If you need to use the brave_tool you must use a count of 5.” This is a band-aid workaround because of a bug I found in the index.ts file of the brave server. I will contribute to the repo to fix it.

async def create_agent() -> ReActAgent:
  """Create and configure the agent with tools and LLM"""
  #using openai api instead
  llm = OpenAIChatModel(model_id="gpt-4o")
 
  # Configure tools
  tools: list[Tool] = await brave_tool()
  #tools: list[Tool] = [await brave_tool()]

  # Create agent with memory and tools
  agent = ReActAgent(llm=llm, tools=tools, memory=TokenMemory(llm))
 
  await agent.memory.add(SystemMessage(content="You are a helpful assistant. If you need to use the brave_tool you must use a count of 5."))

  return agent

8. Create the main function

  • Creates the agent
  • Enters a conversation loop with the user and runs the agent with the user prompt and some configuration settings. Finishes the conversation if the user types “exit” or “quit”.

import asyncio
import traceback
import sys

# Your async main function
async def main() -> None:
  """Main application loop"""

  # Create agent
  agent = await create_agent()

  # Main interaction loop with user input
  for prompt in reader:
      # Exit condition
      if prompt.strip().lower() in {"exit", "quit"}:
          reader.write("Session ended by user. Goodbye! 👋\n")
          break

      # Run agent with the prompt
      try:
          response = await agent.run(
              prompt=prompt,
              execution=AgentExecutionConfig(max_retries_per_step=3, total_max_retries=10, max_iterations=20),
          ).observe(observer)

          reader.write("Agent 🤖 : ", response.result.text)
      except Exception as e:
          reader.write("An error occurred: ", str(e))
          traceback.print_exc()
# Run main() with error handling
try:
  await main()
except FrameworkError as e:
  traceback.print_exc()
  sys.exit(e.explain())

OUTPUT:

Starting MCP client...

Client connected, initializing...

Initialized! Discovering tools...

Tools discovered!

Interactive session has started. To escape, input 'q' and submit.

Agent 🤖 : starting new iteration

Agent(thought) 🤖 : I will use the brave_local_search function to find the open hours for La Taqueria on Mission St in San Francisco.

Agent(tool_name) 🤖 : brave_local_search

Agent(tool_input) 🤖 : {'query': 'La Taqueria Mission St San Francisco'}

Agent(tool_output) 🤖 : [{"annotations": null, "text": "Error: Brave API error: 422 Unprocessable Entity\n{"type":"ErrorResponse","error":{"id":"ddab2628-c96e-478f-80ee-9b5f8b1fda26","status":422,"code":"VALIDATION","detail":"Unable to validate request parameter(s)","meta":{"errors":[{"type":"greater_than_equal","loc":["query","count"],"msg":"Input should be greater than or equal to 1","input":"0","ctx":{"ge":1}}]}},"time":1742589546}", "type": "text"}]

Agent 🤖 : starting new iteration

Agent(thought) 🤖 : The function call resulted in an error. I will try again with a different approach to find the open hours for La Taqueria on Mission St in San Francisco.

Agent(tool_name) 🤖 : brave_local_search

Agent(tool_input) 🤖 : {'query': 'La Taqueria Mission St San Francisco', 'count': 5}

Agent(tool_output) 🤖 : [{"annotations": null, "text": "Title: LA TAQUERIA - Updated May 2024 - 2795 Photos & 4678 Reviews - 2889 Mission St, San Francisco, California - Mexican - Restaurant Reviews - Phone Number - Yelp\nDescription: LA TAQUERIA, <strong>2889 Mission St, San Francisco, CA 94110</strong>, 2795 Photos, Mon - Closed, Tue - Closed, Wed - 11:00 am - 8:45 pm, Thu - 11:00 am - 8:45 pm, Fri - 11:00 am - 8:45 pm, Sat - 11:00 am - 8:45 pm, Sun - 11:00 am - 7:45 pm\nURL: https://www.yelp.com/biz/la-taqueria-san-francisco-2\n\nTitle: La Taqueria: Authentic Mexican Cuisine for Every Taste\nDescription: La Taqueria - <strong>Mexican Food Restaurant</strong> welcomes you to enjoy our delicious. La Taqueria provides a full-service experience in a fun casual atmosphere and fresh flavors where the customer always comes first!\nURL: https://lataqueria.gotoeat.net/\n\nTitle: r/sanfrancisco on Reddit: Whats so good about La Taqueria in The Mission?\nDescription: 182 votes, 208 comments. Don't get me wrong its good but I failed to see the hype. I waited in a long line and once I got my food it just tastes like…\nURL: https://www.reddit.com/r/sanfrancisco/comments/1d0sf5k/whats_so_good_about_la_taqueria_in_the_mission/\n\nTitle: LA TAQUERIA, San Francisco - Mission District - Menu, Prices & Restaurant Reviews - Tripadvisor\nDescription: La Taqueria still going strong. <strong>Historically the most well known Burrito home in the city and Mission District</strong>. Everything is run like a clock. The fillings are just spiced and prepared just right. Carnitas, chicken, asada, etc have true home made flavors. The Tortillas both are super good ...\nURL: https://www.tripadvisor.com/Restaurant_Review-g60713-d360056-Reviews-La_Taqueria-San_Francisco_California.html\n\nTitle: La Taqueria – San Francisco - a MICHELIN Guide Restaurant\nDescription: San Francisco Restaurants · La Taqueria · 4 · <strong>2889 Mission St., San Francisco, 94110, USA</strong> · $ · Mexican, Regional Cuisine · Visited · Favorite · Find bookable restaurants near me · <strong>2889 Mission St., San Francisco, 94110, USA</strong> · $ · Mexican, Regional Cuisine ·\nURL: https://guide.michelin.com/us/en/california/san-francisco/restaurant/la-taqueria", "type": "text"}]

Agent 🤖 : starting new iteration

Agent(thought) 🤖 : I found the open hours for La Taqueria on Mission St in San Francisco. I will provide this information to the user.

Agent(final_answer) 🤖 : La Taqueria, located at 2889 Mission St, San Francisco, CA 94110, has the following opening hours:

- Monday: Closed

- Tuesday: Closed

- Wednesday to Saturday: 11:00 AM - 8:45 PM

- Sunday: 11:00 AM - 7:45 PM

For more details, you can visit their [Yelp page](https://www.yelp.com/biz/la-taqueria-san-francisco-2).

Agent 🤖 : success

Agent 🤖 : success

run.agent.react.finish

Agent 🤖 : La Taqueria, located at 2889 Mission St, San Francisco, CA 94110, has the following opening hours:

- Monday: Closed

- Tuesday: Closed

- Wednesday to Saturday: 11:00 AM - 8:45 PM

- Sunday: 11:00 AM - 7:45 PM

For more details, you can visit their [Yelp page](https://www.yelp.com/biz/la-taqueria-san-francisco-2).

Conclusion, Challenges, and Where MCP is Headed

In this article you’ve seen how MCP can provide a standardized way for agents to discover tools on an MCP server and then interact with them without the developer needing to specify the implementation details of the tool call. The level of abstraction that MCP offers is powerful. It means developers can focus on creating valuable tools while agents can seamlessly discover and use them through standard protocols.

Our restaurant analogy helped us visualize how MCP concepts like the host, client, server, agent, and tools work together, each with its own important role. The code example, where we used a ReAct agent in the beeAI framework (which handles MCP tool calling natively) to call the Brave MCP server with access to two tools, provided a real-world understanding of how MCP can be used in practice.
Without protocols like MCP, we face a fragmented landscape where every AI provider implements their own incompatible tool-calling mechanisms, creating complexity, security vulnerabilities, and wasted development effort.

In the coming months, we’ll likely see MCP gain significant traction for several reasons:

  • As more tool providers adopt MCP, the network effect will accelerate adoption across the industry.
  • Standardized protocols mean better testing, fewer vulnerabilities, and reduced risks as AI systems scale.
  • The ability to write a tool once and have it work across multiple agent frameworks will dramatically reduce development overhead.
  • Smaller players can compete by focusing on building excellent tools rather than reinventing complex agent architectures.
  • Organizations can integrate AI agents more confidently knowing they’re built on stable, interoperable standards.

That said, MCP faces important challenges that need addressing as adoption grows:

  • As demonstrated in our code example, agents can only discover tools once connected to a server.
  • The agent’s functionality becomes dependent on server uptime and performance, introducing additional points of failure.
  • As the protocol evolves, maintaining compatibility while adding new features will require governance.
  • Standardizing how agents access potentially sensitive tools across different servers introduces security considerations.
  • The client-server architecture introduces additional latency.

For developers, AI researchers, and organizations building agent-based systems, understanding and adopting MCP now—while being mindful of these challenges—will provide a significant advantage as more AI solutions begin to scale.


Note: The opinions expressed in this article are solely those of the author and do not necessarily reflect the views or policies of their employer.

Interested in connecting? Drop me a DM on LinkedIn! I'm always eager to engage in food for thought and iterate on my work.
