Agent Challenge ~45 min

Connect Multiple MCP Servers

Your agent currently talks to one MCP server. In the real world, tools live on different servers. Let's teach your agent to discover and use tools from multiple sources — including a live news server we've deployed for you.

What You'll Learn

Multi-Server Discovery

Query multiple MCP servers for their tools

Smart Routing

Route tool calls to the correct server

Remote Services

Connect to external MCP servers over the network

The Remote News Server

We've deployed an MCP-compatible news server that's ready for you to use. It fetches real RSS feeds, summarizes articles with AI, and exposes everything via JSON-RPC 2.0 — exactly like your local MCP server.

News Server Details

  • Hostname: news.hlutur.com
  • IP Address: 128.199.62.215
  • Port: 8002
  • Endpoint: POST /message
  • Protocol: JSON-RPC 2.0

DNS may not be live yet. Use the IP address if the hostname doesn't resolve.

Try it right now — no setup needed:

terminal
# List available tools on the remote news server
curl -X POST "http://128.199.62.215:8002/message" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'

# Search for news about a topic
curl -X POST "http://128.199.62.215:8002/message" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": 2,
    "method": "tools/call",
    "params": {
      "name": "get_news",
      "arguments": {"topic": "artificial intelligence", "language": "en"}
    }
  }'

# Get today's top headlines
curl -X POST "http://128.199.62.215:8002/message" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": 3,
    "method": "tools/call",
    "params": {
      "name": "get_headlines",
      "arguments": {"language": "en", "count": 3}
    }
  }'

The server returns real, AI-summarized news articles. It exposes three tools: get_news, get_headlines, and list_sources.
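If you prefer Python over curl, the request body is just a JSON-RPC 2.0 envelope. A minimal helper might look like this (`jsonrpc_request` is our own name for this sketch, not part of any MCP SDK):

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body like the curl examples above."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return json.dumps(body)

# The same get_news call as above, built programmatically:
payload = jsonrpc_request("tools/call", {
    "name": "get_news",
    "arguments": {"topic": "artificial intelligence", "language": "en"},
}, req_id=2)
print(payload)
```

POST that string to `/message` with a `Content-Type: application/json` header and you get the same result as the curl commands.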

The Challenge

Step 1

Understand Current Discovery

Open services/agent/app.py and find where the agent discovers tools. At startup, it calls tools/list on the MCP server and stores the result. Right now it talks to just one server defined by MCP_SERVER_URL.

Your mission: Make the agent discover tools from multiple MCP servers, merge all tools into one list for OpenAI, and route tool calls to the correct server.

Step 2

Configure Multiple Server URLs

Instead of a single MCP_SERVER_URL, support a comma-separated list of servers. Each server gets a name so you can track which tools came from where.

services/agent/app.py
import os

# Support multiple MCP servers
# Format: "name1=url1,name2=url2" or just "url1,url2"
MCP_SERVERS = {}

def parse_mcp_servers():
    """Parse MCP server configuration from environment."""
    # Keep backward compatibility with single server
    single_url = os.getenv("MCP_SERVER_URL")
    if single_url:
        MCP_SERVERS["local"] = single_url

    # Support additional servers
    extra = os.getenv("MCP_EXTRA_SERVERS", "")
    for entry in extra.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if "=" in entry:
            name, url = entry.split("=", 1)
            MCP_SERVERS[name.strip()] = url.strip()
        else:
            MCP_SERVERS[f"server-{len(MCP_SERVERS)}"] = entry

parse_mcp_servers()
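To sanity-check the parsing rules, here is the same logic as a standalone function (a sketch mirroring parse_mcp_servers above, but returning a dict instead of mutating a global):

```python
def parse_servers(single_url: str, extra: str) -> dict:
    """Same parsing rules as parse_mcp_servers, as a pure function."""
    servers = {}
    if single_url:
        servers["local"] = single_url
    for entry in extra.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if "=" in entry:
            # Named entry: "news=http://..."
            name, url = entry.split("=", 1)
            servers[name.strip()] = url.strip()
        else:
            # Unnamed entry gets an auto-generated name
            servers[f"server-{len(servers)}"] = entry
    return servers

result = parse_servers(
    "http://mcp-server:8000",
    "news=http://128.199.62.215:8002",
)
print(result)
# → {'local': 'http://mcp-server:8000', 'news': 'http://128.199.62.215:8002'}
```

Note the split on the first `=` only, so URLs containing `=` in query strings still parse correctly.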

Update docker-compose.yml to add the news server:

docker-compose.yml
travel-agent:
  environment:
    - MCP_SERVER_URL=http://mcp-server:8000
    - MCP_EXTRA_SERVERS=news=http://128.199.62.215:8002  # <-- Add this!
Step 3

Discover Tools From All Servers

Now update the tool discovery to loop through all configured servers. The key insight: you need to remember which server each tool came from, so you can route calls correctly later.

services/agent/app.py
from typing import Dict

# Maps tool_name → server_url (so we know where to send calls)
tool_to_server: Dict[str, str] = {}

# All discovered tools (merged from all servers)
all_tools: list = []

async def discover_all_tools():
    """Discover tools from every configured MCP server."""
    global all_tools, tool_to_server
    all_tools = []
    tool_to_server = {}

    for name, url in MCP_SERVERS.items():
        try:
            logger.info(f"Discovering tools from {name} ({url})...")
            async with httpx.AsyncClient() as client:
                resp = await client.post(
                    f"{url}/message",
                    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
                    timeout=10.0
                )
            tools = resp.json()["result"]["tools"]

            for tool in tools:
                tool_name = tool["name"]
                if tool_name in tool_to_server:
                    logger.warning(f"Duplicate tool '{tool_name}' from {name}, skipping")
                    continue
                tool_to_server[tool_name] = url
                all_tools.append(tool)

            logger.info(f"Found {len(tools)} tools from {name}")
        except Exception as e:
            logger.error(f"Failed to discover tools from {name} ({url}): {e}")

    logger.info(f"Total tools discovered: {len(all_tools)}")

Key pattern: The tool_to_server dictionary is your routing table. When OpenAI says "call get_news", you look up which server owns that tool and forward the request there.
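After discovery runs against both servers, the routing table might look like this snapshot (get_weather_forecast is on your local server; the three news tools were listed above):

```python
# Illustrative snapshot of the routing table after discovery
tool_to_server = {
    # from the local MCP server
    "get_weather_forecast": "http://mcp-server:8000",
    # from the remote news server
    "get_news": "http://128.199.62.215:8002",
    "get_headlines": "http://128.199.62.215:8002",
    "list_sources": "http://128.199.62.215:8002",
}

# Routing is a plain dictionary lookup:
print(tool_to_server["get_news"])
# → http://128.199.62.215:8002
```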

Step 4

Route Tool Calls to the Right Server

Find where the agent handles tool calls from OpenAI. Instead of always sending to the same server, look up the correct server from tool_to_server.

services/agent/app.py
async def call_mcp_tool(tool_name: str, arguments: dict) -> str:
    """Call an MCP tool on the correct server."""
    server_url = tool_to_server.get(tool_name)

    if not server_url:
        return f"Error: Unknown tool '{tool_name}'. Available: {list(tool_to_server.keys())}"

    logger.info(f"Calling {tool_name} on {server_url}")

    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"{server_url}/message",
            json={
                "jsonrpc": "2.0",
                "id": 1,
                "method": "tools/call",
                "params": {
                    "name": tool_name,
                    "arguments": arguments
                }
            },
            timeout=30.0
        )

    result = resp.json()["result"]
    return result["content"][0]["text"]

That's it. The routing is simple because every MCP server speaks the exact same protocol. The agent doesn't need to know whether a tool is local or remote — same JSON-RPC call either way.

Step 5

Handle Server Failures Gracefully

When you connect to remote servers, things will sometimes fail: network issues, server downtime, slow responses. Your agent should handle these gracefully without crashing.

services/agent/app.py
async def call_mcp_tool_safe(tool_name: str, arguments: dict) -> str:
    """Call an MCP tool with error handling."""
    try:
        return await call_mcp_tool(tool_name, arguments)
    except httpx.ConnectError:
        server = tool_to_server.get(tool_name, "unknown")
        return f"The server hosting '{tool_name}' ({server}) is unreachable. It may be down temporarily."
    except httpx.ReadTimeout:
        return f"The tool '{tool_name}' took too long to respond. Try again shortly."
    except Exception as e:
        logger.error(f"Tool call failed: {tool_name} - {e}")
        return f"Error calling {tool_name}: {str(e)}"

Why this matters: If the news server is down, the agent should still be able to answer weather questions. OpenAI will see the error message and explain to the user what happened.
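You can exercise the fallback path in isolation with a stub that always raises. ConnectError here is a local stand-in for httpx.ConnectError, so this sketch runs without a network or the httpx dependency:

```python
import asyncio

class ConnectError(Exception):
    """Local stand-in for httpx.ConnectError in this self-contained sketch."""

async def call_mcp_tool(tool_name: str, arguments: dict) -> str:
    # Simulate a remote server that is down
    raise ConnectError("connection refused")

async def call_mcp_tool_safe(tool_name: str, arguments: dict) -> str:
    try:
        return await call_mcp_tool(tool_name, arguments)
    except ConnectError:
        return f"The server hosting '{tool_name}' is unreachable. It may be down temporarily."

message = asyncio.run(call_mcp_tool_safe("get_news", {"topic": "AI"}))
print(message)
# → The server hosting 'get_news' is unreachable. It may be down temporarily.
```

The agent gets a readable string instead of an exception, so the conversation with OpenAI continues.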

Step 6

Test Multi-Server Integration

Rebuild and test that your agent discovers tools from both servers:

terminal
# Rebuild and restart
docker compose build travel-agent && docker compose up -d

# Check the logs — you should see tools from BOTH servers
docker compose logs travel-agent | grep -i "tools\|discover"

# Ask for weather (local MCP server)
curl -X POST "http://localhost:8001/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the weather in Oslo?"}'

# Ask for news (remote news server!)
curl -X POST "http://localhost:8001/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the latest news about AI?"}'

# The big test — ask for BOTH in one query
curl -X POST "http://localhost:8001/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "I am going to Bergen tomorrow. What is the weather and any travel news for Norway?"}'

When the agent answers the last query, it will call get_weather_forecast on your local server AND get_news on the remote news server. Two servers, one seamless response.

Stretch Goal

Add Health-Aware Discovery

Right now, tool discovery happens once at startup. What if a server goes down and comes back? Try adding periodic re-discovery that checks /health on each server and refreshes the tool list when a server recovers.

Hints:

  • Use asyncio.create_task() to run a background loop
  • Check GET /health on each server every 60 seconds
  • Only re-discover if a previously-down server comes back healthy
  • The news server exposes /health at http://128.199.62.215:8002/health
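One way to structure the hints above: keep the recovery decision as a small pure function, and run the polling as a background task. This is a sketch under the assumptions in this challenge; discover_all_tools and MCP_SERVERS come from the earlier steps:

```python
import asyncio

HEALTH_INTERVAL = 60  # seconds between /health checks

# Last known health per server name; assume healthy at startup
server_healthy: dict = {}

def came_back_up(name: str, is_healthy: bool) -> bool:
    """True only when a previously-down server is healthy again."""
    was_healthy = server_healthy.get(name, True)
    server_healthy[name] = is_healthy
    return is_healthy and not was_healthy

async def health_loop(servers: dict, rediscover) -> None:
    """Background task: poll GET /health and refresh tools on recovery."""
    import httpx  # already a dependency of the agent

    while True:
        for name, url in servers.items():
            try:
                async with httpx.AsyncClient() as client:
                    resp = await client.get(f"{url}/health", timeout=5.0)
                healthy = resp.status_code == 200
            except httpx.HTTPError:
                healthy = False
            if came_back_up(name, healthy):
                await rediscover()  # e.g. discover_all_tools()
        await asyncio.sleep(HEALTH_INTERVAL)

# At startup: asyncio.create_task(health_loop(MCP_SERVERS, discover_all_tools))
```

Keeping came_back_up pure makes the recovery logic trivial to unit-test without any network.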

You've Nailed It When...

  • Agent logs show tools discovered from both local and remote servers
  • Weather queries still work (routed to local server)
  • News queries work (routed to remote news server)
  • A single query can use tools from both servers in one response
  • If the remote server is down, weather still works (graceful degradation)