Your agent currently talks to one MCP server. In the real world, tools live on different servers. Let's teach your agent to discover and use tools from multiple sources — including a live news server we've deployed for you.
- Query multiple MCP servers for their tools
- Route tool calls to the correct server
- Connect to external MCP servers over the network
We've deployed an MCP-compatible news server that's ready for you to use. It fetches real RSS feeds, summarizes articles with AI, and exposes everything via JSON-RPC 2.0 — exactly like your local MCP server.
- Host: `news.hlutur.com` (IP: `128.199.62.215`)
- Port: `8002`
- Endpoint: `POST /message`
- Protocol: JSON-RPC 2.0

DNS may not be live yet. Use the IP address if the hostname doesn't resolve.
Try it right now — no setup needed:
```bash
# List available tools on the remote news server
curl -X POST "http://128.199.62.215:8002/message" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'

# Search for news about a topic
curl -X POST "http://128.199.62.215:8002/message" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "get_news",
      "arguments": {"topic": "artificial intelligence", "language": "en"}
    }
  }'

# Get today's top headlines
curl -X POST "http://128.199.62.215:8002/message" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "get_headlines",
      "arguments": {"language": "en", "count": 3}
    }
  }'
```
The server returns real, AI-summarized news articles. It exposes three tools: `get_news`, `get_headlines`, and `list_sources`.
Open services/agent/app.py and find where the agent discovers tools.
At startup, it calls tools/list on the MCP server and stores the result.
Right now it talks to just one server defined by MCP_SERVER_URL.
Your mission: Make the agent discover tools from multiple MCP servers, merge all tools into one list for OpenAI, and route tool calls to the correct server.
Instead of a single MCP_SERVER_URL, support a comma-separated list of servers.
Each server gets a name so you can track which tools came from where.
```python
import os

# Support multiple MCP servers
# Format: "name1=url1,name2=url2" or just "url1,url2"
MCP_SERVERS = {}

def parse_mcp_servers():
    """Parse MCP server configuration from environment."""
    # Keep backward compatibility with single server
    single_url = os.getenv("MCP_SERVER_URL")
    if single_url:
        MCP_SERVERS["local"] = single_url

    # Support additional servers
    extra = os.getenv("MCP_EXTRA_SERVERS", "")
    for entry in extra.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if "=" in entry:
            name, url = entry.split("=", 1)
            MCP_SERVERS[name.strip()] = url.strip()
        else:
            MCP_SERVERS[f"server-{len(MCP_SERVERS)}"] = entry

parse_mcp_servers()
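To sanity-check the naming fallback, here is the same parsing logic inlined as a standalone function (the `10.0.0.5:9000` URL below is a made-up example). Unnamed entries get auto-generated `server-N` names based on how many servers are already registered:

```python
def parse(single_url: str, extra: str) -> dict:
    """Same parsing logic as parse_mcp_servers(), inlined for a standalone demo."""
    servers = {}
    if single_url:
        servers["local"] = single_url
    for entry in extra.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if "=" in entry:
            name, url = entry.split("=", 1)
            servers[name.strip()] = url.strip()
        else:
            servers[f"server-{len(servers)}"] = entry
    return servers

print(parse("http://mcp-server:8000",
            "news=http://128.199.62.215:8002, http://10.0.0.5:9000"))
# → {'local': 'http://mcp-server:8000',
#    'news': 'http://128.199.62.215:8002',
#    'server-2': 'http://10.0.0.5:9000'}
```

One caveat of the split-on-`=` convention: a bare URL that happens to contain `=` (say, in a query string) would be misread as a `name=url` pair, so prefer the named form.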
Update docker-compose.yml to add the news server:
```yaml
travel-agent:
  environment:
    - MCP_SERVER_URL=http://mcp-server:8000
    - MCP_EXTRA_SERVERS=news=http://128.199.62.215:8002  # <-- Add this!
```

Now update the tool discovery to loop through all configured servers. The key insight: you need to remember which server each tool came from, so you can route calls correctly later.
```python
from typing import Dict

import httpx  # app.py already imports httpx and sets up `logger`

# Maps tool_name → server_url (so we know where to send calls)
tool_to_server: Dict[str, str] = {}

# All discovered tools (merged from all servers)
all_tools: list = []

async def discover_all_tools():
    """Discover tools from every configured MCP server."""
    global all_tools, tool_to_server
    all_tools = []
    tool_to_server = {}

    for name, url in MCP_SERVERS.items():
        try:
            logger.info(f"Discovering tools from {name} ({url})...")
            async with httpx.AsyncClient() as client:
                resp = await client.post(
                    f"{url}/message",
                    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
                    timeout=10.0,
                )
            tools = resp.json()["result"]["tools"]
            for tool in tools:
                tool_name = tool["name"]
                if tool_name in tool_to_server:
                    logger.warning(f"Duplicate tool '{tool_name}' from {name}, skipping")
                    continue
                tool_to_server[tool_name] = url
                all_tools.append(tool)
            logger.info(f"Found {len(tools)} tools from {name}")
        except Exception as e:
            logger.error(f"Failed to discover tools from {name} ({url}): {e}")

    logger.info(f"Total tools discovered: {len(all_tools)}")
```
Key pattern: The tool_to_server dictionary is your routing table.
When OpenAI says "call get_news", you look up which server owns that tool and forward the request there.
Find where the agent handles tool calls from OpenAI. Instead of always sending to the same server,
look up the correct server from tool_to_server.
```python
async def call_mcp_tool(tool_name: str, arguments: dict) -> str:
    """Call an MCP tool on the correct server."""
    server_url = tool_to_server.get(tool_name)
    if not server_url:
        return f"Error: Unknown tool '{tool_name}'. Available: {list(tool_to_server.keys())}"

    logger.info(f"Calling {tool_name} on {server_url}")
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"{server_url}/message",
            json={
                "jsonrpc": "2.0",
                "id": 1,
                "method": "tools/call",
                "params": {"name": tool_name, "arguments": arguments},
            },
            timeout=30.0,
        )
    result = resp.json()["result"]
    return result["content"][0]["text"]
```
That's it. The routing is simple because every MCP server speaks the exact same protocol. The agent doesn't need to know whether a tool is local or remote — same JSON-RPC call either way.
When you connect to remote servers, things will fail sometimes. Network issues, server downtime, slow responses. Your agent should handle these gracefully without crashing.
```python
async def call_mcp_tool_safe(tool_name: str, arguments: dict) -> str:
    """Call an MCP tool with error handling."""
    try:
        return await call_mcp_tool(tool_name, arguments)
    except httpx.ConnectError:
        server = tool_to_server.get(tool_name, "unknown")
        return f"The server hosting '{tool_name}' ({server}) is unreachable. It may be down temporarily."
    except httpx.ReadTimeout:
        return f"The tool '{tool_name}' took too long to respond. Try again shortly."
    except Exception as e:
        logger.error(f"Tool call failed: {tool_name} - {e}")
        return f"Error calling {tool_name}: {str(e)}"
```
Why this matters: If the news server is down, the agent should still be able to answer weather questions. OpenAI will see the error message and explain to the user what happened.
Rebuild and test that your agent discovers tools from both servers:
```bash
# Rebuild and restart
docker compose build travel-agent && docker compose up -d

# Check the logs — you should see tools from BOTH servers
docker compose logs travel-agent | grep -i "tools\|discover"

# Ask for weather (local MCP server)
curl -X POST "http://localhost:8001/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the weather in Oslo?"}'

# Ask for news (remote news server!)
curl -X POST "http://localhost:8001/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the latest news about AI?"}'

# The big test — ask for BOTH in one query
curl -X POST "http://localhost:8001/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "I am going to Bergen tomorrow. What is the weather and any travel news for Norway?"}'
```
When the agent answers the last query, it will call get_weather_forecast on your local server
AND get_news on the remote news server. Two servers, one seamless response.
Right now, tool discovery happens once at startup. What if a server goes down and comes back?
Try adding periodic re-discovery that checks /health on each server
and refreshes the tool list when a server recovers.
Hints:
- Use `asyncio.create_task()` to run a background loop
- Poll `GET /health` on each server every 60 seconds
- The news server exposes `/health` at `http://128.199.62.215:8002/health`