AI: Creating a World Time MCP‑Server and Agent with a Local Ollama LLM (llama3.1:latest) on Linux

As the world of AI is progressing very fast, it is time to dive into it. I love the idea of combining LLMs with tools that execute actions or create context data. This is a first approach using a locally running LLM.

In this tutorial, we’ll build a Linux-based system where a custom MCP‑Server provides the current time for various cities worldwide.
The Agent, powered by a locally installed Ollama LLM (llama3.1:latest), will interact with users to fetch time data from the server.
We'll also use UV, a modern Python package manager, instead of Pip for dependency management.


What You’ll Build

  • An MCP‑Server: Exposes a tool called get_time(), which accepts a city name and returns its local time.
  • An Agent: Uses the MCP‑Server alongside a locally installed Ollama LLM to process and respond to queries.
  • Local Setup (without Docker): Install Ollama directly on Linux.
  • Dependency Management via UV: Use UV for package installation (uv add) instead of traditional pip commands.
  • Creating the Project with UV: Use uv init and uv venv to properly set up the virtual environment.
  • Importance of Clear Tool Descriptions: Ensure the Agent correctly selects the right tool with well-crafted descriptions.

Installing Ollama Locally on Linux

Instead of using Docker, we install Ollama natively on Linux.

1️⃣ Install Ollama

Download and install Ollama directly:

curl -fsSL https://ollama.com/install.sh | bash

Verify installation:

ollama --version

2️⃣ Pull the Llama Model

Once Ollama is installed, pull the llama3.1:latest model:

ollama pull llama3.1:latest

Now, Ollama is installed locally and ready to process queries!
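
Optionally, give the model a quick smoke test from the command line (the prompt below is just an example):

ollama run llama3.1:latest "Answer with one short sentence: what can you do?"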


Creating the Python Project with UV

1️⃣ Install UV

First, ensure UV is installed using pip:

pip install uv

Verify installation:

uv --version

2️⃣ Initialize the Project with UV

Navigate to your desired project directory and initialize a UV-based Python project:

mkdir world-time-agent
cd world-time-agent
uv init
uv venv

Activate the virtual environment:

source .venv/bin/activate

3️⃣ Install Dependencies Using UV

Install all necessary packages without pinning versions (the quotes around "mcp[cli]" keep the shell from interpreting the brackets as a glob pattern):

uv add nest-asyncio llama-index llama-index-llms-ollama llama-index-tools-mcp "mcp[cli]" pytz

These packages provide:

  • nest-asyncio → Handles nested event loops.
  • llama-index → Core LlamaIndex framework used to build the Agent.
  • llama-index-llms-ollama → LLM interface for talking to the local Ollama server.
  • llama-index-tools-mcp → Enables communication between MCP‑Server and Agent.
  • mcp[cli] → Provides MCP functionality with CLI support.
  • pytz → Provides timezone support for the MCP‑Server.
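
UV records these dependencies in the project's pyproject.toml. As a rough sketch (the exact version specifiers depend on when you run uv add), the relevant section will look something like this:

[project]
name = "world-time-agent"
version = "0.1.0"
dependencies = [
    "nest-asyncio",
    "llama-index",
    "llama-index-llms-ollama",
    "llama-index-tools-mcp",
    "mcp[cli]",
    "pytz",
]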

4️⃣ Create the MCP‑Server and Agent Files

Now let's create the necessary Python scripts inside the project.

Building the MCP‑Server

Now, we define an MCP‑Server using FastMCP.
Notice the well-structured tool description—this ensures the Agent understands it correctly.

Open an editor and create the file mcp_server.py:

from mcp.server.fastmcp import FastMCP
import logging
import argparse
from datetime import datetime
import pytz

# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Create the MCP server with identifier "world-time"
app = FastMCP("world-time")

@app.tool()
def get_time(city: str) -> str:
    """
    Fetches the current local time for a given city.

    Args:
        city: City name (Supported: New York, Berlin, Tokyo, London, Sydney, Moscow)

    Returns:
        A formatted string displaying the city's current time.

    Description:
        This tool matches a city name to its timezone and retrieves its current local time.
        The Agent uses this tool to correctly answer queries like "What time is it in Tokyo?"
    """
    city_timezones = {
        "New York": "America/New_York",
        "Berlin": "Europe/Berlin",
        "Tokyo": "Asia/Tokyo",
        "London": "Europe/London",
        "Sydney": "Australia/Sydney",
        "Moscow": "Europe/Moscow"
    }
    if city not in city_timezones:
        return f"City not supported. Supported cities: {', '.join(city_timezones.keys())}"
    
    tz = pytz.timezone(city_timezones[city])
    local_time = datetime.now(tz).strftime("%Y-%m-%d %H:%M:%S")
    return f"Current time in {city} is {local_time}"

if __name__ == "__main__":
    print("🚀 Starting world-time MCP‑Server...")
    parser = argparse.ArgumentParser()
    parser.add_argument("--server_type", type=str, default="sse", choices=["sse", "stdio"])
    args = parser.parse_args()
    app.run(args.server_type)

Start the MCP‑Server

Run the MCP‑Server:

python mcp_server.py --server_type sse

Now, the server is running and listening for tool calls! With the SSE transport, FastMCP listens on port 8000 by default, which matches the URL the Agent will use below.
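
If you want to check the server independently of the Agent, a small standalone script can list the tools it exposes. This is only a sketch (the filename quick_check.py is just a suggestion); it reuses the same llama-index MCP client that the Agent below will use:

import asyncio
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def main():
    # Connect to the SSE endpoint exposed by mcp_server.py
    client = BasicMCPClient("http://127.0.0.1:8000/sse")
    tools = await McpToolSpec(client=client).to_tool_list_async()
    # Print each tool's name and description, as registered on the server
    for tool in tools:
        print(tool.metadata.name, "->", tool.metadata.description)

asyncio.run(main())

Running it should print get_time together with the docstring defined above.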


Creating the Agent

Next, we build an Agent that connects to our local MCP‑Server and leverages Ollama’s LLM for message processing.

Open an editor and create the file agent.py:

import nest_asyncio
import asyncio
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
from llama_index.core.agent.workflow import (
    FunctionAgent, 
    ToolCallResult, 
    ToolCall)

from llama_index.core.workflow import Context

# Apply nest_asyncio to allow nested event loops
nest_asyncio.apply()

# Initialize the Ollama LLM
llm = Ollama(model="llama3.1:latest", request_timeout=600.0)
#llm = Ollama(model="mistral:latest", request_timeout=600.0)
Settings.llm = llm

# Create an MCP client
mcp_client = BasicMCPClient("http://127.0.0.1:8000/sse")
mcp_tools = McpToolSpec(client=mcp_client)  # You can also pass a list of allowed tools

async def server_logic():
    tools = await mcp_tools.to_tool_list_async()
    for tool in tools:
        print(tool.metadata.name, tool.metadata.description)

    agent = await get_agent(mcp_tools)

    # create the agent context
    agent_context = Context(agent)        
    
    while True:
        user_input = input("Enter your message: ")
        if user_input == "exit":
            break
        print("User: ", user_input)
        response = await handle_user_message(user_input, agent, agent_context, verbose=True)
        print("Agent: ", response)

SYSTEM_PROMPT = """\
Time Agent
"""

async def get_agent(tools: McpToolSpec):
    tools = await tools.to_tool_list_async()
    agent = FunctionAgent(
        name="Agent",
        description="An agent that can work with Our Database software.",
        tools=tools,
        llm=llm,
        system_prompt=SYSTEM_PROMPT
    )
    return agent


async def handle_user_message(
    message_content: str,
    agent: FunctionAgent,
    agent_context: Context,
    verbose: bool = False,
):
    handler = agent.run(message_content, ctx=agent_context, timeout=600)
    async for event in handler.stream_events():
        if verbose and isinstance(event, ToolCall):
            print(f"Calling tool {event.tool_name} with kwargs {event.tool_kwargs}")
        elif verbose and isinstance(event, ToolCallResult):
            print(f"Tool {event.tool_name} returned {event.tool_output}")

    response = await handler
    return str(response)

# Run the async function to fetch tools
asyncio.run(server_logic())

Now start the Agent in a second terminal, while the MCP‑Server keeps running:

source .venv/bin/activate
python agent.py

Conclusion

With this setup, you've built a Linux-based system using local Ollama and UV for dependency management! 🚀

Your MCP‑Server efficiently handles world time queries, and your Agent selects tools intelligently based on their descriptions.

Now, extend this setup—perhaps add weather tools or calendar integrations—to make something even more powerful! 🚀
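
As a starting point, here is a rough sketch of how a second tool could be registered on the same FastMCP server. The weather values are hardcoded placeholders rather than data from a real API:

@app.tool()
def get_weather(city: str) -> str:
    """
    Returns a (placeholder) weather report for a given city.

    Args:
        city: City name (Supported: Berlin, Tokyo)

    Returns:
        A short string describing the city's current weather.
    """
    # Placeholder data; replace with a call to a real weather API
    fake_weather = {
        "Berlin": "cloudy, 12°C",
        "Tokyo": "sunny, 21°C",
    }
    if city not in fake_weather:
        return f"City not supported. Supported cities: {', '.join(fake_weather.keys())}"
    return f"Current weather in {city}: {fake_weather[city]}"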

Happy coding!