Connecting Your AI to the World: MCP in Action

Part 5 of 5: Making AI Actually Useful with Model Context Protocol

Welcome to our final article! We’ve built RAG systems, fine-tuned models, and created custom AI from scratch. Now comes the most exciting part — making your AI actually useful in the real world. Enter Model Context Protocol (MCP), the bridge between AI and everything else.

What is MCP and Why Should You Care?

Think about how frustrating it is when you ask ChatGPT to “check my calendar” or “send this email” and it says “I can’t do that.” That’s because these AI models live in isolation — they can think and write, but they can’t actually interact with your tools, databases, and applications in a standardized way.

Model Context Protocol (MCP), developed by Anthropic, solves this fundamental problem. It’s not just about giving your AI “hands and eyes” — it’s about creating a standardized way for AI models to discover, connect to, and interact with external resources and tools.

Here’s the key insight: instead of every developer building their own custom tool integrations (which all work differently), MCP creates a universal standard. Think of it like USB ports for AI tools — any MCP-compatible tool can work with any MCP-compatible AI application.

Let me show you with a real example I built: CmdGenie, a tool I created for terminal commands that demonstrates MCP principles in action.

CmdGenie is a command-line tool that converts your natural language requests into actual terminal commands. While it’s not technically an MCP server itself, it demonstrates the core MCP philosophy: AI connected to your actual system through standardized interfaces.

Instead of googling “how to find large files in Linux” and copying terminal commands, you just say:

cmdgenie "find all files larger than 100MB in my home directory"

And it generates and offers to execute:

find ~/ -type f -size +100M -exec ls -lh {} \;

Understanding MCP Architecture

Before diving deeper into CmdGenie, let’s understand what MCP actually is:

MCP Servers — Applications that provide tools and resources that AI can use. Think of them as specialized service providers.

MCP Clients — AI applications that connect to MCP servers to discover and use available capabilities.

Transports — The communication layer (typically stdio or Server-Sent Events) that allows clients and servers to talk.
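To make the transport layer concrete, here is a stdlib-only sketch of the kind of message MCP exchanges: the protocol uses JSON-RPC 2.0, and over stdio the client writes requests like this to the server process's stdin (SSE carries the same messages over HTTP).

```python
import json

# MCP messages are JSON-RPC 2.0. A client asking a server to enumerate
# its tools sends the standard "tools/list" method:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}
wire_message = json.dumps(request)  # what actually travels over the transport
```

The server replies with a JSON-RPC response carrying the tool schemas, which is how discovery works the same way regardless of transport.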

Here’s the crucial part: The AI model runs on the client side, but it can discover and call tools that live on various MCP servers. This creates a distributed ecosystem where:

  • Tools are standardized and reusable
  • Security is maintained (servers run in isolation)
  • Capabilities can be shared across applications
  • Each server can implement fine-grained access controls

The CmdGenie Architecture: MCP Principles Applied

Here’s what happens when you use CmdGenie, demonstrating key MCP concepts:

1. Context Discovery and Standardization

const systemContext = {
  platform: os.platform(), // Windows, Mac, Linux
  currentDir: process.cwd(),
  availableCommands: checkAvailableCommands(),
  userPreferences: loadUserConfig()
};

In true MCP fashion, the system discovers available capabilities and context. The AI knows what system it’s running on and adapts accordingly — ask for “process list” on Windows, get tasklist; ask on Linux, get ps aux.

2. Multi-Provider Standardization

const providers = {
  'openai':    { defaultModel: 'gpt-3.5-turbo' },
  'anthropic': { defaultModel: 'claude-3-haiku-20240307' },
  'google':    { defaultModel: 'gemini-pro' },
  'cohere':    { defaultModel: 'command' }
};

Just like MCP isn’t tied to one specific AI model, CmdGenie demonstrates provider flexibility. You can switch between different AI providers based on your needs, costs, or performance requirements.

3. Safe Execution with Human-in-the-Loop

console.log(`\n💡 Generated command: ${command}`);
rl.question('\n🚀 Execute this command? (y/N): ', async (answer) => {
  if (answer.toLowerCase() === 'y') {
    try {
      const { stdout, stderr } = await execAsync(command);
      if (stdout) console.log(stdout);
      if (stderr) console.error(stderr);
    } catch (error) {
      console.error('❌ Execution error:', error.message);
    }
  }
});

This demonstrates a crucial MCP principle: AI suggests, humans approve. The client (your terminal) remains in control of what actually gets executed.

Building a Real MCP Server

While CmdGenie shows MCP principles, let’s look at how you’d build an actual MCP server. Here’s a simple mathematical operations server:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="mcp-server", host="0.0.0.0", port=8000)

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="sse")

The beautiful thing about MCP is standardization:

  • The @mcp.tool() decorator automatically generates proper tool schemas
  • Any MCP client can discover and use these tools
  • The server runs independently and can serve multiple clients
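To see what that standardization looks like, here is a rough stdlib-only sketch of how a decorator like @mcp.tool() can derive a JSON schema from a function's type hints (FastMCP's actual generated schema may differ in detail):

```python
import inspect
from typing import get_type_hints

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Map Python annotations to JSON Schema type names.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Derive a tool schema from a function signature (illustrative sketch)."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "inputSchema": {
            "type": "object",
            "properties": {n: {"type": PY_TO_JSON[t]} for n, t in hints.items()},
            "required": list(hints),
        },
    }

schema = tool_schema(add)
```

Because every server advertises its tools in this shared shape, a client never needs server-specific glue code to call them.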

Beyond CmdGenie: Real MCP Applications

The principles behind CmdGenie apply to countless MCP scenarios:

Email Management Server

@mcp.tool()
async def send_email(to: str, subject: str, body: str):
    """Send an email through the configured email service"""
    # Implementation connects to actual email service
    pass

Calendar Integration Server

@mcp.tool()
async def find_available_slot(duration: int, participants: list):
    """Find available meeting time for given participants"""
    # Implementation connects to calendar service
    pass

File System Operations Server

@mcp.tool()
async def organize_directory(path: str, rules: dict, preview: bool = True):
    """Organize files in directory according to specified rules"""
    # Implementation handles actual file operations
    pass

The Real Power: Contextual AI with Standardized Tools

What makes MCP powerful isn’t just that AI can execute commands — it’s that it provides a standardized way to make contextually aware decisions. CmdGenie demonstrates this contextual awareness:

User says: “Show me what’s taking up space”

  • On Windows: dir /s /-c | sort /r
  • On Linux: du -h --max-depth=1 | sort -hr
  • On macOS: du -h -d 1 | sort -hr

In an MCP ecosystem, this context awareness becomes even more powerful because:

  • Multiple servers can provide context (system info, user preferences, available tools)
  • AI can chain tools from different servers to accomplish complex tasks
  • Everything follows the same standardized protocol
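As a toy illustration of chaining, imagine two servers stubbed as plain dictionaries of callables (hypothetical tool names; a real client would discover and call these over MCP):

```python
# Two hypothetical MCP servers, stubbed as dicts of callables for illustration.
calendar_server = {
    "find_available_slot": lambda duration: "2024-06-03T10:00",
}
email_server = {
    "send_email": lambda to, body: f"sent to {to}: {body}",
}

# The client chains tools across servers: find a slot, then email it out.
slot = calendar_server["find_available_slot"](30)
confirmation = email_server["send_email"]("team@example.com", f"Meeting at {slot}")
```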

Building Your Own MCP Integration

Ready to build in the MCP ecosystem? Here’s the pattern:

1. Define Your MCP Server

from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="my-custom-server")

@mcp.tool()
def my_tool(param: str) -> str:
    """Description of what this tool does"""
    # Your implementation here
    return result

2. Create Context-Aware Tools

@mcp.tool()
def system_aware_command(user_request: str):
    """Generate system-appropriate command for user request"""
    system_info = get_system_context()
    user_prefs = get_user_preferences()
    return generate_command(user_request, system_info, user_prefs)

3. Implement Safety and Validation

@mcp.tool()
def safe_file_operation(operation: str, path: str):
    """Perform file operation with safety checks"""
    if not is_safe_path(path):
        raise ValueError("Path not approved for operations")

    preview = generate_preview(operation, path)
    # In real implementation, you'd have approval mechanism
    return execute_with_safety_checks(operation, path)
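The is_safe_path helper above is left undefined; one possible sandbox-based implementation (an assumption, not the only reasonable policy) resolves the path and requires it to stay inside an approved directory:

```python
import os

# Approved sandbox root; adapt to your own policy.
SANDBOX = "/srv/mcp-sandbox"

def is_safe_path(path: str, sandbox: str = SANDBOX) -> bool:
    """Allow only paths that resolve inside the sandbox directory."""
    real = os.path.realpath(path)  # resolves symlinks and ".." components
    return real == sandbox or real.startswith(sandbox + os.sep)
```

Resolving with realpath before comparing matters: a naive prefix check on the raw string can be escaped with `..` or symlinks.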

4. Connect with MCP Clients

Any MCP-compatible client can discover and use your server:

# Client code (simplified)
async def use_mcp_server():
    session = await connect_to_mcp_server("http://localhost:8000/sse")
    tools = await session.list_tools()

    result = await session.call_tool(
        name="my_tool",
        arguments={"param": "value"}
    )
    return result

MCP Design Principles

From building tools like CmdGenie and understanding MCP architecture, here are the key principles:

Standardization: Tools follow consistent schemas and protocols 
Human-in-the-Loop: AI suggests, humans approve critical actions
Context-Aware: AI adapts to environment, user preferences, and current state 
Safety First: Validate, preview, and limit potentially dangerous operations
Transparency: Show users exactly what the AI is doing
Reversibility: When possible, allow users to undo AI actions 
Learning: Improve based on successes and failures 
Interoperability: Any MCP client can work with any MCP server

Common MCP Pitfalls

Over-Automation: Don’t automate everything. Some tasks should remain manual for safety and oversight.

Context Overload: Too much context can confuse AI. Be selective about what information to include.

Security Blindness: MCP servers have access to sensitive operations. Implement proper security from day one.

Ignoring Standards: The power of MCP comes from standardization. Don’t create custom protocols.

Poor Error Handling: Distributed systems fail. Plan for network issues, server downtime, and edge cases.
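As one concrete mitigation for that last pitfall, a small retry helper with exponential backoff (a generic sketch; call_fn stands in for any client-side tool invocation):

```python
import time

def call_with_retry(call_fn, attempts=3, base_delay=0.1):
    """Retry a flaky call, sleeping exponentially longer between attempts."""
    for i in range(attempts):
        try:
            return call_fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** i))
```

Pair this with timeouts and idempotent tool design so a retried call can't, say, send the same email twice.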

Your MCP Journey Starts Here

You don’t need to build the next Cursor to benefit from MCP. Start small:

Pick One Workflow: Choose something you do repeatedly 
Identify Required Tools: What capabilities would an AI need to help? 
Build or Find MCP Servers: Create simple servers or use existing ones 
Add Safety Rails: Implement approval and validation workflows 
Learn and Iterate: Improve based on real usage

The future of AI isn’t just smarter models — it’s AI that can actually do things in the world through standardized, safe, and reusable protocols. MCP is that protocol, and the ecosystem is just beginning to flourish.

Want to explore the code? Check out CmdGenie here and try it using npm from here.

Wrapping Up Our Journey

We’ve covered a lot of ground in this series:

  • Article 1: Understanding the AI landscape
  • Article 2: Building RAG systems for better information access
  • Article 3: Fine-tuning models for your specific needs
  • Article 4: Creating custom models from scratch
  • Article 5: Connecting AI to the real world with MCP

The common thread? AI is most powerful when it’s connected to your specific context, data, and workflows. Whether through RAG, fine-tuning, custom models, or MCP integrations, the goal is the same: making AI that actually helps you get things done.

The tools exist. The models are available. The only question is: what will you build?

The future of AI isn’t about replacing humans — it’s about augmenting human capabilities with intelligent, contextual, connected systems. And now you know how to build them.

Thanks for going through this article series. I am always open to talking about tech or movies. You can reach out to me on LinkedIn.