🔌 Model Context Protocol (MCP) Interview Questions

Understanding MCP architecture, servers, tools, and integration with AI applications

⚡

15-Minute MCP Cheatsheet

Quick reference for last-minute interview preparation

🔌 What is MCP?

Model Context Protocol - Open protocol by Anthropic
Standardizes how AI apps provide context to LLMs
Solves the N×M integration problem
Uses JSON-RPC 2.0 over stdio or HTTP/SSE
Enables AI to access external data/tools

πŸ—οΈ Architecture

Host: AI application (Claude Desktop, IDE)
Client: Protocol client inside host
Server: Exposes tools/resources to clients
Transport: stdio or HTTP with SSE
One host connects to multiple servers

🧱 Core Primitives

Tools: Functions LLM can invoke (actions)
Resources: File-like data to read (context)
Prompts: Pre-written message templates
Sampling: Server requests LLM completion
Each has list + read/call handlers

💻 Server Implementation

@modelcontextprotocol/sdk - TypeScript SDK
mcp - Python SDK
Define capabilities in server constructor
Set request handlers for each primitive
Connect via StdioServerTransport

🔧 Tool Definition

name: Unique tool identifier
description: What the tool does
inputSchema: JSON Schema for parameters
Return content array with text/image/etc.
Set isError: true on failure
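
As a quick illustration, a hypothetical get_weather tool definition and its call results look roughly like this (field names follow the MCP tool schema; the surrounding wiring depends on which SDK you use):

```typescript
// Hypothetical tool definition, as returned from a tools/list handler
const getWeatherTool = {
  name: "get_weather",                       // unique identifier
  description: "Get the current weather for a city",
  inputSchema: {                             // JSON Schema for the parameters
    type: "object",
    properties: {
      city: { type: "string", description: "City name" },
    },
    required: ["city"],
  },
};

// Successful tools/call result: a content array of text/image/etc. parts
const okResult = {
  content: [{ type: "text", text: "18°C, partly cloudy" }],
};

// Failed call: same shape, with isError set so the model can react
const errorResult = {
  content: [{ type: "text", text: "Error: city not found" }],
  isError: true,
};
```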

⚙️ Configuration

claude_desktop_config.json
Define mcpServers object
Specify command and args
Pass secrets via env variables
Restart Claude Desktop to apply

📦 Popular MCP Servers

@modelcontextprotocol/server-filesystem - File access
@modelcontextprotocol/server-github - GitHub API
@modelcontextprotocol/server-postgres - Database
@modelcontextprotocol/server-slack - Slack integration
@modelcontextprotocol/server-puppeteer - Web automation
@modelcontextprotocol/server-memory - Persistent memory

⚠️ Key Interview Points

• MCP is a protocol, not an implementation
• Servers are stateless between requests
• Tools are for actions, Resources for data
• Security: validate inputs, limit permissions
• Error handling is critical for reliability
• Debug with logging to stderr (not stdout)

Model Context Protocol (MCP) is an open protocol created by Anthropic that standardizes how AI applications provide context to Large Language Models (LLMs).

Key Problems MCP Solves:

  • Fragmentation: Each AI application builds custom integrations for every data source
  • Context Access: LLMs need access to external data, tools, and resources
  • Standardization: No standard way to connect AI models to data sources
  • Scalability: the N×M integration problem (N apps × M data sources); for example, 5 apps and 8 data sources would otherwise need 40 bespoke integrations, whereas MCP needs only 5 clients and 8 servers

MCP Architecture:

  • MCP Hosts: AI applications (Claude Desktop, IDEs) that want to access context
  • MCP Clients: Protocol clients within hosts that connect to servers
  • MCP Servers: Lightweight programs that expose data/tools to clients
  • Local Data Sources: Databases, files, APIs that servers connect to
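
To make these roles concrete, here is a minimal sketch of the client side, assuming the official TypeScript SDK and a hypothetical weather server launched as a subprocess; the host creates one client per server and talks to it over the stdio transport:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Transport: spawn the server as a child process and exchange JSON-RPC over stdio
const transport = new StdioClientTransport({
  command: "node",
  args: ["./build/weather-server.js"],   // hypothetical server entry point
});

// Client: lives inside the host (e.g. Claude Desktop), one per server connection
const client = new Client(
  { name: "example-host", version: "1.0.0" },
  { capabilities: {} }
);

await client.connect(transport);

// Discover the server's tools, then call one on the model's behalf
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "get_weather",
  arguments: { city: "Berlin" },
});
console.log(result.content);
```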

MCP servers expose tools (functions) and resources (data) that AI models can use. Servers communicate with clients using JSON-RPC 2.0 over stdio or HTTP with Server-Sent Events (SSE).

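A minimal server using the low-level Server API might look like the sketch below; the get_weather tool and its canned response are invented for illustration:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Declare the server's identity and which primitives it supports
const server = new Server(
  { name: "weather-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// tools/list: advertise the tools the LLM may invoke
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_weather",
      description: "Get the current weather for a city",
      inputSchema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  ],
}));

// tools/call: execute the requested tool and return a content array
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { city } = request.params.arguments as { city: string };
  // A real server would call a weather API here; this is a canned answer
  return {
    content: [{ type: "text", text: `Weather in ${city}: 18°C, partly cloudy` }],
  };
});

// Transport: communicate with the client over stdin/stdout
await server.connect(new StdioServerTransport());
```

Note that any logging should go to stderr (console.error), since stdout carries the JSON-RPC stream.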

MCP servers are configured in Claude Desktop's configuration file, allowing Claude to access external tools and data sources.

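A typical configuration might look like the following (paths and tokens are placeholders; on macOS the file usually lives at ~/Library/Application Support/Claude/claude_desktop_config.json):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```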

After configuration:

  1. Restart Claude Desktop
  2. Look for the 🔌 icon in the chat interface
  3. Click to see available MCP servers and their tools
  4. Claude can now use these tools automatically during conversations

Custom MCP servers can integrate any external API, making their functionality available to AI applications. Here's an example that integrates with a REST API.

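The sketch below assumes the low-level TypeScript SDK and an invented REST endpoint (https://api.example.com) with a bearer token supplied through the environment; it illustrates the pattern rather than a production server:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const API_BASE = "https://api.example.com";      // hypothetical REST API
const API_KEY = process.env.EXAMPLE_API_KEY;     // secret injected via the config's env block

const server = new Server(
  { name: "example-api-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "search_orders",
      description: "Search orders in the example REST API",
      inputSchema: {
        type: "object",
        properties: { query: { type: "string", description: "Search text" } },
        required: ["query"],
      },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "search_orders") {
    return { content: [{ type: "text", text: "Unknown tool" }], isError: true };
  }
  try {
    const { query } = request.params.arguments as { query: string };
    const res = await fetch(`${API_BASE}/orders?q=${encodeURIComponent(query)}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    if (!res.ok) throw new Error(`API returned ${res.status}`);
    const data = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
  } catch (err) {
    // Surface failures to the model instead of crashing the server
    return { content: [{ type: "text", text: String(err) }], isError: true };
  }
});

await server.connect(new StdioServerTransport());
```

The matching Claude Desktop entry would point at the built server and pass the secret through env:

```json
{
  "mcpServers": {
    "example-api": {
      "command": "node",
      "args": ["/absolute/path/to/example-api-server/build/index.js"],
      "env": {
        "EXAMPLE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```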

Interview Tips for MCP