# MCP Servers
Deploy Model Context Protocol (MCP) servers that provide tools and context to AI agents and Claude clients.
## Overview
MCP Servers are deployable endpoints that expose tools and resources via the Model Context Protocol. They enable AI agents to:
- Query databases and data sources
- Execute file operations
- Call external APIs
- Perform computations
- Access knowledge bases
## Quick Start

```typescript
import { Tenzro } from '@tenzro/cloud';

const client = new Tenzro({ apiKey: 'your-api-key' });

// Deploy an MCP server connected to your database
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'my-database-server',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: 'your-data-db-id', // PostgreSQL database
  publicAccess: false,
  minInstances: 1,
  maxInstances: 3,
});

console.log('Endpoint:', server.full_endpoint);
// https://my-database-server.mcp.tenzro.network

// Chat with the server (uses AI to call tools)
const response = await client.server.chat(server.deployment_id, {
  message: 'How many users are in the database?',
});

console.log(response.text);
console.log('Tools used:', response.toolCalls);
```
## Data Source Types
MCP servers can connect to different data sources:
| Type | Description | Available Tools |
|---|---|---|
| vec | Vector database | search_vectors, insert_vectors, get_stats |
| kev | Key-value store | get_value, set_value, list_keys, increment |
| data | SQL database | query_database, list_tables, get_schema |
| file | Object storage | list_files, get_file_url, upload_file |
| graph | Graph database | query_graph, traverse_graph, mutate_graph |
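For example, a server backed by a key-value store exposes get_value, set_value, list_keys, and increment, and the chat interface lets the AI choose among them. A minimal sketch, assuming an existing key-value store (the deployment name, store ID, and message are placeholders):

```typescript
// Deploy an MCP server backed by a key-value store
const kvServer = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'session-store', // placeholder name
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'kev',
  dataSourceId: 'your-kev-store-id', // placeholder: an existing key-value store
});

// The AI picks the appropriate tool (here, likely get_value)
const reply = await client.server.chat(kvServer.deployment_id, {
  message: 'What value is stored under the key "feature-flags"?',
});
console.log(reply.text);
```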
## SDK Reference

### Create Server

```typescript
const server = await client.server.create({
  projectId: string,
  deploymentName: string,
  deploymentDescription?: string,
  aiModel: string,
  aiProvider?: 'google' | 'openai' | 'anthropic',
  dataSourceType?: 'vec' | 'kev' | 'data' | 'file' | 'graph',
  dataSourceId?: string,
  encryptionKeyId?: string, // For encrypted enclaves
  apiKeys?: string[], // Additional auth keys
  publicAccess?: boolean, // Default: false
  corsOrigins?: string[], // Allowed CORS origins
  memoryLimitMb?: number, // Default: 512
  cpuLimit?: number, // Default: 1
  maxInstances?: number, // Default: 5
  minInstances?: number, // Default: 1
  environmentVars?: Record<string, string>,
  mcpConfig?: Record<string, any>,
});
```
### Chat with Server

```typescript
// Chat interface - AI determines which tools to call
const response = await client.server.chat(deploymentId, {
  message: string,
  conversationId?: string,
  context?: Record<string, any>,
});

// Response includes tool usage
interface ChatResponse {
  success: boolean;
  text: string;
  toolCalls?: ToolCall[];
  usage: TokenUsage;
}
```
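The ChatResponse above does not include a conversation identifier, so multi-turn usage is sketched here under the assumption that a client-generated conversationId is accepted for threading follow-up messages:

```typescript
import { randomUUID } from 'node:crypto';

// Reuse one identifier so follow-ups land in the same conversation
const conversationId = randomUUID();

const first = await client.server.chat(deploymentId, {
  message: 'List the tables in the database.',
  conversationId,
});

// The follow-up can refer back to the earlier turn
const followUp = await client.server.chat(deploymentId, {
  message: 'Now show me the schema for the first one.',
  conversationId,
});

// Inspect which tools the AI invoked to answer
for (const call of followUp.toolCalls ?? []) {
  console.log('Tool called:', call);
}
```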
### Lifecycle Management

```typescript
// Start a stopped server
await client.server.start(deploymentId);

// Stop a running server
await client.server.stop(deploymentId);

// Update configuration
await client.server.update(deploymentId, {
  maxInstances: 10,
  memoryLimitMb: 1024,
});

// Delete server
await client.server.delete(deploymentId);
```
## Connecting to Agents
Connect MCP servers to agents for tool access:
```typescript
// Create MCP server for vector search
const vecServer = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'knowledge-base',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'vec',
  dataSourceId: vecDbId,
});

// Create agent with MCP server access
const agent = await client.agents.create({
  projectId: 'project-id',
  agentName: 'research-assistant',
  aiModel: 'gemini-2.5-pro',
  systemPrompt: 'You are a research assistant with access to a knowledge base.',
  mcpServerIds: [vecServer.deployment_id],
});

await client.agents.activate(agent.agent_id);

// Agent can now search the vector database
const response = await client.agents.chat(agent.agent_id, {
  message: 'Find information about quantum computing',
});
```
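Since mcpServerIds takes an array, an agent can plausibly be wired to several servers at once and pick tools across all of them; treat the multi-server behavior as an assumption to verify. A sketch reusing vecServer from above plus a second, hypothetical SQL-backed deployment:

```typescript
// Hypothetical second server backed by a SQL database
const sqlServer = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'analytics-db', // placeholder name
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: 'your-data-db-id', // placeholder ID
});

// One agent, two tool sources: vector search plus SQL queries
const analyst = await client.agents.create({
  projectId: 'project-id',
  agentName: 'analyst',
  aiModel: 'gemini-2.5-pro',
  systemPrompt: 'Answer questions using the knowledge base and the analytics database.',
  mcpServerIds: [vecServer.deployment_id, sqlServer.deployment_id],
});
```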
## Using with Claude Desktop

Add your MCP server to Claude Desktop's configuration file (claude_desktop_config.json):

```json
{
  "mcpServers": {
    "tenzro-database": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-fetch"],
      "env": {
        "MCP_SERVER_URL": "https://your-server.mcp.tenzro.network",
        "MCP_API_KEY": "your-api-key"
      }
    }
  }
}
```
## Security

### Access Control

```typescript
// Restrict access to specific API keys
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'secure-server',
  aiModel: 'gemini-2.5-flash',
  publicAccess: false,
  apiKeys: ['key-1', 'key-2'], // Only these keys can access
});

// Configure CORS for browser access
await client.server.update(deploymentId, {
  corsOrigins: ['https://your-app.com', 'https://staging.your-app.com'],
});
```
### Encrypted Enclaves

```typescript
// Deploy in a secure enclave with encryption
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'secure-database-server',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: dbId,
  encryptionKeyId: 'your-encryption-key-id', // Uses secure enclave
});
```
## Auto-scaling

Servers scale automatically between minInstances and maxInstances based on load:

```typescript
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'scalable-server',
  aiModel: 'gemini-2.5-flash',
  minInstances: 1, // Always at least 1 instance
  maxInstances: 10, // Scale up to 10 under load
  memoryLimitMb: 512, // Per instance
  cpuLimit: 1, // Per instance
});
```
## Best Practices
- Use minimum instances: Set minInstances to at least 1 in production to avoid cold starts
- Limit data access: Connect servers to specific databases, not entire projects
- Monitor metrics: Track error rates and latency for performance optimization
- Secure access: Disable public access and use API keys for authentication
- Use enclaves: For sensitive data, deploy in encrypted enclaves (see the sketch after this list)
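A configuration sketch that combines these practices; the IDs and key names are placeholders, and the resource values are assumptions to tune for your workload:

```typescript
const prodServer = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'prod-orders-server', // placeholder name
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: 'orders-db-id', // one specific database, not the whole project
  encryptionKeyId: 'prod-encryption-key', // encrypted enclave for sensitive data
  publicAccess: false, // require authentication
  apiKeys: ['prod-service-key'], // only this key can access
  minInstances: 1, // keep a warm instance to avoid cold starts
  maxInstances: 5,
});
```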
## Limits
| Resource | Limit |
|---|---|
| Max instances | 20 |
| Max memory per instance | 4 GB |
| Max CPU per instance | 4 cores |
| Request timeout | 5 minutes |
| Max servers per project | 50 |