MCP Servers

Deploy Model Context Protocol (MCP) servers that provide tools and context to AI agents and Claude clients.

Overview

MCP Servers are deployable endpoints that expose tools and resources via the Model Context Protocol. They enable AI agents to:

  • Query databases and data sources
  • Execute file operations
  • Call external APIs
  • Perform computations
  • Access knowledge bases

Quick Start

import { Tenzro } from '@tenzro/cloud';

const client = new Tenzro({ apiKey: 'your-api-key' });

// Deploy an MCP server connected to your database
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'my-database-server',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: 'your-data-db-id', // PostgreSQL database
  publicAccess: false,
  minInstances: 1,
  maxInstances: 3,
});

console.log('Endpoint:', server.full_endpoint);
// https://my-database-server.mcp.tenzro.network

// Chat with the server (uses AI to call tools)
const response = await client.server.chat(server.deployment_id, {
  message: 'How many users are in the database?',
});

console.log(response.text);
console.log('Tools used:', response.toolCalls);

Data Source Types

MCP servers can connect to different data sources:

Type   Description       Available Tools
vec    Vector database   search_vectors, insert_vectors, get_stats
kev    Key-value store   get_value, set_value, list_keys, increment
data   SQL database      query_database, list_tables, get_schema
file   Object storage    list_files, get_file_url, upload_file
graph  Graph database    query_graph, traverse_graph, mutate_graph
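The mapping above can be mirrored client-side to sanity-check a configuration before deploying. This is an illustrative sketch, not part of the SDK; the `DATA_SOURCE_TOOLS` constant and `toolsFor` helper are names invented here.

```typescript
// Tool lists per data source type, mirroring the table above.
// NOTE: this constant and helper are illustrative, not SDK exports.
type DataSourceType = 'vec' | 'kev' | 'data' | 'file' | 'graph';

const DATA_SOURCE_TOOLS: Record<DataSourceType, string[]> = {
  vec: ['search_vectors', 'insert_vectors', 'get_stats'],
  kev: ['get_value', 'set_value', 'list_keys', 'increment'],
  data: ['query_database', 'list_tables', 'get_schema'],
  file: ['list_files', 'get_file_url', 'upload_file'],
  graph: ['query_graph', 'traverse_graph', 'mutate_graph'],
};

// Look up which tools a server will expose for a given data source type.
function toolsFor(type: DataSourceType): string[] {
  return DATA_SOURCE_TOOLS[type];
}
```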

SDK Reference

Create Server

const server = await client.server.create({
  projectId: string,
  deploymentName: string,
  deploymentDescription?: string,
  aiModel: string,
  aiProvider?: 'google' | 'openai' | 'anthropic',
  dataSourceType?: 'vec' | 'kev' | 'data' | 'file' | 'graph',
  dataSourceId?: string,
  encryptionKeyId?: string,              // For encrypted enclaves
  apiKeys?: string[],                    // Additional auth keys
  publicAccess?: boolean,                // Default: false
  corsOrigins?: string[],                // Allowed CORS origins
  memoryLimitMb?: number,                // Default: 512
  cpuLimit?: number,                     // Default: 1
  maxInstances?: number,                 // Default: 5
  minInstances?: number,                 // Default: 1
  environmentVars?: Record<string, string>,
  mcpConfig?: Record<string, any>,
});

Chat with Server

// Chat interface - AI determines which tools to call
const response = await client.server.chat(deploymentId, {
  message: string,
  conversationId?: string,
  context?: Record<string, any>,
});

// Response includes tool usage
interface ChatResponse {
  success: boolean;
  text: string;
  toolCalls?: ToolCall[];
  usage: TokenUsage;
}
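When logging or auditing which tools a response used, `toolCalls` may be absent if the AI answered without calling any tool. The helper below is an illustrative sketch; the exact shape of a tool call is not documented on this page, so the `toolName` field is an assumption to verify against the SDK's exported types.

```typescript
// Illustrative types mirroring the documented ChatResponse shape.
// NOTE: `toolName` is an assumed field name, not confirmed by this page.
interface ToolCallLike {
  toolName: string;
}

interface ChatResponseLike {
  success: boolean;
  text: string;
  toolCalls?: ToolCallLike[];
}

// List the names of tools the AI invoked, or [] if none were used.
function usedTools(response: ChatResponseLike): string[] {
  return (response.toolCalls ?? []).map((call) => call.toolName);
}
```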

Lifecycle Management

// Start a stopped server
await client.server.start(deploymentId);
// Stop a running server
await client.server.stop(deploymentId);
// Update configuration
await client.server.update(deploymentId, {
  maxInstances: 10,
  memoryLimitMb: 1024,
});
// Delete server
await client.server.delete(deploymentId);

Connecting to Agents

Connect MCP servers to agents for tool access:

// Create MCP server for vector search
const vecServer = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'knowledge-base',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'vec',
  dataSourceId: vecDbId,
});

// Create agent with MCP server access
const agent = await client.agents.create({
  projectId: 'project-id',
  agentName: 'research-assistant',
  aiModel: 'gemini-2.5-pro',
  systemPrompt: 'You are a research assistant with access to a knowledge base.',
  mcpServerIds: [vecServer.deployment_id],
});

await client.agents.activate(agent.agent_id);

// Agent can now search the vector database
const response = await client.agents.chat(agent.agent_id, {
  message: 'Find information about quantum computing',
});

Using with Claude Desktop

Add your MCP server to Claude Desktop's configuration:

{
  "mcpServers": {
    "tenzro-database": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-fetch"],
      "env": {
        "MCP_SERVER_URL": "https://your-server.mcp.tenzro.network",
        "MCP_API_KEY": "your-api-key"
      }
    }
  }
}

Security

Access Control

// Restrict access to specific API keys
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'secure-server',
  aiModel: 'gemini-2.5-flash',
  publicAccess: false,
  apiKeys: ['key-1', 'key-2'], // Only these keys can access
});

// Configure CORS for browser access
await client.server.update(deploymentId, {
  corsOrigins: ['https://your-app.com', 'https://staging.your-app.com'],
});
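Clients calling a restricted server's endpoint directly over HTTP must present one of the configured API keys. The header scheme below (`Authorization: Bearer …`) is an assumption for illustration only, and the helper name is invented here; confirm the expected header against the platform's HTTP reference.

```typescript
// Build request headers for a direct HTTP call to a restricted server.
// NOTE: the `Authorization: Bearer` scheme is an assumption, not
// confirmed by this page -- verify against the platform's HTTP docs.
function buildAuthHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}
```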

Encrypted Enclaves

// Deploy in a secure enclave with encryption
const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'secure-database-server',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: dbId,
  encryptionKeyId: 'your-encryption-key-id', // Uses secure enclave
});

Auto-scaling

Servers automatically scale based on load:

const server = await client.server.create({
  projectId: 'project-id',
  deploymentName: 'scalable-server',
  aiModel: 'gemini-2.5-flash',
  minInstances: 1,    // Always at least 1 instance
  maxInstances: 10,   // Scale up to 10 under load
  memoryLimitMb: 512, // Per instance
  cpuLimit: 1,        // Per instance
});
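Because memory and CPU limits apply per instance, the worst-case footprint is the per-instance limit multiplied by maxInstances. A small sketch of that arithmetic (the helper name is illustrative):

```typescript
// Worst-case memory footprint: per-instance limit x max instance count.
// For the config above: 512 MB x 10 instances = 5120 MB at peak.
function peakMemoryMb(memoryLimitMb: number, maxInstances: number): number {
  return memoryLimitMb * maxInstances;
}
```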

Best Practices

  • Use minimum instances: Set minInstances to 1 for production to avoid cold starts
  • Limit data access: Connect servers to specific databases, not entire projects
  • Monitor metrics: Track error rates and latency for performance optimization
  • Secure access: Disable public access and use API keys for authentication
  • Use enclaves: For sensitive data, deploy in encrypted enclaves
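Taken together, the practices above suggest a baseline production configuration. The object below is a sketch using only parameters documented in the SDK reference; all IDs and keys are placeholders.

```typescript
// A baseline production create() payload reflecting the practices above.
// IDs and keys are placeholders; parameter names are from the SDK reference.
const productionServerConfig = {
  projectId: 'project-id',
  deploymentName: 'prod-database-server',
  aiModel: 'gemini-2.5-flash',
  dataSourceType: 'data',
  dataSourceId: 'your-data-db-id',     // a specific database, not a whole project
  publicAccess: false,                 // disable public access
  apiKeys: ['prod-key'],               // authenticate with API keys
  encryptionKeyId: 'your-key-id',      // encrypted enclave for sensitive data
  minInstances: 1,                     // avoid cold starts in production
  maxInstances: 5,
};
```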

Limits

Resource                  Limit
Max instances             20
Max memory per instance   4 GB
Max CPU per instance      4 cores
Request timeout           5 minutes
Max servers per project   50
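These limits can be enforced client-side before calling create(), so a misconfigured request fails fast locally. The clamp helper below mirrors the table; its name and shape are illustrative, not part of the SDK.

```typescript
// Documented per-instance and per-server platform limits (see table above).
const LIMITS = {
  maxInstances: 20,
  memoryLimitMb: 4096, // 4 GB per instance
  cpuLimit: 4,         // 4 cores per instance
};

// Clamp requested resources to the documented limits.
// NOTE: this helper is illustrative, not an SDK export.
function clampToLimits(cfg: { maxInstances: number; memoryLimitMb: number; cpuLimit: number }) {
  return {
    maxInstances: Math.min(cfg.maxInstances, LIMITS.maxInstances),
    memoryLimitMb: Math.min(cfg.memoryLimitMb, LIMITS.memoryLimitMb),
    cpuLimit: Math.min(cfg.cpuLimit, LIMITS.cpuLimit),
  };
}
```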