Ploinky Architecture
Technical architecture and implementation details of the Ploinky AI agent deployment system.
System Overview
Ploinky is built as a modular system with clear separation of concerns:
┌───────────────────────────────────────────────────────────┐
│                      User Interface                       │
│      (CLI Commands / Web Console / Chat / Dashboard)      │
└───────────────────────────────────────────────────────────┘
                              │
┌───────────────────────────────────────────────────────────┐
│                     Ploinky CLI Core                      │
│       (Command Handler / Service Manager / Config)        │
└───────────────────────────────────────────────────────────┘
                              │
         ┌────────────────────┼────────────────────┐
         │                    │                    │
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│  Routing Server │  │  Web Services   │  │ Container Mgmt  │
│   (HTTP API)    │  │  (WebTTY/Chat)  │  │ (Docker/Podman) │
└─────────────────┘  └─────────────────┘  └─────────────────┘
         │                    │                    │
┌───────────────────────────────────────────────────────────┐
│                     Agent Containers                      │
│         (Isolated Linux containers per agent)             │
└───────────────────────────────────────────────────────────┘
Key Design Principles
- Isolation First: Every agent runs in its own container
- Workspace Scoped: All configuration is local to the project directory
- Zero Global State: No system-wide installation or configuration
- Git-Friendly: Configuration lives in the .ploinky folder and can be gitignored
- Runtime Agnostic: Supports Docker and Podman transparently
Core Components
CLI Command System (cli/commands/cli.js)
The main entry point that handles all user commands:
// Command routing structure (simplified)
function handleCommand(args) {
  const [command] = args;
  switch (command) {
    case 'add':     // Repository management
    case 'enable':  // Agent/repo activation
    case 'start':   // Workspace initialization
    case 'shell':   // Interactive container access
    case 'webchat': // Web interface launchers
    // ... more commands
  }
}
Service Layer (cli/services/)
| Service | Responsibility |
|---|---|
| workspace.js | Manages the .ploinky directory and configuration |
| docker/ | Container lifecycle management modules (runtime helpers, interactive commands, agent management) |
| repos.js | Repository management and agent discovery |
| agents.js | Agent registration and configuration |
| secretVars.js | Environment variable and secrets management |
| config.js | Global configuration constants |
| help.js | Help system and documentation |
Container Management
Container Lifecycle
Ploinky manages containers with specific naming conventions and lifecycle hooks:
// Container naming convention
function getAgentContainerName(agentName, repoName) {
  const proj = path.basename(process.cwd()).replace(/[^a-zA-Z0-9_.-]/g, '_');
  const wsid = crypto.createHash('sha256')
    .update(process.cwd())
    .digest('hex')
    .substring(0, 6);
  return `ploinky_${proj}_${wsid}_agent_${agentName}`;
}

// Service container (for API endpoints)
function getServiceContainerName(agentName) {
  // Same project/workspace derivation, but with a _service_ segment instead of _agent_
  return `ploinky_${proj}_${wsid}_service_${agentName}`;
}
Volume Mounts
Each container gets specific volume mounts for security:
{
  binds: [
    { source: process.cwd(), target: process.cwd() }, // Workspace
    { source: '/Agent', target: '/Agent' },           // Agent runtime
    { source: agentPath, target: '/code' }            // Agent code
  ]
}
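For illustration, bind descriptors of this shape map directly onto container runtime flags; a minimal sketch (buildVolumeArgs is a hypothetical helper, not Ploinky code):

// Hypothetical helper: turn bind descriptors into docker/podman -v arguments
function buildVolumeArgs(binds) {
  return binds.flatMap(({ source, target }) => ['-v', `${source}:${target}`]);
}
// e.g. buildVolumeArgs(config.binds)
//   -> ['-v', '/home/user/project:/home/user/project', '-v', '/Agent:/Agent', '-v', '/path/to/agent:/code']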
Runtime Detection
Automatically detects and uses available container runtime:
function getRuntime() {
  try {
    execSync('docker --version', { stdio: 'ignore' });
    return 'docker';
  } catch {
    try {
      execSync('podman --version', { stdio: 'ignore' });
      return 'podman';
    } catch {
      throw new Error('No container runtime found');
    }
  }
}
Routing Server
Purpose
The RoutingServer (cli/server/RoutingServer.js) acts as a reverse proxy, routing API requests to appropriate agent containers:
// routing.json structure
{
  "port": 8088,
  "static": {
    "agent": "demo",
    "container": "ploinky_myproject_abc123_service_demo",
    "hostPath": "/path/to/demo/agent"
  },
  "routes": {
    "agent1": {
      "container": "ploinky_myproject_abc123_service_agent1",
      "hostPort": 7001
    },
    "agent2": {
      "container": "ploinky_myproject_abc123_service_agent2",
      "hostPort": 7002
    }
  }
}
Request Flow
- Client sends a request to http://localhost:8088/apis/agent1/method
- RoutingServer extracts the agent name from the path
- Looks up the agent's container port in routing.json
- Proxies the request to http://localhost:7001/api/method
- Returns the response to the client
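A minimal sketch of this flow, assuming Node's built-in http module and the routing.json shape shown above (the handler and file paths are illustrative, not the actual RoutingServer code):

const http = require('http');
const routing = require('./.ploinky/routing.json');

// Illustrative proxy: /apis/<agent>/<method...> -> http://127.0.0.1:<hostPort>/api/<method...>
http.createServer((req, res) => {
  const [, , agent, ...rest] = req.url.split('/');        // ['', 'apis', 'agent1', 'method']
  const route = routing.routes[agent];
  if (!route) { res.writeHead(404); return res.end('unknown agent'); }

  const upstream = http.request({
    host: '127.0.0.1',
    port: route.hostPort,
    path: '/api/' + rest.join('/'),
    method: req.method,
    headers: req.headers
  }, (agentRes) => {
    res.writeHead(agentRes.statusCode, agentRes.headers);  // forward status and headers
    agentRes.pipe(res);                                    // stream the agent's response back
  });
  req.pipe(upstream);                                      // stream the request body to the agent
}).listen(routing.port);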
Static File Serving
The router serves static files from the host filesystem in two ways:
- Static agent root (existing): requests like /index.html map to static.hostPath in routing.json.
- Agent-specific static routing (new): requests like /demo/ui/index.html map to the hostPath of the demo agent from routes.demo.hostPath.
// Static agent root
GET /index.html          → routing.static.hostPath/index.html
GET /assets/app.js       → routing.static.hostPath/assets/app.js

// Agent-specific static routing
GET /demo/ui/index.html  → routing.routes.demo.hostPath/ui/index.html
GET /simulator/app.js    → routing.routes.simulator.hostPath/app.js
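A sketch of the two lookups, assuming the routing.json fields above (resolveStatic is an illustrative name):

const path = require('path');

// Illustrative resolution of a request path to a file on the host
function resolveStatic(urlPath, routing) {
  const [, first, ...rest] = urlPath.split('/');        // '/demo/ui/index.html' -> ['', 'demo', 'ui', ...]
  const agentRoute = routing.routes[first];
  if (agentRoute && agentRoute.hostPath) {
    return path.join(agentRoute.hostPath, ...rest);     // agent-specific static routing
  }
  return path.join(routing.static.hostPath, urlPath);   // static agent root
}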
Blob Storage API
The router exposes a simple blob storage API for large files with streaming upload/download.
// Upload (streaming)
POST /blobs/<agentName>
Headers:
Content-Type: application/octet-stream
X-Mime-Type: text/plain # optional; falls back to Content-Type
X-File-Name: report.pdf # optional; original filename for metadata
Body: raw bytes (streamed)
Response: 201 Created
{ "id": "", "url": "/blobs/<agentName>/", "size": N, "mime": "text/plain", "agent": "", "filename": "report.pdf" }
// Download (streaming, supports Range)
GET /blobs/<agentName>/<id>
HEAD /blobs/<agentName>/<id>
- Streams bytes from <agentWorkspace>/blobs/<id> with metadata from .../blobs/<id>.json
- Sets Content-Type, Content-Length, Accept-Ranges, and supports partial responses (206)
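For example, a client can exercise the contract above with Node 18's built-in fetch (the agent name demo and file names are illustrative):

// Upload a file as a blob, then read it back (Node 18+, ESM)
import fs from 'node:fs';

const res = await fetch('http://localhost:8088/blobs/demo', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/octet-stream',
    'X-Mime-Type': 'application/pdf',
    'X-File-Name': 'report.pdf'
  },
  body: fs.readFileSync('report.pdf')
});
const meta = await res.json();                       // { id, url, size, mime, agent, filename }

const download = await fetch(`http://localhost:8088${meta.url}`);
fs.writeFileSync('copy.pdf', Buffer.from(await download.arrayBuffer()));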
Workspace System
Directory Structure
.ploinky/
├── agents/              # Agent registry (JSON)
├── repos/               # Cloned agent repositories
│   ├── basic/
│   │   ├── shell/
│   │   │   └── manifest.json
│   │   └── node-dev/
│   │       └── manifest.json
│   ├── demo/
│   │   ├── demo/
│   │   └── simulator/
│   └── custom-repo/
├── routing.json         # Router configuration
├── .secrets             # Environment variables
└── running/             # Process PID files
    ├── router.pid
    ├── webtty.pid
    ├── webchat.pid
    └── dashboard.pid
logs/                    # Application logs
└── router.log
Agent Registry (agents/)
JSON file storing enabled agents and their configuration:
{
  "ploinky_project_abc123_agent_demo": {
    "agentName": "demo",
    "repoName": "demo",
    "containerImage": "node:18-alpine",
    "createdAt": "2024-01-01T00:00:00Z",
    "projectPath": "/home/user/project",
    "type": "agent",
    "config": {
      "binds": [...],
      "env": [...],
      "ports": [{"containerPort": 7000}]
    }
  }
}
Configuration Management
Workspace configuration persists across sessions:
// Stored in agents/_config
{
  "static": {
    "agent": "demo",
    "port": 8088
  }
}
Security Model
Container Isolation
- Filesystem: Containers only access current workspace directory
- Network: Isolated network namespace per container
- Process: No access to host processes
- Resources: Can set CPU/memory limits
Secret Management
Environment variables stored in .ploinky/.secrets with aliasing support:
API_KEY=sk-123456789
PROD_KEY=$API_KEY # Alias reference
DATABASE_URL=postgres://localhost/db
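A minimal sketch of how such $NAME aliases could be resolved when the file is loaded (parseSecrets is an illustrative name; the actual secretVars.js logic may differ):

import fs from 'node:fs';

// Illustrative parser for KEY=value lines with $ALIAS references and trailing comments
function parseSecrets(file) {
  const vars = {};
  for (const line of fs.readFileSync(file, 'utf8').split('\n')) {
    const m = line.match(/^([A-Za-z_][A-Za-z0-9_]*)=(.*)$/);
    if (!m) continue;                                   // skip blank lines and comments
    const value = m[2].split('#')[0].trim();
    vars[m[1]] = value.startsWith('$') ? vars[value.slice(1)] : value;
  }
  return vars;
}

// parseSecrets('.ploinky/.secrets') -> { API_KEY: 'sk-123456789', PROD_KEY: 'sk-123456789', ... }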
Web Access Control
- Password protection for web interfaces
- Session-based authentication
- WebSocket token validation
- CORS headers configuration
Web Services Architecture
WebTTY/Console (cli/webtty/)
Provides terminal access through web browser:
// Component structure
server.js // HTTP/WebSocket server
tty.js // PTY management
console.js // Client-side terminal UI
clientloader.js // Dynamic UI loader
WebChat (cli/webtty/chat.js)
Chat interface for CLI programs:
- Captures stdout/stdin through PTY
- WebSocket-based real-time communication
- WhatsApp-style UI with message bubbles
- Automatic reconnection handling
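A minimal sketch of that PTY-to-WebSocket bridge, assuming the node-pty and ws packages (the package choice and wiring are assumptions; the actual chat.js differs in detail):

import pty from 'node-pty';
import { WebSocketServer } from 'ws';

// Spawn the CLI program behind a pseudo-terminal
const shell = pty.spawn('python', ['bot.py'], { cols: 80, rows: 24 });
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  // Program output -> chat bubbles in the browser
  const sub = shell.onData((data) => ws.send(JSON.stringify({ type: 'output', data })));

  // User messages -> program stdin
  ws.on('message', (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type === 'input') shell.write(msg.data + '\n');
  });

  ws.on('close', () => sub.dispose());
});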
Dashboard (dashboard/)
Management interface components:
landingPage.js // Main dashboard UI
auth.js // Authentication
repositories.js // Repo management
configurations.js // Settings management
observability.js // Monitoring views
WebSocket Protocol
// Message types
{ type: 'input', data: 'user command' } // User input
{ type: 'output', data: 'program output' } // Program output
{ type: 'resize', cols: 80, rows: 24 } // Terminal resize
{ type: 'ping' } // Keep-alive
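On the client side these messages can be handled with a plain browser WebSocket; a short sketch (the endpoint path and the term object, e.g. an xterm.js terminal, are illustrative):

// Illustrative browser-side handling of the message types above
const ws = new WebSocket(`ws://${location.host}/ws`);

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'output') term.write(msg.data);        // render program output
};

function sendInput(text) {
  ws.send(JSON.stringify({ type: 'input', data: text })); // user input
}

setInterval(() => ws.send(JSON.stringify({ type: 'ping' })), 30000);   // keep-alive
window.addEventListener('resize', () =>
  ws.send(JSON.stringify({ type: 'resize', cols: term.cols, rows: term.rows })));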
Data Flow Examples
Starting an Agent
1. User: enable agent demo
   → Find manifest in repos/demo/demo/manifest.json
   → Register in .ploinky/agents
   → Generate container name
2. User: start demo 8088
   → Read agents registry
   → Start container for each agent
   → Map ports (container:7000 → host:7001)
   → Update routing.json
   → Start RoutingServer on 8088
3. Container startup:
   → Pull image if needed
   → Mount volumes (workspace, code, Agent)
   → Set environment variables
   → Run agent command or supervisor
API Request Routing
1. Client: GET http://localhost:8088/apis/simulator/monty-hall
2. RoutingServer:
   → Extract agent: "simulator"
   → Lookup in routing.json: hostPort: 7002
   → Proxy to: http://localhost:7002/api/monty-hall
3. Agent Container:
   → Process request
   → Return response
4. RoutingServer:
   → Forward response to client
WebChat Session
1. User: webchat secret python bot.py
2. WebTTY Server:
   → Start PTY with command: python bot.py
   → Create HTTP server on port 8080
   → Serve chat.html interface
3. Browser connects:
   → WebSocket handshake
   → Authenticate with password
   → Establish bidirectional channel
4. Message flow:
   → User types in chat
   → WebSocket → Server → PTY stdin
   → Program output → PTY stdout → WebSocket → Browser
   → Display as chat bubble
Agent MCP Bridge
AgentServer (Agent/server/AgentServer.mjs) exposes its capabilities through the Model Context Protocol (MCP), using the Streamable HTTP transport on the /mcp route of the container port (default 7000).
Router → Agent Communication
- RouterServer abstraction: RouterServer talks to agents through cli/server/AgentClient.js, which wraps MCP transports.
- MCP protocol: AgentClient builds a StreamableHTTPClientTransport towards http://127.0.0.1:<hostPort>/mcp and exposes listTools(), callTool(), listResources(), and readResource() (see the sketch after this list).
- Unified routing: Requests hitting /mcp carry commands such as list_tools, list_resources, or tool. RouterServer fans these calls out to every registered MCP endpoint and aggregates the replies.
- Per-agent routes: Legacy paths like /mcps/<agent> remain available for direct calls when needed.
- Transport independence: RouterServer stays agnostic of protocol details; AgentClient encapsulates the MCP implementation.
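As an illustration of what AgentClient wraps, a direct call to one agent's MCP endpoint with the official SDK might look like this (a sketch assuming @modelcontextprotocol/sdk; hostPort 7001 and the tool name come from the earlier examples):

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

// Connect to a single agent's /mcp endpoint (hostPort comes from routing.json)
const transport = new StreamableHTTPClientTransport(new URL('http://127.0.0.1:7001/mcp'));
const client = new Client({ name: 'router-example', version: '1.0.0' });
await client.connect(transport);

const { tools } = await client.listTools();   // RouterServer aggregates these across agents
const result = await client.callTool({ name: 'list_things', arguments: { category: 'fruits' } });
console.log(tools.map((t) => t.name), result.content);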
Tools and Resources
Agents declare their MCP surface through a JSON file committed alongside the agent source code: .ploinky/repos/<repo>/<agent>/mcp-config.json. When the CLI boots an agent container it copies this file to /tmp/ploinky/mcp-config.json (also keeping /code/mcp-config.json for reference). The file can expose tools, resources, and prompts, and each tool is executed by spawning a shell command. AgentServer does not register anything if the configuration file is missing.
{
  "tools": [
    {
      "name": "list_things",
      "title": "List Things",
      "description": "Enumerate items in a category",
      "command": "node scripts/list-things.js",
      "input": {
        "type": "object",
        "properties": {
          "category": {
            "type": "string",
            "description": "fruits | animals | colors"
          }
        },
        "required": ["category"],
        "additionalProperties": false
      }
    }
  ],
  "resources": [
    {
      "name": "health",
      "uri": "health://status",
      "description": "Service health state",
      "mimeType": "application/json",
      "command": "node scripts/health.js"
    }
  ],
  "prompts": [
    {
      "name": "summarize",
      "description": "Short summary",
      "messages": [
        { "role": "system", "content": "You are a concise analyst." },
        { "role": "user", "content": "${input}" }
      ]
    }
  ]
}
AgentServer pipes a JSON payload to each command via stdin. Tool invocations receive { tool, input, metadata }; resources receive { resource, uri, params }. Command stdout is forwarded to the MCP response, while non-zero exit codes surface as MCP errors.
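For example, the list_things tool from the config above could be backed by a script along these lines (illustrative; only the stdin/stdout contract follows the description above):

// scripts/list-things.js (illustrative): read { tool, input, metadata } from stdin
let raw = '';
process.stdin.on('data', (chunk) => { raw += chunk; });
process.stdin.on('end', () => {
  const { input } = JSON.parse(raw || '{}');
  const catalog = { fruits: ['apple', 'pear'], animals: ['cat', 'dog'], colors: ['red', 'blue'] };
  const items = catalog[input?.category];
  if (!items) {
    process.stderr.write(`unknown category: ${input?.category}`);
    process.exit(1);                                  // non-zero exit surfaces as an MCP error
  }
  process.stdout.write(JSON.stringify({ items }));    // stdout becomes the MCP tool result
});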
MCP Task Queue
Longer running MCP tools are orchestrated through Agent/server/TaskQueue.mjs. Every tool execution is wrapped in a task object that captures the command, input payload, timeout hints, timestamps, and eventual result/error.
- Concurrency guard: AgentServer limits work in flight (default 10, overridable via "maxParallelTasks" in mcp-config.json) and keeps the rest in a FIFO pending queue.
- Durable state: Tasks are stored in $PWD/.tasksQueue. On restart, pending items resume and previously running entries are rewound to pending so they execute again.
- Per-task payloads: The queue injects a unique taskId into the JSON delivered to each command so downstream scripts can correlate logs or offer a status channel.
- Timeout + lifecycle: Tool definitions may specify timeoutMs. The queue arms a timer, kills the underlying process if it runs too long, and marks the task as failed with a timeout message.
- Response capture: Successful executions persist their stdout (and stderr if present) as MCP content results. Failures store stderr/exit codes so RouterServer surfaces meaningful diagnostics back to the caller.
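The core of such a queue can be sketched in a few lines (illustrative only; TaskQueue.mjs adds the durable .tasksQueue state and the richer lifecycle handling described above):

// Minimal FIFO task queue with a concurrency limit and per-task timeout (illustrative)
class MiniTaskQueue {
  constructor(maxParallel = 10) {
    this.maxParallel = maxParallel;
    this.running = 0;
    this.pending = [];
  }

  enqueue(run, timeoutMs) {
    return new Promise((resolve, reject) => {
      this.pending.push({ run, timeoutMs, resolve, reject });
      this.drain();
    });
  }

  drain() {
    while (this.running < this.maxParallel && this.pending.length) {
      const task = this.pending.shift();
      this.running++;
      const timer = task.timeoutMs
        ? setTimeout(() => task.reject(new Error('timeout')), task.timeoutMs)
        : null;
      Promise.resolve()
        .then(task.run)
        .then(task.resolve, task.reject)
        .finally(() => { if (timer) clearTimeout(timer); this.running--; this.drain(); });
    }
  }
}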
Task Status Polling
The Router/CLI side uses Agent/client/MCPBrowserClient.js to follow long-running jobs.
- Polling endpoint: After a task is enqueued, the client hits /mcps/<agent>/task?taskId=... (falling back to /getTaskStatus) every 30 seconds and adds a timestamp query parameter to avoid caching.
- Incremental updates: Each status response (HTTP 200) updates the console only if the task status changed (pending → running → completed/failed). Terminal states stop the poller immediately.
- Error handling: Non-200 responses are logged and the poller keeps retrying (except 404 task not found, which stops polling and reports failure), so status checks continue even across transient outages.
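A simplified version of the polling loop (illustrative; MCPBrowserClient.js also handles the /getTaskStatus fallback and reports richer errors):

// Poll a task every 30 seconds until it reaches a terminal state (illustrative)
async function pollTask(agent, taskId, onStatus) {
  let last = null;
  while (true) {
    const res = await fetch(`/mcps/${agent}/task?taskId=${taskId}&ts=${Date.now()}`); // ts defeats caching
    if (res.status === 404) { onStatus('failed: task not found'); return; }           // stop polling
    if (res.ok) {
      const { status } = await res.json();
      if (status !== last) { onStatus(status); last = status; }
      if (status === 'completed' || status === 'failed') return;                      // terminal state
    }
    await new Promise((resolve) => setTimeout(resolve, 30000));
  }
}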
Performance Considerations
Container Optimization
- Reuse existing containers when possible
- Lazy image pulling
- Shared base layers between agents
- Volume mount caching
Network Efficiency
- Local port mapping avoids network overhead
- HTTP keep-alive for persistent connections
- WebSocket for real-time communication
- Request buffering and batching
Resource Management
- Automatic container cleanup on exit
- PID file tracking for process management
- Log rotation for long-running services
- Memory-efficient streaming for large outputs