Agent Integration

BringSkills Adaptive Engine (BAE) - Seamless integration with any AI agent.

BringSkills Adaptive Engine (BAE)

BAE is our intelligent system that automatically detects which AI agent is making requests and adapts skill output formats for optimal compatibility. No configuration required - it just works.

How BAE Works

  1. Analyzes request headers, User-Agent, and behavioral patterns
  2. Identifies the AI agent with 95%+ accuracy
  3. Transforms skill output to the agent's preferred format
  4. Returns an optimized response for seamless integration

Supported AI Agents

BAE supports 12 AI coding agents with automatic detection and format optimization: 7 agents support MCP natively, and 5 integrate via HTTP API.

OpenClaw

Multi-channel AI gateway

  • MCP native
  • Tool-use format
  • Memory integration

Claude Code

Anthropic's coding assistant

  • MCP native
  • Artifact format
  • Multi-file support

Cursor

AI-first code editor

  • MCP native
  • Inline suggestions
  • Diff output

Codex (OpenAI)

OpenAI's code model

  • MCP native
  • Function calling
  • TOML config

Windsurf

Codeium's IDE

  • MCP native
  • Cascade format
  • Context awareness

GitHub Copilot

GitHub's AI assistant

  • MCP native
  • VS Code integration
  • Chat mode

Amazon Q

AWS AI assistant

  • MCP native
  • AWS integration
  • Code transform

Aider

CLI pair programming

  • HTTP API
  • Git integration
  • SEARCH/REPLACE

Also supported (via HTTP API): Tabnine, Cody, JetBrains AI, Replit

Agent Detection

BAE uses multiple signals to identify your AI agent:

Token Agent Hint (Primary)

Highest priority; set when creating your API token.

agent_hint: "claude-code"

X-Agent-Type Header (High)

An explicit header in your request.

X-Agent-Type: cursor

User-Agent Analysis (Medium)

Automatic detection from User-Agent string patterns.

User-Agent: Claude-Code/1.0

Behavioral Fingerprinting (Fallback)

Analysis of request patterns, timing, and payload structure (request frequency, header combinations, etc.).

Format Transformation

BAE transforms skill outputs to match each agent's expected format. Here's an example of the same skill output adapted for different agents:

Original Skill Output

{
  "word_count": 42,
  "sentiment": "positive",
  "keywords": ["AI", "integration", "skills"]
}

OpenClaw Format

{
  "tool_result": {
    "success": true,
    "data": {
      "word_count": 42,
      "sentiment": "positive",
      "keywords": ["AI", "integration", "skills"]
    }
  }
}

Claude Code Format

{
  "type": "tool_result",
  "content": [
    {
      "type": "text",
      "text": "Analysis complete:\n- Word count: 42\n- Sentiment: positive\n- Keywords: AI, integration, skills"
    }
  ]
}

Cursor Format

{
  "result": "**Text Analysis**\n\n| Metric | Value |\n|--------|-------|\n| Words | 42 |\n| Sentiment | positive |\n| Keywords | AI, integration, skills |"
}
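The envelopes above can be reproduced with small adapter functions. This is a sketch based only on the example payloads shown here; BAE's actual transformation rules are internal.

```python
def to_openclaw(data: dict) -> dict:
    # Wrap raw skill output in OpenClaw's tool_result envelope.
    return {"tool_result": {"success": True, "data": data}}

def to_claude_code(data: dict) -> dict:
    # Render the fields as a readable text block for Claude Code.
    def fmt(value):
        return ", ".join(map(str, value)) if isinstance(value, list) else str(value)

    lines = [f"- {key.replace('_', ' ').capitalize()}: {fmt(val)}"
             for key, val in data.items()]
    return {
        "type": "tool_result",
        "content": [
            {"type": "text", "text": "Analysis complete:\n" + "\n".join(lines)}
        ],
    }
```

The OpenClaw adapter preserves the structured JSON, while the Claude Code adapter flattens it into prose, matching the two example outputs above.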

Manual Override

🚧 Coming Soon: Manual agent override via headers and query parameters is planned for a future release. Currently, agent detection is automatic based on your API key's configured agent type.

To specify your agent type, configure it when creating your API key in the Dashboard, or update an existing key's agent_type setting.

Response Metadata

Every response includes metadata about the detected agent and applied transformations:

{
  "success": true,
  "result": { ... },
  "meta": {
    "agent_detected": "claude-code",
    "detection_confidence": 0.95,
    "detection_method": "user_agent",
    "format_applied": "anthropic_tool_result",
    "execution_time_ms": 45
  }
}
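Clients can log or surface this metadata when debugging detection. A minimal sketch, assuming the field names shown in the example response above:

```python
def detection_summary(response: dict) -> str:
    # Summarize BAE's detection metadata from a skill response.
    meta = response.get("meta", {})
    return (
        f"{meta.get('agent_detected', 'unknown')} "
        f"({meta.get('detection_confidence', 0.0):.0%} "
        f"via {meta.get('detection_method', 'n/a')})"
    )
```

For the example response above, this yields `claude-code (95% via user_agent)`, which is useful when verifying that your agent is being detected as intended.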