Universal Gateway API

Fire Mission's Universal Gateway API consolidates all operations into a single secure endpoint (/api/v1/gateway), reducing attack surface while enabling universal AI cost tracking, security scanning, and intelligent routing across our security-vetted provider list (OpenAI, Anthropic, Google Gemini, Groq, Together AI, and self-hosted LLMs)—without vendor lock-in.

Security Architecture

Single External Endpoint: Fire Mission consolidates 50+ individual endpoints into one Universal Gateway, dramatically reducing attack surface. All operations (AI proxy, metrics, provider management) route through POST /api/v1/gateway with operation-based routing.

Base URL

https://firemission.us

API Version

v1

Format

JSON

Key Features

  • ✓ Transparent proxy for OpenAI, Anthropic, Google Gemini, Groq, Together AI, and custom APIs
  • ✓ Real-time cost tracking and forecasting across all providers
  • ✓ Integrated security scanning for PII, prompt injection, and compliance violations (Paid tiers)
  • ✓ Intelligent MoE routing for cost/speed/quality optimization (SPECIAL RECON+)
  • ✓ RESTful JSON API with session-based authentication

Documentation Notice: All code examples, request/response formats, and values shown in this documentation are illustrative only. They demonstrate how to integrate with the Fire Mission API but are not indicative of internal implementation logic, proprietary algorithms, or actual system behavior.

US & DoW-Allied Provider Framework

firemission.us only allows management and tracking of AI providers from US-based and Department of War (DoW)-allied entities. API endpoints support providers from the United States, NATO member states, and Major Non-NATO Ally (MNNA) nations only.

Authentication

Fire Mission uses session-based authentication with HTTP-only cookies. All API requests must include a valid session cookie obtained through the authentication flow.

Authentication Flow

  1. Login: Navigate to /api/login to authenticate with your Fire Mission account
  2. Session Cookie: Upon successful login, a secure session cookie is set automatically
  3. API Requests: Include the session cookie in all subsequent API requests (see the sketch below)
  4. Logout: Call POST /api/logout to invalidate the session
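
A minimal Python sketch of steps 2 through 4, reusing a session cookie obtained through the login flow. The cookie name matches the curl examples later in this document; the operation shown is illustrative.

import requests

# Reuse the session cookie obtained through the login flow at /api/login
session = requests.Session()
session.cookies.set("session", "YOUR_SESSION_COOKIE", domain="firemission.us")

# Subsequent API requests automatically carry the HTTP-only session cookie
resp = session.post(
    "https://firemission.us/api/v1/gateway",
    json={"operation": "user.me", "params": {}},
)
print(resp.json())

# Logout invalidates the session server-side
session.post("https://firemission.us/api/logout")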

Security Note: Session cookies are HTTP-only and secure. Never expose session cookies in client-side code or logs. Sessions expire after 24 hours of inactivity.

API Key Authentication for External Applications

For external applications, scripts, CI/CD pipelines, or any automated systems, Fire Mission provides API key authentication. This allows your applications to access Fire Mission's AI proxy, cost tracking, and security scanning capabilities programmatically.

Key Benefits

  • One Key to Rule Them All: Your external applications only need a single Fire Mission API key
  • Zero Key Management Overhead: Configure your AI provider keys once in the dashboard
  • Automatic Key Resolution: Fire Mission automatically retrieves your configured AI provider keys
  • Organization Support: Service accounts can use any organization member's configured keys

Getting Started

  1. Navigate to API Key Management in your dashboard
  2. Create a new service account and API key
  3. Configure your AI provider keys in AI Provider Management (one-time setup)
  4. Use your Fire Mission API key in your application's Authorization header

Example: Python

import requests

# Only ONE key needed - your Fire Mission API key
FIRE_MISSION_KEY = "fm_live_abc123..."

response = requests.post(
    "https://your-domain.com/api/v1/gateway",
    headers={"Authorization": f"Bearer {FIRE_MISSION_KEY}"},
    json={
        "operation": "ai.proxy",
        "params": {
            "provider": "openai",
            "endpoint": "/chat/completions",
            "body": {
                "model": "gpt-4",
                "messages": [{"role": "user", "content": "Hello!"}]
            }
        }
    }
)

# Fire Mission automatically uses your configured OpenAI key
# All costs tracked, security scanned, metrics logged
print(response.json())

Flexible Integration: You can either configure AI provider keys once in Fire Mission (zero overhead) OR provide them per-request if you need more control. Both approaches work seamlessly.

Organization Admin Operations

POST

Organization admins can manage API keys, service accounts, and team member access across their entire organization using these gateway operations. All operations require the organization admin role.

šŸ” Admin-Only Access

These operations enforce role-based access control (RBAC). Only users with organizationRole: 'admin' can execute these operations. Regular organization members will receive 403 Forbidden responses.

Available Operations

1. List All Organization Service Accounts

org.service-accounts.list

Retrieves all service accounts created by any member of the organization, providing complete visibility into organizational API key infrastructure.

{
  "operation": "org.service-accounts.list",
  "params": {}
}

Response: Array of all organization service accounts with creator info, creation dates, and associated API keys

2. List All Organization API Keys

org.api-keys.list

Returns all active API keys across all organization members. Critical for security audits and compliance tracking.

{
  "operation": "org.api-keys.list",
  "params": {}
}

Response: Array of all API keys with owner email, key type (live/test), proxy mode, and last used timestamp

3. Create Organization-Level Service Account

org.service-account.create

Creates a service account that belongs to the organization (not an individual member). Ideal for shared CI/CD pipelines or organization-wide applications.

{
  "operation": "org.service-account.create",
  "params": {
    "serviceAccountName": "Production CI/CD Pipeline",
    "name": "prod-pipeline-key",
    "description": "Shared key for GitHub Actions workflows",
    "keyType": "live",
    "proxyMode": "security_enhanced"
  }
}

Response: Service account details with newly generated API key (shown only once)

4. Revoke Any Organization Member's API Key

org.api-key.revoke

Admins can revoke any API key within their organization, including keys created by other members. Essential for offboarding or security incidents.

{
  "operation": "org.api-key.revoke",
  "params": {
    "keyId": "key_abc123def456",
    "reason": "Employee offboarding - terminated 2025-11-05"
  }
}

Response: Confirmation with revocation timestamp and audit log entry ID
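
A minimal Python sketch of the revoke operation during an offboarding, using session-cookie authentication as in the curl example below. The keyId and reason values are illustrative.

import requests

resp = requests.post(
    "https://firemission.us/api/v1/gateway",
    cookies={"session": "YOUR_SESSION_COOKIE"},
    json={
        "operation": "org.api-key.revoke",
        "params": {
            "keyId": "key_abc123def456",
            "reason": "Employee offboarding - terminated 2025-11-05",
        },
    },
)
resp.raise_for_status()
# Confirmation includes the revocation timestamp and audit log entry ID
print(resp.json())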

Full Request Example

curl -X POST https://firemission.us/api/v1/gateway \
  -H "Content-Type: application/json" \
  -b "session=YOUR_SESSION_COOKIE" \
  -d '{
    "operation": "org.api-keys.list",
    "params": {}
  }'

Response Example

{
  "success": true,
  "data": {
    "serviceAccounts": [
      {
        "id": "sa_123456",
        "name": "Production API Gateway",
        "createdBy": "admin@company.com",
        "createdAt": "2025-10-15T08:30:00Z",
        "apiKeys": [
          {
            "id": "key_abc123",
            "name": "prod-gateway-key",
            "keyType": "live",
            "proxyMode": "security_enhanced",
            "lastUsed": "2025-11-05T14:22:33Z",
            "requestCount": 15234
          }
        ]
      }
    ]
  }
}

āš ļø Security Notice

Organization admin operations are logged in compliance audit trails. All key revocations, creations, and access requests are tracked with IP addresses, user agents, and timestamps for regulatory compliance (GDPR/CCPA).

💡 Use Cases

  • Offboarding: Instantly revoke all API keys when an employee leaves
  • Security Audits: Review all active keys across the organization
  • Compliance: Generate reports of API key usage for SOC 2 audits
  • Shared Infrastructure: Create organization-level keys for CI/CD pipelines
  • Cost Tracking: Monitor which teams/members generate AI costs

Rate Limits

API rate limits are tier-based to ensure fair usage and platform stability.

Tier                     | BYOK Requests/Month | API Rate Limit | Security Scanning | MoE Routing
RECON ($75/mo)           | Unlimited           | 60 req/min     | ✓ Mandatory       | ✓ Available
ASYMMETRIC OPS ($175/mo) | Unlimited           | 60 req/min     | ✓ Mandatory       | ✓ Available
FULL SPECTRUM ($275/mo)  | Unlimited           | 120 req/min    | ✓ Mandatory       | ✓ Available

Rate Limit Headers: All responses include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers.
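
For example, a client can read these headers to pace itself. A short sketch assuming API key authentication and the header names documented above; the format of the reset value is not specified here, so it is printed as-is.

import requests

resp = requests.post(
    "https://firemission.us/api/v1/gateway",
    headers={"Authorization": "Bearer fm_live_abc123..."},
    json={"operation": "user.me", "params": {}},
)

# Inspect the rate limit headers returned on every response
limit = int(resp.headers["X-RateLimit-Limit"])
remaining = int(resp.headers["X-RateLimit-Remaining"])
reset = resp.headers["X-RateLimit-Reset"]
print(f"{remaining}/{limit} requests remaining in this window; resets at {reset}")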

Universal Gateway

POST

Fire Mission's Universal Gateway consolidates all operations into a single secure endpoint, dramatically reducing the attack surface from 50+ endpoints to one. All operations (AI proxy, metrics, provider management) are dispatched internally via operation-based routing.

Endpoint

POST /api/v1/gateway

Security Architecture Benefits

  • Reduced Attack Surface: Single external endpoint vs. 50+ individual routes
  • Centralized Security: Authentication, rate limiting, and audit logging at gateway entry
  • Simplified Firewall Rules: One endpoint to whitelist for air-gapped deployments
  • Operation-Based Routing: Flexible internal routing without exposing implementation

Request Format

Field     | Type   | Required | Description
operation | string | Yes      | Operation to execute: ai.proxy, metrics.cost, provider.list, etc.
params    | object | Yes      | Operation-specific parameters (see operation documentation below)
context   | object | No       | Optional context: applicationId, scope (user|organization)
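
A minimal Python helper that builds this request envelope, shown as a sketch with session-cookie authentication; the provider.list call at the end is just one example operation.

import requests

GATEWAY_URL = "https://firemission.us/api/v1/gateway"

def gateway(operation, params, context=None, session_cookie="YOUR_SESSION_COOKIE"):
    """Send a single operation through the Universal Gateway envelope."""
    payload = {"operation": operation, "params": params}
    if context is not None:
        payload["context"] = context  # e.g. {"scope": "user"}
    resp = requests.post(GATEWAY_URL, cookies={"session": session_cookie}, json=payload)
    resp.raise_for_status()
    return resp.json()

# Example: list configured providers
providers = gateway("provider.list", {})
print(providers)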

Supported Operations

AI Operations

  • ai.proxy - Proxy AI requests to providers
  • ai.cost - Calculate costs for requests
  • ai.models - List available models

Metrics Operations

  • metrics.cost - Get cost analytics
  • metrics.rate-limits - Check rate limit status
  • metrics.comprehensive - Full metrics dashboard

Provider Management

  • provider.list - List configured providers
  • provider.create - Add new provider
  • provider.update - Update provider config
  • provider.delete - Remove provider

User Operations

  • user.me - Get current user info
  • user.preferences - Get/update preferences

Organization Admin Operations

  • org.service-accounts.list - List all org service accounts
  • org.api-keys.list - List all org API keys
  • org.service-account.create - Create org-level service account
  • org.api-key.revoke - Revoke any member's API key

Requires organization admin role

Code Examples

# Example: AI Proxy Operation
curl -X POST https://your-domain.com/api/v1/gateway \
  -H "Content-Type: application/json" \
  -b "session=YOUR_SESSION_COOKIE" \
  -d '{
    "operation": "ai.proxy",
    "params": {
      "provider": "openai",
      "model": "gpt-4",
      "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is AI cost optimization?"}
      ],
      "temperature": 0.7,
      "max_tokens": 500
    },
    "context": {
      "scope": "user"
    }
  }'

# Example: Get Cost Metrics
curl -X POST https://your-domain.com/api/v1/gateway \
  -H "Content-Type: application/json" \
  -b "session=YOUR_SESSION_COOKIE" \
  -d '{
    "operation": "metrics.cost",
    "params": {
      "period": "30d",
      "groupBy": "provider"
    }
  }'

Response

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "AI cost optimization involves strategies to reduce spending on AI API calls while maintaining quality..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  },
  "estimated_cost": 0.00525
}

Transparent Proxy: Response format matches the original AI provider's API exactly. Fire Mission adds estimated_cost for convenience but does not modify the provider's response.

Cost Analytics Endpoints

GET

Retrieve comprehensive cost analytics, forecasts, trends, and provider comparison data for all AI API usage through multiple specialized endpoints.

Available Endpoints

  • GET /api/byok/analytics/forecast/:period - Cost forecasts (7d, 30d, 90d)
  • GET /api/byok/analytics/anomalies - Spending anomaly detection
  • GET /api/byok/analytics/trends/:period - Usage trends analysis
  • GET /api/byok/analytics/budget - Budget tracking and alerts
  • GET /api/byok/analytics/providers/:period - Provider-specific cost breakdown

Query Parameters

Parameter | Type   | Description
timeframe | string | Time period: 7d, 30d, 90d (optional, defaults to 30d)
provider  | string | Filter by specific provider (optional)
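
A short Python sketch against two of the endpoints above, using the documented path period and optional query parameters; the values shown are illustrative.

import requests

COOKIES = {"session": "YOUR_SESSION_COOKIE"}

# 30-day cost forecast
forecast = requests.get(
    "https://firemission.us/api/byok/analytics/forecast/30d",
    cookies=COOKIES,
).json()

# Spending anomalies, filtered to one provider over the last 7 days
anomalies = requests.get(
    "https://firemission.us/api/byok/analytics/anomalies",
    params={"timeframe": "7d", "provider": "openai"},
    cookies=COOKIES,
).json()

print(forecast, anomalies)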

Response

{
  "totalCost": 127.45,
  "totalRequests": 1543,
  "avgCostPerRequest": 0.0826,
  "costByProvider": [
    {
      "provider": "openai",
      "totalCost": 89.32,
      "requests": 856,
      "avgCost": 0.1043,
      "avgLatency": 1234
    },
    {
      "provider": "anthropic",
      "totalCost": 38.13,
      "requests": 687,
      "avgCost": 0.0555,
      "avgLatency": 982
    }
  ],
  "forecast": {
    "7day": 48.20,
    "30day": 206.57,
    "90day": 619.71
  },
  "anomalies": [
    {
      "date": "2025-10-24",
      "cost": 45.67,
      "expectedCost": 12.30,
      "deviation": 271,
      "severity": "high"
    }
  ],
  "trends": {
    "dailyAverage": 4.25,
    "weeklyGrowth": 12.5,
    "efficiencyScore": 78
  }
}

ETL Optimization Metrics Endpoint

GET

Retrieve document processing optimization metrics including token savings, transformation types, and cost analytics. Shows how much you're saving through ETL optimization.

Endpoint

GET /api/user/etl-metrics

Response

{
  "totalDocuments": 247,
  "totalOriginalTokens": 1250000,
  "totalOptimizedTokens": 875000,
  "totalTokensSaved": 375000,
  "averageSavingsPercent": 30.0,
  "totalCostSavings": 7.50,
  "transformations": {
    "deduplication": 89,
    "chunking": 156,
    "compression": 142,
    "normalization": 67
  },
  "topTransformation": "chunking",
  "recentDocuments": [
    {
      "documentId": "doc-123",
      "timestamp": "2025-10-26T14:30:00Z",
      "originalTokens": 5000,
      "optimizedTokens": 3200,
      "savingsPercent": 36.0,
      "transformationsApplied": ["chunking", "compression"]
    }
  ]
}

Real Savings: All token counts and savings are based on actual document processing. Cost savings calculated using current provider pricing. Dashboard widget refreshes every 5 minutes.
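
A minimal Python sketch that pulls the metrics and prints the headline savings; field names are taken from the response example above.

import requests

metrics = requests.get(
    "https://firemission.us/api/user/etl-metrics",
    cookies={"session": "YOUR_SESSION_COOKIE"},
).json()

saved = metrics["totalTokensSaved"]
pct = metrics["averageSavingsPercent"]
print(f"ETL optimization saved {saved:,} tokens ({pct:.1f}% on average)")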

Rate Limit Status Endpoint

GET

Real-time AI provider rate limit tracking with usage percentage, remaining requests, and reset time. Proactive warnings at 75% and 90% thresholds.

Endpoint

GET /api/user/rate-limit-status

Response

{
  "providers": [
    {
      "provider": "openai",
      "limits": {
        "requestsPerMinute": 3500,
        "tokensPerMinute": 90000,
        "requestsPerDay": 200000
      },
      "usage": {
        "requestsPerMinute": 2625,
        "tokensPerMinute": 67500,
        "requestsPerDay": 150000
      },
      "percentages": {
        "requestsPerMinute": 75.0,
        "tokensPerMinute": 75.0,
        "requestsPerDay": 75.0
      },
      "status": "warning",
      "resetTime": "2025-10-26T15:00:00Z",
      "warningThreshold": 75,
      "message": "Approaching rate limit (75%)"
    },
    {
      "provider": "anthropic",
      "limits": {
        "requestsPerMinute": 1000,
        "tokensPerMinute": 40000
      },
      "usage": {
        "requestsPerMinute": 450,
        "tokinsPerMinute": 18000
      },
      "percentages": {
        "requestsPerMinute": 45.0,
        "tokensPerMinute": 45.0
      },
      "status": "ok",
      "resetTime": "2025-10-26T15:00:00Z",
      "message": "Within safe limits"
    }
  ],
  "overallStatus": "warning",
  "hasWarnings": true,
  "hasCritical": false
}

Status Levels: ok (<75%), warning (75-89%), critical (90%+). System checks every 5 minutes and sends automated alerts via systemNotifications.
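
A short Python sketch that surfaces any providers at or above the warning threshold; field names are taken from the response example above.

import requests

status = requests.get(
    "https://firemission.us/api/user/rate-limit-status",
    cookies={"session": "YOUR_SESSION_COOKIE"},
).json()

# Surface providers that crossed the 75% warning or 90% critical thresholds
for p in status["providers"]:
    if p["status"] in ("warning", "critical"):
        print(f'{p["provider"]}: {p["status"]} - {p["message"]} (resets {p["resetTime"]})')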

MoE Routing Endpoint

SPECIAL RECON+ POST

Intelligent Mixture-of-Experts routing that automatically selects the optimal AI provider based on cost, speed, quality, or balanced optimization. Requires SPECIAL RECON tier or higher.

Endpoint

POST /api/byok/route

Request Body

Field     | Type    | Required | Description
strategy  | string  | Yes      | cost, speed, quality, balanced
messages  | array   | Yes      | Array of message objects
task_type | string  | No       | Hint for routing: chat, reasoning, code
fallback  | boolean | No       | Enable automatic fallback (default: true)

Response

{
  "selectedProvider": "groq",
  "selectedModel": "llama-3-70b",
  "strategy": "cost",
  "reasoning": "Selected based on lowest cost per token for chat task",
  "alternativeProviders": [
    {
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "estimatedCost": 0.0015,
      "estimatedLatency": 1200,
      "score": 0.82
    }
  ],
  "response": {
    "id": "chatcmpl-456",
    "choices": [
      {
        "message": {
          "role": "assistant",
          "content": "..."
        }
      }
    ],
    "usage": {
      "prompt_tokens": 45,
      "completion_tokens": 120,
      "total_tokens": 165
    },
    "estimated_cost": 0.0008
  },
  "performanceMetrics": {
    "latency": 345,
    "costSavings": 0.0007,
    "qualityScore": 0.88
  }
}

Intelligent Routing: MoE analyzes historical performance data, current provider health, and your optimization strategy to automatically select the best provider for each request. Fallback providers ensure reliability.
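
A minimal Python sketch of a cost-optimized routing request, using the request body fields documented above; the message content is illustrative.

import requests

# Route a chat request using the cost strategy (SPECIAL RECON+ tiers)
resp = requests.post(
    "https://firemission.us/api/byok/route",
    cookies={"session": "YOUR_SESSION_COOKIE"},
    json={
        "strategy": "cost",
        "messages": [{"role": "user", "content": "Summarize our Q3 AI spend drivers."}],
        "task_type": "chat",
        "fallback": True,
    },
)
result = resp.json()
print(result["selectedProvider"], result["selectedModel"])
print(result["response"]["choices"][0]["message"]["content"])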

Error Handling

All API errors follow a consistent JSON format with HTTP status codes and detailed error messages.

Error Response Format

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "You have exceeded your tier's rate limit of 10 requests per minute",
    "details": {
      "limit": 10,
      "remaining": 0,
      "resetAt": "2025-10-26T15:00:00Z"
    }
  }
}

Common Error Codes

HTTP Status | Error Code            | Description
400         | invalid_request       | Malformed request body or missing required fields
401         | unauthorized          | Missing or invalid session cookie
403         | tier_limit_exceeded   | Feature requires higher subscription tier
429         | rate_limit_exceeded   | Too many requests; rate limit exceeded
500         | internal_server_error | Unexpected server error
503         | provider_unavailable  | AI provider temporarily unavailable
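
A sketch of client-side handling for this error format, retrying only on 429 with simple exponential backoff; the backoff policy is an assumption, not a documented requirement.

import time

import requests

def call_gateway(payload, session_cookie, max_retries=3):
    """Call the gateway, retrying on rate_limit_exceeded and surfacing other errors."""
    for attempt in range(max_retries):
        resp = requests.post(
            "https://firemission.us/api/v1/gateway",
            cookies={"session": session_cookie},
            json=payload,
        )
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        err = resp.json()["error"]
        print(f'Rate limited ({err["code"]}): {err["message"]}')
        time.sleep(2 ** attempt)  # simple backoff; details.resetAt gives the exact window
    raise RuntimeError("Rate limit still exceeded after retries")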

Need Help?

Fire Mission provides AI-powered technical support for all subscription tiers.

Support

All tiers receive AI-powered technical support. Contact us through the dashboard for assistance with API integration, troubleshooting, or feature requests.

Fire Mission by Blackburn Tactical

The Industry's First Security-Focused AI LLM Middleware

Blackburn Tactical Headquarters
11168 Wheeler Ridge Drive
Manassas, VA 20109

Contact:
Phone: 571.774.8648
Email: mark@blackburntactical.us

Connect with founder Mark Sicat on LinkedIn

Trademark Notices:

Product names, logos, brands, and other trademarks referenced within Fire Mission are the property of their respective trademark holders. AI Providers: Google™ and Gemini™ are trademarks of Google LLC. Anthropic™ and Claude™ are trademarks of Anthropic PBC. OpenAI™, ChatGPT™, and GPT™ are trademarks of OpenAI OpCo, LLC. Groq™ is a trademark of Groq, Inc. Together AI™ is a trademark of Together AI, Inc. Cohere™ is a trademark of Cohere Inc. Cloud Providers: Amazon Web Services™, AWS™, AWS GovCloud™, Bedrock™, and SageMaker™ are trademarks of Amazon.com, Inc. or its affiliates. Microsoft™, Azure™, Azure Government™, and Azure OpenAI™ are trademarks of Microsoft Corporation. Google Cloud™ and Google Government Cloud™ are trademarks of Google LLC. Oracle™ and Oracle Cloud™ are trademarks of Oracle Corporation. IBM™, IBM Cloud™, and watsonx™ are trademarks of International Business Machines Corporation. Competitive Solutions: AWS LISA™ is a trademark of Amazon.com, Inc. LangSmith™ and LangChain™ are trademarks of LangChain, Inc. Portkey™ is a trademark of Portkey AI, Inc. Helicone™ is a trademark of Helicone AI. Langfuse™ is a trademark of Langfuse GmbH. LiteLLM™ is a trademark of BerriAI, Inc. Vantage™ is a trademark of Vantage.sh. Strike Graph™ is a trademark of Strike Graph, Inc. Datadog™ is a trademark of Datadog, Inc. Kong™ is a trademark of Kong Inc. Cloudflare™ and Cloudflare AI Gateway™ are trademarks of Cloudflare, Inc. OpenRouter™ is a trademark of OpenRouter. TrueFoundry™ is a trademark of TrueFoundry. Gravitee™ is a trademark of Gravitee.io. Apigee™ is a trademark of Google LLC. Tyk™ is a trademark of Tyk Technologies Ltd. Development Tools & Platforms: LM Studio™ is a trademark of LM Studio, Inc. Ollama™ is a trademark of Ollama. LocalAI™ is a trademark of Mudler. PostgreSQL® is a registered trademark of the PostgreSQL Community Association of Canada. MySQL® is a registered trademark of Oracle Corporation. Node.js® is a registered trademark of OpenJS Foundation. TypeScript™ is a trademark of Microsoft Corporation. ngrok® is a registered trademark of ngrok, Inc. Payment & Compliance: Shift4™ is a trademark of Shift4 Payments, LLC. PCI DSS™ is a trademark of PCI Security Standards Council, LLC. FedRAMP® is a registered trademark of the US General Services Administration. These trademark holders are not affiliated with, do not endorse, and are not sponsored by Fire Mission or Blackburn Tactical. All trademarks, registered trademarks, product names, and company names or logos mentioned herein are the property of their respective owners.

Non-Affiliation Disclaimer:

Fire Mission is an independent third-party platform that provides universal API proxy services and is not affiliated with, endorsed by, or sponsored by any AI provider, cloud hosting provider, or technology company mentioned on this website. Fire Mission operates on a BYOK (Bring Your Own Keys) model, where customers use their own API keys from third-party providers. Fire Mission does not provide, source, or resell AI services directly. All AI-powered features (including document generation, AI API calls, and intelligent routing) require customers to maintain active accounts and API keys with external AI providers from our security-vetted list: OpenAI, Anthropic, Google Gemini, Groq, Together AI, or self-hosted OpenAI-compatible LLM endpoints. Fire Mission purposely limits supported providers to ensure US-focused security and compliance for government and enterprise deployment. We are not responsible for the availability, pricing, terms of service, or policies of third-party AI providers.

Compliance & Certification Disclaimer:

FedRAMP Ready* and DoW IL2-Top Secret Ready* indicate that Fire Mission's platform architecture and security controls are designed to meet published FedRAMP and DoW IL2 through Top Secret control baselines. These designations do not represent current FedRAMP authorization or DoW certification. Fire Mission is actively pursuing formal FedRAMP authorization and DoW certifications. Enterprise customers requiring these certifications should contact sales for the current status and roadmap.

© 2025 Mark Christopher Consulting Group, LLC DBA Blackburn Tactical. All rights reserved.

Fire Mission is a product of Mark Christopher Consulting Group, LLC DBA Blackburn Tactical, an independently owned and operated limited liability company registered in the state of Virginia.

Government Contracting Information:

UEI: LNTTFLJSAEM5 • CAGE: 9JD87

Fire Mission operates in compliance with applicable US federal and state laws including CCPA, GDPR (for EU users), and state-specific privacy regulations.