Create Automation

POST /automation/automations/
curl --request POST \
  --url https://api.keywordsai.co/automation/automations/ \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{"automation_slug": "prod_quality_monitor", "name": "Production Quality Monitor", "automation_type": "online_eval", "condition": "cond-12345", "evaluator_ids": ["eval-quality-uuid", "eval-safety-uuid"], "is_enabled": true, "configuration": {"sampling_rate": 0.1}}'


Creates an online evaluation automation that automatically runs evaluators on logs matching specified conditions.

Authentication

All endpoints require API key authentication:
Authorization: Bearer YOUR_API_KEY
Note: Use your API key (not a JWT token) for all requests. You can find your API keys in the Keywords AI platform under Settings > API Keys.

Required Fields

Field                        Type           Description
automation_slug              string         Unique identifier for the automation
name                         string         Human-readable name for the automation
automation_type              string         Must be "online_eval" for evaluation automations
condition                    string         ID of the condition (from Create Condition endpoint)
evaluator_ids                array[string]  Array of evaluator UUIDs to run (from List Evaluators endpoint)
configuration.sampling_rate  number         Sampling rate between 0.0 and 1.0 (e.g., 0.1 = 10%)

Optional Fields

Field       Type     Description
is_enabled  boolean  Whether automation is active (default: false)

Validation Rules

  • evaluator_ids must not be empty
  • All evaluator IDs must exist and belong to your organization
  • sampling_rate must be between 0.0 and 1.0 (see the client-side sketch after this list)
  • automation_type must be "online_eval" for evaluation automations
  • Condition must be of type "single_log" (aggregation not yet supported)
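
The server enforces these rules on every request, but you can catch the most common mistakes before sending one. Below is a minimal client-side sketch of the first three checks; the validate_payload helper is hypothetical, not part of any SDK, and server-side validation still applies.

def validate_payload(data: dict) -> list[str]:
    """Mirror the first three validation rules locally (illustrative only)."""
    errors = []
    if not data.get("evaluator_ids"):
        errors.append("evaluator_ids must not be empty")
    rate = data.get("configuration", {}).get("sampling_rate")
    if not isinstance(rate, (int, float)) or not 0.0 <= rate <= 1.0:
        errors.append("sampling_rate must be between 0.0 and 1.0")
    if data.get("automation_type") != "online_eval":
        errors.append('automation_type must be "online_eval"')
    return errors

Calling validate_payload(data) before requests.post surfaces these errors without a network round trip.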

Examples

Basic Automation (10% Sampling)

import requests

url = "https://api.keywordsai.co/automation/automations/"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

data = {
    "automation_slug": "prod_quality_monitor",
    "name": "Production Quality Monitor",
    "automation_type": "online_eval",
    "condition": "cond-12345",
    "evaluator_ids": [
        "eval-quality-uuid",
        "eval-safety-uuid"
    ],
    "is_enabled": True,
    "configuration": {
        "sampling_rate": 0.1
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Cost-Effective Monitoring (1% Sampling)

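# url and headers are reused from the first example (the same applies to the snippets below).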
data = {
    "automation_slug": "budget_eval",
    "name": "Budget-Friendly Evaluation",
    "automation_type": "online_eval",
    "condition": "cond-12345",
    "evaluator_ids": [
        "eval-quality-uuid"
    ],
    "is_enabled": True,
    "configuration": {
        "sampling_rate": 0.01
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Full Evaluation (100% Sampling)

data = {
    "automation_slug": "vip_customer_eval",
    "name": "VIP Customer Evaluation",
    "automation_type": "online_eval",
    "condition": "cond-vip-customer",
    "evaluator_ids": [
        "eval-comprehensive-uuid"
    ],
    "is_enabled": True,
    "configuration": {
        "sampling_rate": 1.0
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Disabled Automation (Created but Not Active)

data = {
    "automation_slug": "test_automation",
    "name": "Test Automation",
    "automation_type": "online_eval",
    "condition": "cond-12345",
    "evaluator_ids": [
        "eval-quality-uuid"
    ],
    "is_enabled": False,
    "configuration": {
        "sampling_rate": 0.1
    }
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Response

Status: 201 Created
{
  "id": "auto-eval-001",
  "automation_slug": "prod_quality_monitor",
  "name": "Production Quality Monitor",
  "automation_type": "online_eval",
  "condition": {
    "id": "cond-12345",
    "condition_slug": "success_logs",
    "name": "Successful Requests",
    "condition_type": "single_log"
  },
  "evaluator_ids": [
    "eval-quality-uuid",
    "eval-safety-uuid"
  ],
  "evaluator_details": [
    {
      "id": "eval-quality-uuid",
      "name": "Response Quality Evaluator",
      "evaluator_slug": "response_quality",
      "eval_class": "keywordsai_custom_evaluator"
    },
    {
      "id": "eval-safety-uuid",
      "name": "Safety Check",
      "evaluator_slug": "safety_check",
      "eval_class": "keywordsai_custom_evaluator"
    }
  ],
  "is_enabled": true,
  "configuration": {
    "evaluator_ids": [
      "eval-quality-uuid",
      "eval-safety-uuid"
    ],
    "sampling_rate": 0.1
  },
  "created_at": "2025-01-15T10:30:00Z",
  "updated_at": "2025-01-15T10:30:00Z"
}

Response Fields

Field              Type           Description
id                 string         Unique automation identifier
automation_slug    string         URL-friendly identifier
name               string         Display name of the automation
automation_type    string         Type of automation ("online_eval")
condition          object         Condition object with details
evaluator_ids      array[string]  List of evaluator UUIDs
evaluator_details  array[object]  Detailed information about each evaluator
is_enabled         boolean        Whether the automation is active
configuration      object         Configuration including sampling rate
created_at         string         ISO timestamp of creation
updated_at         string         ISO timestamp of last update
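
For example, after a successful create (reusing response from the snippets above), the nested fields can be read directly:

automation = response.json()

# The response nests the full condition and evaluator details,
# not just the IDs sent in the request.
print(automation["id"])                           # e.g. "auto-eval-001"
print(automation["condition"]["condition_type"])  # "single_log"
for evaluator in automation["evaluator_details"]:
    print(evaluator["name"], "->", evaluator["evaluator_slug"])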

How It Works

  1. Condition Evaluation: Incoming logs are evaluated against the specified condition
  2. Sampling: Logs that match the condition are sampled based on sampling_rate (see the sketch after this list)
  3. Async Evaluation: Sampled logs are queued for evaluation (doesn’t block the logging pipeline)
  4. Score Creation: Evaluators run and scores are saved to the database
  5. Dashboard Visibility: Scores appear in the existing scores dashboard and via the /api/scores/ endpoint
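
Conceptually, the sampling step (step 2 above) is a per-log weighted coin flip. The sketch below is a simplified illustration of that behavior, not the server implementation; enqueue_evaluation is a hypothetical stand-in for the async queue:

import random

def enqueue_evaluation(log: dict) -> None:
    pass  # hypothetical stand-in for the async evaluation queue

def maybe_queue_for_evaluation(log: dict, sampling_rate: float) -> bool:
    # Weighted coin flip: roughly sampling_rate of condition-matching
    # logs get queued for evaluation.
    if random.random() < sampling_rate:
        enqueue_evaluation(log)
        return True
    return False

# With sampling_rate=0.1, expect roughly 100 of 1000 matching logs to be sampled.
sampled = sum(maybe_queue_for_evaluation({"id": i}, 0.1) for i in range(1000))
print(sampled)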

Important Notes

  • Evaluation runs asynchronously and doesn’t impact logging latency
  • Sampling reduces evaluation costs while maintaining statistical significance
  • Multiple evaluators can run on the same log
  • Scores are accessible via the standard Scores API (see the sketch after this list)
  • Online eval automations do not send notifications (use regular automations for alerts)
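
Because scores land in the standard Scores API, you can retrieve them like any other score. A minimal sketch, assuming a GET on the /api/scores/ path mentioned above; the evaluator_id query parameter is an assumption for illustration, not documented behavior:

import requests

# Illustrative: list scores produced by one of the automation's evaluators.
response = requests.get(
    "https://api.keywordsai.co/api/scores/",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"evaluator_id": "eval-quality-uuid"},  # assumed parameter name
)
print(response.json())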

Error Responses

400 Bad Request

{
  "evaluator_ids": [
    "Evaluators not found: invalid-uuid-123"
  ]
}

400 Bad Request - Invalid Sampling Rate

{
  "configuration": {
    "sampling_rate": [
      "Sampling rate must be between 0.0 and 1.0"
    ]
  }
}

400 Bad Request - Empty Evaluator List

{
  "evaluator_ids": [
    "At least one evaluator is required"
  ]
}

401 Unauthorized

{
  "detail": "Authentication credentials were not provided."
}

404 Not Found - Condition

{
  "condition": [
    "Condition not found: cond-invalid"
  ]
}
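
Putting these together, a small wrapper around the create call might look like the sketch below. It reuses requests, url, headers, and data from the earlier examples, and the branches mirror the error shapes shown above:

response = requests.post(url, headers=headers, json=data)

if response.status_code == 201:
    print("Created automation:", response.json()["id"])
elif response.status_code == 400:
    # Validation errors are keyed by field name, as in the examples above.
    print("Validation failed:", response.json())
elif response.status_code == 401:
    # Make sure you are sending an API key, not a JWT token.
    print("Authentication failed:", response.json()["detail"])
elif response.status_code == 404:
    print("Condition not found:", response.json())
else:
    response.raise_for_status()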