PATCH /api/evaluators/{evaluator_id}
Update evaluator
curl --request PATCH \
  --url https://api.keywordsai.co/api/evaluators/{evaluator_id}/ \
  --header 'Authorization: Bearer <token>'


Updates specific fields of an evaluator. Supports partial updates of configuration fields.

Authentication

All endpoints require API key authentication:
Authorization: Bearer YOUR_API_KEY
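
All of the Python examples below pass this header through requests; a minimal sketch (the key is a placeholder you replace with your own):

```python
API_KEY = "YOUR_API_KEY"  # placeholder: your key from the Keywords AI platform

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
# Requests without a valid key return 401 Unauthorized (see Error Responses).
```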

Path Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| evaluator_id | string | The unique ID of the evaluator to update |

Request Body

New Format: You can now update score_config, passing_conditions, llm_config, and code_config fields to add or modify automation for any evaluator type.
You can update any of the following fields. Only provide the fields you want to update:
| Field | Type | Description |
| --- | --- | --- |
| name | string | Display name for the evaluator |
| description | string | Description of what the evaluator does |
| score_config | object | New: Score type configuration (min/max, choices, etc.) |
| passing_conditions | object | New: Passing conditions using the universal filter format |
| llm_config | object | New: LLM automation config |
| code_config | object | New: Code automation config |
| configurations | object | Legacy: type-specific configuration settings |
| categorical_choices | array | Legacy: choices (use score_config.choices in the new format) |
| custom_required_fields | array | Additional required fields |
| starred | boolean | Whether the evaluator is starred |
| tags | array | Tags for organization |
Note: Configuration fields are merged with existing values. Non-null values take precedence over existing null values.

Examples

New Format: Add LLM Config to Existing Evaluator

This example shows adding LLM automation to any evaluator type, demonstrating the decoupling of annotation method from evaluator type.
import requests

evaluator_id = "0f4325f9-55ef-4c20-8abe-376694419947"
url = f"https://api.keywordsai.co/api/evaluators/{evaluator_id}/"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Add LLM automation to an existing evaluator
data = {
    "llm_config": {
        "model": "gpt-4o-mini",
        "evaluator_definition": "Rate the quality:\n<input>{{input}}</input>\n<output>{{output}}</output>",
        "scoring_rubric": "1=Poor, 5=Excellent",
        "temperature": 0.1,
        "max_tokens": 200
    },
    "passing_conditions": {
        "primary_score": {
            "operator": "gte",
            "value": 3
        }
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

New Format: Add Code Config to Existing Evaluator

# Add code automation to an existing evaluator
data = {
    "code_config": {
        "eval_code_snippet": "def main(eval_inputs):\n    output = eval_inputs.get('output', '')\n    return len(str(output)) > 10"
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

New Format: Update Score Config

# Update score configuration
data = {
    "score_config": {
        "min_score": 0,
        "max_score": 10,
        "choices": [
            {"name": "Poor", "value": 0},
            {"name": "Average", "value": 5},
            {"name": "Excellent", "value": 10}
        ]
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Legacy Format: Update LLM Evaluator Configuration

import requests

evaluator_id = "0f4325f9-55ef-4c20-8abe-376694419947"
url = f"https://api.keywordsai.co/api/evaluators/{evaluator_id}/"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Update scoring rubric and passing score
data = {
    "configurations": {
        "scoring_rubric": "Updated: 1=Very Poor, 2=Poor, 3=Fair, 4=Good, 5=Excellent",
        "passing_score": 4.0
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Update Name and Description

data = {
    "name": "Enhanced Response Quality Evaluator",
    "description": "Advanced evaluator for response quality assessment with updated criteria"
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Update Categorical Choices

# For categorical evaluators
categorical_evaluator_id = "cat-eval-123"
url = f"https://api.keywordsai.co/api/evaluators/{categorical_evaluator_id}/"

data = {
    "categorical_choices": [
        { "name": "Outstanding", "value": 5 },
        { "name": "Very Good", "value": 4 },
        { "name": "Good", "value": 3 },
        { "name": "Fair", "value": 2 },
        { "name": "Poor", "value": 1 }
    ]
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Update Code Evaluator

# For code evaluators
code_evaluator_id = "bool-eval-456"
url = f"https://api.keywordsai.co/api/evaluators/{code_evaluator_id}/"

data = {
    "name": "Enhanced Length Checker",
    "configurations": {
        "eval_code_snippet": "def evaluate(llm_input, llm_output, **kwargs):\n    '''\n    Enhanced length checker with word count\n    Returns True if response has >= 10 words, False otherwise\n    '''\n    if not llm_output:\n        return False\n    \n    word_count = len(llm_output.strip().split())\n    return word_count >= 10"
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Update Tags and Starred Status

data = {
    "starred": True,
    "tags": ["quality", "assessment", "production"]
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Update LLM Engine and Model Options

data = {
    "configurations": {
        "llm_engine": "gpt-4o",
        "model_options": {
            "temperature": 0.2,
            "max_tokens": 300
        }
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())

Response

Status: 200 OK
Returns the updated evaluator object with all current field values:
{
  "id": "0f4325f9-55ef-4c20-8abe-376694419947",
  "name": "Enhanced Response Quality Evaluator",
  "evaluator_slug": "response_quality_v1",
  "type": "llm",
  "score_value_type": "numerical",
  "eval_class": "",
  "description": "Advanced evaluator for response quality assessment with updated criteria",
  "configurations": {
    "evaluator_definition": "Rate the response quality based on accuracy, relevance, and completeness.\n<llm_input>{{llm_input}}</llm_input>\n<llm_output>{{llm_output}}</llm_output>",
    "scoring_rubric": "Updated: 1=Very Poor, 2=Poor, 3=Fair, 4=Good, 5=Excellent",
    "llm_engine": "gpt-4o",
    "model_options": {
      "temperature": 0.2,
      "max_tokens": 300
    },
    "min_score": 1.0,
    "max_score": 5.0,
    "passing_score": 4.0
  },
  "created_by": {
    "first_name": "Keywords AI",
    "last_name": "Team",
    "email": "admin@keywordsai.co"
  },
  "updated_by": {
    "first_name": "Keywords AI",
    "last_name": "Team",
    "email": "admin@keywordsai.co"
  },
  "created_at": "2025-09-11T09:43:55.858321Z",
  "updated_at": "2025-09-11T10:15:22.123456Z",
  "custom_required_fields": [],
  "categorical_choices": null,
  "starred": true,
  "organization": 2,
  "tags": ["quality", "assessment", "production"]
}

Configuration Update Rules

Partial Configuration Updates

When updating configurations, the system performs a merge operation:
  • Existing configuration fields are preserved unless explicitly overridden
  • New fields are added to the configuration
  • Setting a field to null removes it from the configuration
  • Nested objects (like model_options) are completely replaced, not merged
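
The rules above amount to a shallow merge; a sketch of the behavior, assuming this simplified model of the server side (the helper is illustrative, not the actual implementation):

```python
def merge_configurations(existing: dict, update: dict) -> dict:
    """Illustrative sketch of the merge rules described above."""
    merged = dict(existing)  # existing fields are preserved by default
    for key, value in update.items():
        if value is None:
            merged.pop(key, None)  # setting a field to null removes it
        else:
            # new fields are added; nested objects (like model_options)
            # are replaced wholesale, not merged recursively
            merged[key] = value
    return merged

original = {
    "scoring_rubric": "Original rubric",
    "llm_engine": "gpt-4o-mini",
}
update = {"scoring_rubric": "Updated rubric", "passing_score": 3.0}
print(merge_configurations(original, update))
```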

Example Configuration Merge

Original Configuration:
{
  "evaluator_definition": "Original prompt",
  "scoring_rubric": "Original rubric",
  "llm_engine": "gpt-4o-mini",
  "min_score": 1.0,
  "max_score": 5.0
}
Update Request:
{
  "configurations": {
    "scoring_rubric": "Updated rubric",
    "passing_score": 3.0
  }
}
Resulting Configuration:
{
  "evaluator_definition": "Original prompt",
  "scoring_rubric": "Updated rubric",
  "llm_engine": "gpt-4o-mini",
  "min_score": 1.0,
  "max_score": 5.0,
  "passing_score": 3.0
}

Error Responses

400 Bad Request

{
  "configurations": [
    "Configuration validation failed: llm_engine 'invalid-model' is not supported"
  ]
}

401 Unauthorized

{
  "detail": "Your API key is invalid or expired, please check your API key at https://platform.keywordsai.co/platform/api/api-keys"
}

403 Forbidden

{
  "detail": "You do not have permission to update this evaluator."
}

404 Not Found

{
  "detail": "Not found."
}

422 Unprocessable Entity

{
  "categorical_choices": [
    "This field is required when score_value_type is 'categorical'."
  ]
}
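
A small client-side helper can flatten these two error shapes into one readable message; a sketch, assuming the payload formats shown above (single "detail" string for 401/403/404, field-to-messages mapping for 400/422):

```python
def parse_error(status_code: int, payload: dict) -> str:
    """Format the error payloads documented above into one message."""
    if "detail" in payload:
        # 401/403/404 style: a single "detail" string
        return f"{status_code}: {payload['detail']}"
    # 400/422 style: field name -> list of validation messages
    parts = [f"{field}: {'; '.join(msgs)}" for field, msgs in payload.items()]
    return f"{status_code}: " + " | ".join(parts)

# Example with the 422 payload shown above:
print(parse_error(422, {
    "categorical_choices": [
        "This field is required when score_value_type is 'categorical'."
    ],
}))
```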