## Documentation Index

Fetch the complete documentation index at: https://docs.keywordsai.co/llms.txt
Use this file to discover all available pages before exploring further.
## Overview

The Experiments API allows you to design, execute, and analyze experiments that test different prompts, models, or configurations, enabling data-driven decisions and systematic improvement of your AI applications.

## Key Features
- Experiment Design: Create structured experiments with multiple variants
- A/B Testing: Compare different prompts, models, or configurations
- Statistical Analysis: Get statistically significant results
- Performance Tracking: Monitor key metrics and outcomes
- Result Analysis: Detailed insights and recommendations
## Quick Start
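The sketch below illustrates the flow using the method names documented on this page (`create()`, `start()`, `get_results()`). The client constructor, payload field names, and return shapes are assumptions for illustration; check the SDK reference for the real signatures.

```python
# Hypothetical quick-start sketch. The method names come from this page, but
# the client object, payload fields, and return shapes are assumptions.

def run_prompt_experiment(client):
    """Create, start, and read results for a simple A/B prompt experiment."""
    experiment = client.create(
        name="greeting-prompt-test",
        description="Compare two greeting prompts",
        variants=[
            {"name": "control", "prompt": "Hello! How can I help?"},
            {"name": "candidate", "prompt": "Hi there! What can I do for you?"},
        ],
        metrics=["response_quality"],
        traffic_split={"control": 0.5, "candidate": 0.5},
    )
    client.start(experiment["id"])  # begin collecting data
    return client.get_results(experiment["id"])
```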
## Available Methods

### Synchronous Methods

- `create()` - Create a new experiment
- `list()` - List experiments with filtering
- `get()` - Retrieve a specific experiment
- `update()` - Update experiment configuration
- `delete()` - Delete an experiment
- `start()` - Start running an experiment
- `stop()` - Stop a running experiment
- `get_results()` - Get experiment results and analysis
### Asynchronous Methods

All methods are also available in asynchronous versions via `AsyncKeywordsAI`.
## Experiment Structure

An experiment typically contains:

- `id`: Unique identifier
- `name`: Human-readable name
- `description`: Experiment description
- `variants`: List of experiment variants to test
- `metrics`: Key metrics to track
- `status`: Current status (`draft`, `running`, `completed`, `stopped`)
- `traffic_split`: How traffic is distributed between variants
- `start_date`: When the experiment started
- `end_date`: When the experiment ended
- `results`: Statistical results and analysis
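As a local model of this structure, the field names above can be captured in a small dataclass, for example to validate payloads before sending them. The field names come from this page; the types and defaults are assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Experiment:
    """Illustrative local model of the experiment fields described above.

    Field names mirror the documented structure; types and defaults are
    assumptions, not the SDK's own model.
    """
    id: str
    name: str
    description: str = ""
    variants: list = field(default_factory=list)
    metrics: list = field(default_factory=list)
    status: str = "draft"  # draft | running | completed | stopped
    traffic_split: dict = field(default_factory=dict)
    start_date: Optional[str] = None
    end_date: Optional[str] = None
    results: Optional[dict] = None
```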
## Experiment Lifecycle

1. Design: Create the experiment with variants and metrics
2. Configure: Set the traffic split and success criteria
3. Start: Begin collecting data
4. Monitor: Track progress and early results
5. Analyze: Review statistical significance
6. Conclude: Stop the experiment and roll out the winner
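The lifecycle above implies a small set of valid status transitions between the documented states (`draft`, `running`, `completed`, `stopped`). This sketch encodes those transitions client-side for illustration only; the API enforces its own rules server-side.

```python
# Allowed status transitions implied by the lifecycle above (illustrative).
TRANSITIONS = {
    "draft": {"running"},                 # start()
    "running": {"stopped", "completed"},  # stop(), or the experiment concludes
    "stopped": set(),                     # terminal
    "completed": set(),                   # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Return True if moving from `current` to `target` is a valid step."""
    return target in TRANSITIONS.get(current, set())
```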
## Common Use Cases
- Prompt Optimization: Test different prompt variations
- Model Comparison: Compare different AI models
- Feature Testing: Test new features or configurations
- Performance Optimization: Optimize for specific metrics
- User Experience: Test different interaction patterns
## Best Practices
- Define clear success metrics before starting
- Ensure sufficient sample size for statistical significance
- Run experiments for appropriate duration
- Avoid multiple simultaneous experiments on same traffic
- Document experiment hypotheses and learnings
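The results endpoint reports significance for you, but as a standalone illustration of the "sufficient sample size" practice, here is a standard two-proportion z-test you could run on raw per-variant success counts. This is textbook statistics, not part of the Keywords AI SDK.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test comparing success rates of two experiment variants.

    Returns (z, p_value); a small p-value (e.g. < 0.05) suggests the
    difference between variants is statistically significant.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With small samples or tiny effects the p-value stays large, which is exactly why the best practices above call for a sufficient sample size before concluding.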
## Error Handling
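This page does not list the SDK's exception classes, so the sketch below catches `Exception` broadly and logs the failure; in real code, narrow the handler to the specific error types documented in the SDK reference. The wrapper pattern itself (log and fall back to a default) is the illustration here.

```python
import logging

logger = logging.getLogger(__name__)

def safe_get_results(client, experiment_id, default=None):
    """Fetch experiment results, returning `default` on failure.

    Catches Exception broadly because the SDK's specific error classes are
    not listed on this page; replace with the real exception types.
    """
    try:
        return client.get_results(experiment_id)
    except Exception:
        logger.exception("Failed to fetch results for %s", experiment_id)
        return default
```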
## Next Steps
- Create an Experiment - Learn how to design experiments
- Run Experiments - Begin running your experiments
- Analyze Results - Understand experiment outcomes
- Manage Experiments - Browse and organize your experiments