## Creating an inference job

Each inference job uses one model and can process multiple audio files:
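The exact client call depends on your setup; as a minimal sketch against a hypothetical REST endpoint (`API_BASE`, the auth header, and the `/inference-jobs` path are placeholders, not the documented API):

```python
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def build_job_payload(model_id, threshold=0.5, merge_window_ms=200,
                      min_duration_ms=50):
    """Assemble a job request using the configuration parameters below."""
    return {
        "model_id": model_id,
        "config": {
            "threshold": threshold,
            "merge_window_ms": merge_window_ms,
            "min_duration_ms": min_duration_ms,
        },
    }

def create_inference_job(model_id, **config):
    """POST the job and return the created job record (including its id)."""
    resp = requests.post(f"{API_BASE}/inference-jobs",
                         json=build_job_payload(model_id, **config),
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

# job = create_inference_job("model-123", threshold=0.6)
```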
## Inference configuration

| Parameter | Default | Range | Description |
|---|---|---|---|
| `threshold` | 0.5 | 0.0-1.0 | Minimum confidence score to report |
| `merge_window_ms` | 200 | 0-5000 | Merge detections within this time window |
| `min_duration_ms` | 50 | 0-1000 | Ignore detections shorter than this |
### Threshold

Controls the sensitivity of detection:

- Lower threshold (0.3-0.5): More detections, including uncertain ones
- Higher threshold (0.7-0.9): Fewer detections, higher confidence
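To illustrate what the threshold does, here is a client-side sketch of the same cutoff the service applies, using the detection fields documented later on this page:

```python
def filter_by_threshold(detections, threshold=0.5):
    """Keep only detections at or above the confidence cutoff."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"artifact_type": "click", "confidence": 0.42},
    {"artifact_type": "click", "confidence": 0.87},
]
# threshold=0.3 reports both; threshold=0.7 keeps only the 0.87 detection
```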
### Merge window

Adjacent detections of the same type are merged if they’re within this window:
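The merge happens server-side; as a sketch of the semantics (field names follow the detection output format below):

```python
def merge_detections(detections, merge_window_ms=200):
    """Merge same-type detections whose gap is at most merge_window_ms."""
    merged = []
    ordered = sorted(detections, key=lambda d: (d["artifact_type"], d["start_ms"]))
    for d in ordered:
        prev = merged[-1] if merged else None
        if (prev is not None
                and prev["artifact_type"] == d["artifact_type"]
                and d["start_ms"] - prev["end_ms"] <= merge_window_ms):
            # Extend the previous detection instead of reporting a new one.
            prev["end_ms"] = max(prev["end_ms"], d["end_ms"])
            prev["confidence"] = max(prev["confidence"], d["confidence"])
        else:
            merged.append(dict(d))
    return merged

clicks = [
    {"artifact_type": "click", "start_ms": 0,   "end_ms": 100, "confidence": 0.8},
    {"artifact_type": "click", "start_ms": 250, "end_ms": 300, "confidence": 0.9},
]
# The 150 ms gap is within the 200 ms window, so the two clicks
# merge into a single 0-300 ms detection.
```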
### Minimum duration

Filters out very short detections that may be noise:
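The equivalent client-side filter, as a sketch:

```python
def filter_short(detections, min_duration_ms=50):
    """Drop detections shorter than the minimum duration."""
    return [d for d in detections
            if d["end_ms"] - d["start_ms"] >= min_duration_ms]

detections = [
    {"artifact_type": "pop", "start_ms": 100, "end_ms": 120},  # 20 ms: dropped
    {"artifact_type": "pop", "start_ms": 500, "end_ms": 580},  # 80 ms: kept
]
```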
## Uploading audio for inference

Upload audio files using presigned URLs (same pattern as dataset uploads):
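A sketch of the three-step presigned-URL pattern: request an upload URL, PUT the bytes to storage, then confirm. The endpoint paths and field names (`upload_url`, `/confirm`) are assumptions; adjust to the actual API:

```python
import mimetypes
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def content_type_for(path):
    """Best-effort content type for the upload PUT."""
    return mimetypes.guess_type(path)[0] or "application/octet-stream"

def upload_audio(job_id, path):
    # 1. Ask the API for a presigned upload URL (endpoint name assumed).
    resp = requests.post(f"{API_BASE}/inference-jobs/{job_id}/files",
                         json={"filename": path.rsplit("/", 1)[-1]},
                         headers=HEADERS)
    resp.raise_for_status()
    file_record = resp.json()

    # 2. PUT the raw bytes directly to storage via the presigned URL.
    with open(path, "rb") as f:
        requests.put(file_record["upload_url"], data=f,
                     headers={"Content-Type": content_type_for(path)}
                     ).raise_for_status()

    # 3. Confirm the upload so the file moves from `pending` to `queued`.
    requests.post(f"{API_BASE}/files/{file_record['id']}/confirm",
                  headers=HEADERS).raise_for_status()
    return file_record["id"]
```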
## Batch inference

Upload multiple files for parallel processing:
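One way to parallelize uploads is a thread pool over a single-file upload helper (such as the presigned-URL sketch above); `upload_fn` is injectable here so the pattern is independent of any particular client:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_many(job_id, paths, upload_fn, max_workers=4):
    """Upload files in parallel; upload_fn(job_id, path) performs one upload."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order in its results.
        return list(pool.map(lambda p: upload_fn(job_id, p), paths))
```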
## Getting results

### Poll job status
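A minimal polling loop, assuming a hypothetical `GET /inference-jobs/{id}` endpoint; the terminal status set is an assumption (`cancelled` in particular):

```python
import time
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

# Statuses after which a job will not change again ("cancelled" is assumed).
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def is_terminal(status):
    return status in TERMINAL_STATUSES

def wait_for_job(job_id, interval_s=5, timeout_s=600):
    """Poll the job until it reaches a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{API_BASE}/inference-jobs/{job_id}", headers=HEADERS)
        resp.raise_for_status()
        job = resp.json()
        if is_terminal(job["status"]):
            return job
        time.sleep(interval_s)
    raise TimeoutError(f"job {job_id} did not finish within {timeout_s}s")
```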
### Get job details with files
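As a sketch (the `include=files` query parameter is an assumption), fetch the job with its per-file records and summarize their statuses:

```python
from collections import Counter

import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def get_job_with_files(job_id):
    """Fetch a job record including its per-file statuses."""
    resp = requests.get(f"{API_BASE}/inference-jobs/{job_id}",
                        params={"include": "files"}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def summarize_files(job):
    """Count files by status, e.g. Counter({'completed': 3, 'failed': 1})."""
    return Counter(f["status"] for f in job.get("files", []))
```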
### Get single file results
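A sketch that fetches one file's record (endpoint path assumed) and renders its detections using the output fields documented below:

```python
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def get_file_results(file_id):
    """Fetch one file's record, including its detections."""
    resp = requests.get(f"{API_BASE}/files/{file_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def format_detection(d):
    """Render one detection using the documented output fields."""
    return (f'{d["artifact_type"]} {d["start_ms"]}-{d["end_ms"]} ms '
            f'(confidence {d["confidence"]:.2f})')

# for d in get_file_results("file-123")["detections"]:
#     print(format_detection(d))
```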
## Detection output format

Each detection includes:

| Field | Type | Description |
|---|---|---|
| `artifact_type` | string | Type of detected artifact |
| `start_ms` | integer | Start time in milliseconds |
| `end_ms` | integer | End time in milliseconds |
| `confidence` | float | Model confidence (0.0-1.0) |
## File statuses

| Status | Description |
|---|---|
| `pending` | Upload not yet confirmed |
| `queued` | Waiting for processing |
| `processing` | Running inference |
| `completed` | Finished, detections available |
| `failed` | Processing failed (check `error_message`) |
## Listing inference jobs
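A sketch of iterating over all jobs; cursor pagination (`cursor`/`next_cursor`) is an assumption, and `fetch_page` is injectable so the loop is independent of the transport:

```python
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def fetch_jobs_page(cursor=None):
    """Fetch one page of jobs (cursor pagination is an assumption)."""
    params = {"cursor": cursor} if cursor else {}
    resp = requests.get(f"{API_BASE}/inference-jobs", params=params,
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def iter_jobs(fetch_page=fetch_jobs_page):
    """Yield every job across pages until no next_cursor is returned."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["jobs"]
        cursor = page.get("next_cursor")
        if not cursor:
            break
```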
## Cancelling inference

Cancel a pending or processing job:
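A sketch with a guard reflecting the rule above (the `/cancel` endpoint name is an assumption):

```python
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

# Per the rule above, only pending or processing jobs can be cancelled.
CANCELLABLE_STATUSES = {"pending", "processing"}

def can_cancel(status):
    return status in CANCELLABLE_STATUSES

def cancel_job(job):
    """Cancel a job record fetched earlier; raises if it already finished."""
    if not can_cancel(job["status"]):
        raise ValueError(f'job {job["id"]} is already {job["status"]}')
    resp = requests.post(f"{API_BASE}/inference-jobs/{job['id']}/cancel",
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```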
## Downloading processed audio

Download the original audio file:
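A streaming download sketch (the `/download` endpoint name is an assumption); streaming avoids loading the whole audio file into memory:

```python
import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def save_stream(chunks, dest):
    """Write streamed chunks to a writable binary file, skipping keep-alives."""
    for chunk in chunks:
        if chunk:
            dest.write(chunk)

def download_audio(file_id, out_path):
    with requests.get(f"{API_BASE}/files/{file_id}/download",
                      headers=HEADERS, stream=True) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            save_stream(resp.iter_content(chunk_size=1 << 16), f)
```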
## Production patterns

### Continuous inference pipeline

For production use, create a worker that processes audio as it arrives:
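One shape such a worker can take: watch an inbox directory, upload each batch as a job, wait for completion, then move the files aside so they aren't reprocessed. The three callables stand in for the API helpers sketched earlier, and the extension list is an assumption:

```python
import time
from pathlib import Path

AUDIO_EXTS = {".wav", ".flac", ".mp3", ".ogg"}  # extend to match your formats

def is_audio(path):
    return Path(path).suffix.lower() in AUDIO_EXTS

def run_worker(inbox, processed, create_job, upload, wait, poll_s=10):
    """Process audio as it arrives.

    create_job() -> job dict, upload(job_id, path), wait(job_id) -> job dict.
    """
    inbox, processed = Path(inbox), Path(processed)
    processed.mkdir(exist_ok=True)
    while True:
        batch = sorted(p for p in inbox.iterdir() if is_audio(p))
        if batch:
            job = create_job()
            for p in batch:
                upload(job["id"], str(p))
            wait(job["id"])
            for p in batch:
                p.rename(processed / p.name)  # avoid reprocessing
        time.sleep(poll_s)
```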
### Webhook-style results

Poll efficiently with exponential backoff:
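A backoff-polling sketch: the delay doubles after each check and is capped, so short jobs return quickly while long jobs generate few requests. The job endpoint and status names follow the assumptions used earlier on this page:

```python
import time

import requests

API_BASE = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

def backoff_delays(initial=1.0, factor=2.0, max_delay=60.0):
    """Yield an exponentially growing, capped sequence of poll delays."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, max_delay)

def poll_with_backoff(job_id, max_attempts=20):
    delays = backoff_delays()
    for _ in range(max_attempts):
        resp = requests.get(f"{API_BASE}/inference-jobs/{job_id}",
                            headers=HEADERS)
        resp.raise_for_status()
        job = resp.json()
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(next(delays))
    raise TimeoutError(f"job {job_id} still running after {max_attempts} polls")
```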
