# Import and Export

Scorable evaluators are portable. You can export any evaluator as a YAML file, commit it to your own git repository, and import it back at any time into any Scorable organization or account. You can also use the file without Scorable at all.

```mermaid
flowchart LR
      subgraph repo["My git repo"]
          Y[".scorable/evaluators/*.yaml"]
      end

      subgraph scorable["Scorable"]
          E[Evaluator]
      end

      Y -->|"Import via GitHub app / CLI"| E
      E -->|"Export YAML"| Y

      E -->|"Run evaluations"| R[Results]
```

## The YAML format

Every evaluator serializes to a single human-readable YAML file:

```yaml
name: Response Quality
objective:
  intent: "Checks whether the response directly and completely answers the user's question."
scoring_criteria: |-
  You are evaluating whether the response answers the question directly and completely.

  Score from 0 to 1:
  1.0 — Fully answers the question, nothing missing.
  0.5 — Partially answers the question.
  0.0 — Does not address the question.

  Response: {{ response }}
  Question: {{ request }}
model: gpt-5.2
demonstrations:
  - request: "What is the capital of France?"
    response: "Paris."
    score: 1.0
    justification: "Direct and correct."
  - request: "What is the capital of France?"
    response: "I'm not sure, maybe Lyon?"
    score: 0.0
    justification: "Incorrect answer."
```

| Field              | Required | Description                                                                  |
| ------------------ | -------- | ---------------------------------------------------------------------------- |
| `name`             | Yes      | Evaluator name                                                               |
| `objective.intent` | Yes      | One-line description of what the evaluator measures                          |
| `scoring_criteria` | Yes      | The scoring prompt; use `{{ response }}` and `{{ request }}` as placeholders |
| `model`            | No       | LLM to use for scoring (defaults to Scorable's recommended model)            |
| `demonstrations`   | No       | Few-shot examples that guide the LLM on how to assign scores                 |
| `calibration`      | No       | Test cases used to validate the evaluator's scoring consistency              |

Both `demonstrations` and `calibration` are lists of objects with the same shape:

| Sub-field       | Required | Description                                                                       |
| --------------- | -------- | --------------------------------------------------------------------------------- |
| `response`      | Yes      | The LLM output being evaluated                                                    |
| `score`         | Yes      | Expected score in `[0, 1]`                                                        |
| `request`       | No       | The input that was evaluated (omit if the evaluator does not use `{{ request }}`) |
| `justification` | No       | Explanation for the assigned score                                                |
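
For instance, a `calibration` list (the values here are invented for illustration) follows the same shape as the demonstrations in the example above:

```yaml
calibration:
  - request: "What is the capital of France?"
    response: "The capital of France is Paris."
    score: 1.0
  - request: "What is the capital of France?"
    response: "France is a large country in Europe."
    score: 0.0
    justification: "Does not answer the question."
```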

The format is stable. Files produced today will import correctly in future versions of Scorable.

## Export an evaluator

### Web UI

Open an evaluator → action menu (⋮) → **Download YAML**.

### CLI

```bash
# Print YAML to stdout
scorable evaluator export-yaml <evaluator-id>

# Save to a file
scorable evaluator export-yaml <evaluator-id> --output my-evaluator.yaml
```

## Import an evaluator

### Web UI (from GitHub)

1. Open the **Evaluators** page.
2. Click **GitHub** in the top bar.
3. Install the Scorable GitHub App on your account or organization (a one-time step), choosing which repositories to grant it access to.
4. Enter the owner and repository name, then click **Load repository**.
5. Evaluators found in the `.scorable/evaluators/` directory are listed — click **Import** next to any of them.

### CLI

```bash
scorable evaluator import-yaml --file .scorable/evaluators/response-quality.yaml

# Overwrite if an evaluator with the same name already exists
scorable evaluator import-yaml --file .scorable/evaluators/response-quality.yaml --overwrite
```

## Store evaluators in git

The convention is to keep evaluator YAML files under `.scorable/evaluators/` in your git repository:

```
my-repo/
└── .scorable/
    └── evaluators/
        ├── response-quality.yaml
        ├── factual-accuracy.yaml
        └── tone-consistency.yaml
```

**Export all evaluators and commit:**

```bash
# List evaluators to find their IDs
scorable evaluator list

# Export each one
scorable evaluator export-yaml <id-1> --output .scorable/evaluators/response-quality.yaml
scorable evaluator export-yaml <id-2> --output .scorable/evaluators/factual-accuracy.yaml

git add .scorable/
git commit -m "chore: snapshot evaluator definitions"
git push
```
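
If you maintain several evaluators, a small loop keeps the export step repeatable. The IDs and paths below are hypothetical placeholders:

```bash
# Map each evaluator ID (hypothetical placeholders) to its target file
declare -A evaluators=(
  ["ev_123"]=".scorable/evaluators/response-quality.yaml"
  ["ev_456"]=".scorable/evaluators/factual-accuracy.yaml"
)

# Export every mapped evaluator to its YAML file (requires bash 4+)
for id in "${!evaluators[@]}"; do
  scorable evaluator export-yaml "$id" --output "${evaluators[$id]}"
done
```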

**Restore from git in a new environment:**

```bash
for f in .scorable/evaluators/*.yaml; do
  scorable evaluator import-yaml --file "$f" --overwrite
done
```

## API reference

### Export — `GET /v1/evaluators/export/{id}/`

Returns the evaluator as a `text/yaml` file download.

**Authentication:** API key required (`Authorization: Api-Key <key>`)

```bash
curl -H "Authorization: Api-Key $SCORABLE_API_KEY" \
  https://api.scorable.ai/v1/evaluators/export/<id>/ \
  -o my-evaluator.yaml
```

### Direct YAML import — `POST /v1/evaluators/import-yaml/`

Imports an evaluator from a YAML string, creating its demonstrations and calibration data in the same request.

**Authentication:** API key required (`Authorization: Api-Key <key>`)

**Request body:**

```json
{
  "yaml": "<yaml string>",
  "overwrite": false
}
```

**Response:** The created evaluator object (HTTP 201).

```bash
# Build the JSON body with jq so newlines and quotes in the YAML are escaped correctly
jq -n --rawfile yaml my-evaluator.yaml '{yaml: $yaml, overwrite: false}' \
  | curl -X POST https://api.scorable.ai/v1/evaluators/import-yaml/ \
      -H "Authorization: Api-Key $SCORABLE_API_KEY" \
      -H "Content-Type: application/json" \
      -d @-
```

## Open format

Scorable evaluators are fully defined by their YAML: a name, an intent, and a scoring prompt. You can:

* Keep the YAML in your own version-controlled repository.
* Recreate any evaluator from the YAML without a Scorable account.
* Run the prompt directly against any LLM if you want to bypass Scorable entirely (sketched below).
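
For that last point, here is a minimal sketch of scoring a single response outside Scorable. It assumes the Go implementation of `yq` plus `jq` are installed and that you have an OpenAI-compatible chat endpoint; the request/response values are invented for illustration:

```bash
# A sketch of running an evaluator's prompt directly, outside Scorable.
# Assumes the Go implementation of yq plus jq, and an OpenAI-compatible
# chat endpoint; the request/response values are invented for illustration.
MODEL=$(yq '.model' my-evaluator.yaml)

# Render the scoring prompt by substituting the two placeholders.
PROMPT=$(yq '.scoring_criteria' my-evaluator.yaml \
  | sed -e 's|{{ request }}|What is the capital of France?|' \
        -e 's|{{ response }}|Paris.|')

# Ask the model to score the response using the rendered prompt.
jq -n --arg model "$MODEL" --arg prompt "$PROMPT" \
  '{model: $model, messages: [{role: "user", content: $prompt}]}' \
  | curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d @-
```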

The YAML format is open, documented here, and will not change in a breaking way.

## Schema and editor support

A machine-readable JSON Schema is published at `https://api.scorable.ai/schema/evaluator.json`.

Add the following comment to any evaluator YAML to enable autocomplete and validation in VS Code (with the [YAML extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml)) and most other editors:

```yaml
# yaml-language-server: $schema=https://api.scorable.ai/schema/evaluator.json
name: My Evaluator
...
```

Or configure your whole workspace once in `.vscode/settings.json`:

```json
{
  "yaml.schemas": {
    "https://api.scorable.ai/schema/evaluator.json": ".scorable/evaluators/*.yaml"
  }
}
```
