# agent-evaluation

Use when testing prompt effectiveness, validating context engineering choices, or measuring the quality of agent improvements.

**Evaluation Approaches:**

* **LLM-as-Judge** - Direct scoring, pairwise comparison, or rubric-based grading
* **Outcome-Focused** - Judge results, not exact paths (agents may take valid alternative routes)
* **Multi-Level Testing** - From simple to complex queries, and from isolated turns to extended interactions
* **Bias Mitigation** - Counter position bias, verbosity bias, and self-enhancement bias (see the sketch after this list)
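
The pairwise and bias-mitigation points above combine naturally. Below is a minimal sketch of a pairwise LLM-as-Judge that counters position bias by judging twice with the response order swapped; `call_llm` is a hypothetical stand-in for whatever model client you use and is not part of this plugin.

```python
# Sketch: pairwise LLM-as-Judge with position-swap bias mitigation.
# `call_llm` is an assumed callable (prompt -> model reply), not a real API.
from typing import Callable

JUDGE_PROMPT = """You are an impartial judge. Compare the two responses
to the task below and answer with exactly "A" or "B".

Task: {task}

Response A:
{a}

Response B:
{b}
"""

def pairwise_judge(task: str, resp_1: str, resp_2: str,
                   call_llm: Callable[[str], str]) -> str:
    """Judge twice with swapped positions; only a consistent verdict counts."""
    first = call_llm(JUDGE_PROMPT.format(task=task, a=resp_1, b=resp_2))
    second = call_llm(JUDGE_PROMPT.format(task=task, a=resp_2, b=resp_1))
    # resp_1 wins only if it wins from both positions ("A" then "B").
    if first.strip() == "A" and second.strip() == "B":
        return "resp_1"
    if first.strip() == "B" and second.strip() == "A":
        return "resp_2"
    return "tie"  # inconsistent verdicts are treated as a tie
```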

**Multi-Dimensional Evaluation Rubric:**

| Dimension             | Weight | What It Measures         |
| --------------------- | ------ | ------------------------ |
| Instruction Following | 0.30   | Task adherence           |
| Output Completeness   | 0.25   | Coverage of requirements |
| Tool Efficiency       | 0.20   | Optimal tool selection   |
| Reasoning Quality     | 0.15   | Logical soundness        |
| Response Coherence    | 0.10   | Structure and clarity    |

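As a concrete illustration, here is a minimal sketch of collapsing per-dimension judge scores into one composite score. The dimension names and weights come straight from the table above; the per-dimension scores (assumed to lie in [0, 1]) would come from your judge.

```python
# Weights taken directly from the rubric table; they sum to 1.00.
RUBRIC_WEIGHTS = {
    "instruction_following": 0.30,
    "output_completeness": 0.25,
    "tool_efficiency": 0.20,
    "reasoning_quality": 0.15,
    "response_coherence": 0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted sum over all rubric dimensions; expects scores in [0, 1]."""
    missing = RUBRIC_WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(RUBRIC_WEIGHTS[d] * scores[d] for d in RUBRIC_WEIGHTS)

# Example (hypothetical scores):
# composite_score({"instruction_following": 0.9, "output_completeness": 0.8,
#                  "tool_efficiency": 1.0, "reasoning_quality": 0.7,
#                  "response_coherence": 0.9})  # -> 0.865
```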

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://cek.neolab.finance/plugins/customaize-agent/agent-evaluation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question, along with relevant excerpts and sources from the documentation.
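
For example, an agent with code execution could issue the documented request like this, using only the Python standard library (the question string is illustrative):

```python
# Sketch: query this page's documentation via the `ask` query parameter.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://cek.neolab.finance/plugins/customaize-agent/agent-evaluation.md"

def ask_docs(question: str) -> str:
    """Send a natural-language question and return the raw response body."""
    url = f"{BASE}?{urlencode({'ask': question})}"
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Example (hypothetical question):
# print(ask_docs("How is the tool-efficiency dimension scored?"))
```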

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
