do-in-parallel
Quality Assurance
Pattern: Parallel Orchestration with Judge Verification
Phase 1: Parse Input and Identify Targets
│
Phase 2: Task Analysis with Zero-shot CoT
┌─ Task Type Identification ─────────────────┐
│ (transformation, analysis, documentation) │
├─ Per-Target Complexity Assessment ─────────┤
│ (high/medium/low) │
├─ Independence Validation ──────────────────┤
│ CRITICAL: Must pass before proceeding │
└────────────────────────────────────────────┘
│
Phase 3: Model and Agent Selection
Is task COMPLEX? → Opus
Is task SIMPLE/MECHANICAL? → Haiku
Otherwise → Opus (default for balanced work)
│
Phase 3.5: Dispatch Meta-Judge (ONCE)
Single sadd:meta-judge agent (Opus)
→ Evaluation Specification YAML
(Reused for ALL targets — not re-run per target)
│
Phase 4: Construct Per-Target Prompts
[CoT Prefix] + [Task Body] + [Self-Critique Suffix]
(Same structure for ALL agents, customized per target)
│
Phase 5: Parallel Dispatch and Judge Verification
┌─ Agent 1 (target A) ─→ Judge 1 (+meta-spec) ─┐
├─ Agent 2 (target B) ─→ Judge 2 (+meta-spec) ─┼─→ Concurrent
└─ Agent 3 (target C) ─→ Judge 3 (+meta-spec) ─┘
│
Each target: Implement → Judge (with meta-spec) → Retry (max 3)
│
Phase 6: Collect and Summarize Results
Aggregate outcomes, report failures, suggest remediation

Execution Flow
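The phases in the diagram above can be sketched as a plain-Python orchestration loop. This is a minimal illustration, not the command's implementation: `run_meta_judge`, `run_agent`, and `run_judge` are hypothetical callables standing in for the actual agent-dispatch mechanism, and the prompt wording is invented. What it does reflect from the source: Phase 3's model routing, Phase 3.5's single meta-judge whose spec is reused for every target, Phase 4's CoT-prefix/task-body/self-critique prompt structure, and Phase 5's concurrent implement-judge-retry loop (max 3 attempts).

```python
from concurrent.futures import ThreadPoolExecutor

MAX_RETRIES = 3

def select_model(complexity: str) -> str:
    # Phase 3 routing: simple/mechanical work goes to Haiku;
    # complex or balanced work defaults to Opus.
    return "haiku" if complexity == "low" else "opus"

def build_prompt(target: str, task_body: str) -> str:
    # Phase 4: [CoT Prefix] + [Task Body] + [Self-Critique Suffix],
    # same structure for every agent, customized per target.
    return (
        "Think step by step before acting.\n"          # CoT prefix
        f"Target: {target}\n{task_body}\n"             # task body
        "Before finishing, critique your own output "  # self-critique suffix
        "and fix any issues you find."
    )

def process_target(target, task_body, complexity, meta_spec,
                   run_agent, run_judge):
    # Phase 5, per target: Implement -> Judge (with meta-spec) -> Retry.
    model = select_model(complexity)
    prompt = build_prompt(target, task_body)
    for attempt in range(1, MAX_RETRIES + 1):
        result = run_agent(model, prompt)
        verdict = run_judge(result, meta_spec)  # every judge reuses the one spec
        if verdict["pass"]:
            return {"target": target, "ok": True, "attempts": attempt}
        prompt += f"\nJudge feedback: {verdict['feedback']}"
    return {"target": target, "ok": False, "attempts": MAX_RETRIES}

def orchestrate(targets, task_body, complexities, run_meta_judge,
                run_agent, run_judge):
    # Phase 3.5: dispatch the meta-judge ONCE; its evaluation spec
    # is reused for ALL targets, never re-run per target.
    meta_spec = run_meta_judge(task_body)
    with ThreadPoolExecutor(max_workers=max(1, len(targets))) as pool:
        results = list(pool.map(
            lambda t: process_target(t, task_body, complexities[t],
                                     meta_spec, run_agent, run_judge),
            targets))
    # Phase 6: aggregate outcomes and surface failures.
    failures = [r for r in results if not r["ok"]]
    return {"results": results, "failures": failures}
```

The key design point the sketch preserves is that the meta-judge runs exactly once and its spec flows into every per-target judge, so all targets are scored against identical criteria.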
Usage
Advanced Options
When to Use
Meta-Judge and Judge Verification
| Aspect | Details |
| ------ | ------- |
Scoring Scale
| Score | Meaning | Frequency |
| ----- | ------- | --------- |
Quality Enhancement Techniques
| Technique | Phase | Purpose |
| --------- | ----- | ------- |
Context Isolation Best Practices
Error Handling
| Failure Type | Description | Recovery Action |
| ------------ | ----------- | --------------- |
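The reporting side of error handling, Phase 6's "aggregate outcomes, report failures, suggest remediation", might look like the following. The result-dict shape and the report format are assumptions for illustration, not the command's actual output; the grounded behavior is that failed targets (those that exhausted their 3 retries) are listed with the judge's last feedback as a remediation starting point.

```python
def summarize(results):
    """Phase 6: aggregate per-target outcomes into a final report.

    Each result is a dict like {"target": ..., "ok": bool,
    "attempts": int, "feedback": str} produced by the judge loop;
    this exact shape is a hypothetical stand-in.
    """
    failures = [r for r in results if not r["ok"]]
    lines = [f"{len(results) - len(failures)}/{len(results)} targets passed"]
    for r in failures:
        # Surface the judge's last feedback as the suggested
        # remediation starting point for each failed target.
        lines.append(f"FAILED {r['target']} after {r['attempts']} attempts: "
                     f"{r.get('feedback', 'no feedback')}")
    return "\n".join(lines)
```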
Theoretical Foundation
Last updated