QA Learning Dashboard
This page turns reviewer overrides and anchored notes into calibration signals. It helps QA leads see where the automated scorecard disagrees with reviewers, which competencies need tuning, and which correction reasons appear most often.
The dashboard is the learning layer of your QA workflow: the report page helps you review one call at a time, while this page shows what reviewers keep correcting across many calls. In short, it reveals where the automated scorecard is strong, where it is weak, and what to improve next.
1. Start With The Top Cards
The top summary cards tell you how much reviewer feedback exists in the system; a sketch of how these values might be derived follows this list.
Score Overrides shows how many competency scores were manually changed by reviewers.
Anchored Notes shows how many reviewer notes were captured with useful context.
Reviewed Reports shows how many saved calls contain reviewer scoring feedback.
Avg Score Delta shows the average gap between the AI score and the reviewer score.
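Under the hood, these cards are simple aggregates over stored override and note records. Here is a minimal sketch of how they might be derived; the `OverrideRecord` and `AnchoredNote` shapes and their field names are illustrative assumptions, not the product's actual data model.

```typescript
// Illustrative shapes -- field names are assumptions, not the real schema.
interface OverrideRecord {
  reportId: string;
  competency: string;
  aiScore: number;       // score produced by the automated scorecard
  reviewerScore: number; // score the reviewer set instead
  reason: string;        // tagged correction reason
}

interface AnchoredNote {
  reportId: string;
  text: string;
}

function summaryCards(overrides: OverrideRecord[], notes: AnchoredNote[]) {
  // Distinct reports that received at least one score override.
  const reviewedReports = new Set(overrides.map(o => o.reportId)).size;
  // The gap is taken as an absolute difference here; the real card may
  // average signed deltas instead.
  const totalDelta = overrides.reduce(
    (sum, o) => sum + Math.abs(o.reviewerScore - o.aiScore), 0);
  return {
    scoreOverrides: overrides.length,
    anchoredNotes: notes.length,
    reviewedReports,
    avgScoreDelta: overrides.length ? totalDelta / overrides.length : 0,
  };
}
```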
2. Use The Filters First
The filters help you narrow the dashboard to one reason, one competency, or one call category; every chart then operates on the same narrowed record set, as sketched after this list.
Filter by reason when you want to study a problem like timing issues or transcript errors.
Filter by competency when you want to evaluate one scorecard item deeply.
Filter by category when you want to compare calibration quality across call types.
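Conceptually, all three filters narrow the same pool of override records before any chart is computed. A minimal sketch, reusing the hypothetical `OverrideRecord` shape from the previous example and assuming an extra `category` field:

```typescript
// Hypothetical filter state -- undefined means "no filter applied".
interface Filters {
  reason?: string;
  competency?: string;
  category?: string;
}

function applyFilters(
  overrides: (OverrideRecord & { category: string })[],
  f: Filters,
) {
  return overrides.filter(o =>
    (f.reason === undefined || o.reason === f.reason) &&
    (f.competency === undefined || o.competency === f.competency) &&
    (f.category === undefined || o.category === f.category));
}
```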
3. Read The Main Analytics
The middle of the page explains where disagreement is happening and why; the counting behind the first two charts is sketched after this list.
Most Corrected Competencies highlights which scorecard items are most often adjusted by humans.
Override Reasons explains why reviewers are making those changes.
Override Volume Over Time helps you see whether model quality is improving or drifting.
Call Categories With Highest Review Load shows which types of calls need more tuning.
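The first two charts are, in essence, a count-and-rank over the filtered records. A sketch under the same assumed shapes; the real dashboard may weight or time-window these counts:

```typescript
// Count overrides per key (competency or reason), highest volume first.
function countBy(
  overrides: OverrideRecord[],
  key: (o: OverrideRecord) => string,
): [string, number][] {
  const counts = new Map<string, number>();
  for (const o of overrides) {
    const k = key(o);
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Usage, given the narrowed records from the filter sketch:
// const topCompetencies = countBy(filteredOverrides, o => o.competency);
// const topReasons      = countBy(filteredOverrides, o => o.reason);
```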
4. Understand Trust And Calibration
These sections help you decide how much confidence to place in the automated scorecard; the math behind the calibration view is sketched after this list.
Trust Signals marks risky competencies and categories where disagreement stays high.
Reviewer Calibration shows whether reviewer behavior itself is shaping the patterns, for example one reviewer overriding far more often, or for different reasons, than their peers.
Calibration Table gives a compact view of AI average, reviewer average, and disagreement gap.
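The calibration view reduces to a per-competency group-by. A minimal sketch under the same assumed `OverrideRecord` shape; sorting by the gap puts the worst-calibrated competencies first:

```typescript
interface CalibrationRow {
  competency: string;
  overrideCount: number;
  avgAiScore: number;
  avgReviewerScore: number;
  avgGap: number; // mean |reviewerScore - aiScore| disagreement
}

function calibrationTable(overrides: OverrideRecord[]): CalibrationRow[] {
  // Group overrides by competency.
  const groups = new Map<string, OverrideRecord[]>();
  for (const o of overrides) {
    const g = groups.get(o.competency) ?? [];
    g.push(o);
    groups.set(o.competency, g);
  }
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return [...groups.entries()]
    .map(([competency, rows]) => ({
      competency,
      overrideCount: rows.length,
      avgAiScore: avg(rows.map(r => r.aiScore)),
      avgReviewerScore: avg(rows.map(r => r.reviewerScore)),
      avgGap: avg(rows.map(r => Math.abs(r.reviewerScore - r.aiScore))),
    }))
    .sort((a, b) => b.avgGap - a.avgGap);
}
```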
5. Use The Improvement Queue
The model improvement queue is your short operational backlog for the scoring engine; one way its ranking might be computed is sketched after this list.
Each row explains which competency needs work next.
Priority is based on override volume and disagreement size.
The recommendation explains what kind of change is probably needed.
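Since priority combines override volume and disagreement size, one plausible ranking is their product; the weighting here is an assumption, and the product may rank differently:

```typescript
// Competencies that are overridden often AND by a wide margin rise to the top.
function priorityScore(row: CalibrationRow): number {
  return row.overrideCount * row.avgGap;
}

function improvementQueue(rows: CalibrationRow[], limit = 5): CalibrationRow[] {
  return [...rows]
    .sort((a, b) => priorityScore(b) - priorityScore(a))
    .slice(0, limit);
}
```

Multiplying the two signals means a rarely overridden competency with a huge gap can still outrank a frequently overridden one with a tiny gap, which reflects the idea that both volume and disagreement size matter.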
6. Drill Down To Evidence
If a chart, row, or reason looks important, click it. The drill-down review panel shows the exact overrides and related notes behind the trend.
Open the original report in V3 directly from the drill-down panel.
Jump back to the exact transcript segment or competency when context is available.
Use this when you want to validate whether a dashboard signal is real before changing the model; the sketch below shows the shape of that lookup.
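In data terms, the drill-down is a narrow query over the same records, joined to notes by report. A sketch with illustrative names:

```typescript
// Pull the overrides matching a clicked competency and/or reason, plus any
// notes anchored to the same reports for human context.
function drillDown(
  overrides: OverrideRecord[],
  notes: AnchoredNote[],
  selection: { competency?: string; reason?: string },
) {
  const matched = overrides.filter(o =>
    (selection.competency === undefined ||
      o.competency === selection.competency) &&
    (selection.reason === undefined || o.reason === selection.reason));
  const reportIds = new Set(matched.map(o => o.reportId));
  return {
    overrides: matched,
    relatedNotes: notes.filter(n => reportIds.has(n.reportId)),
  };
}
```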
Best practice: Use this page as a decision tool, not just a report. If one competency keeps getting overridden for the same reason, that usually means the scoring rule, evidence extraction, or transcript confidence logic needs to be improved. The goal is not only to count reviewer corrections, but to turn them into a clear improvement plan.
Most Corrected Competencies
The competencies below receive the most reviewer intervention and are the best candidates for heuristic tuning.
Override Reasons
Reason tags show why reviewers change scores and reveal the biggest calibration pain points.
Override Volume Over Time
Use this to detect whether scoring drift is getting worse or whether improvements reduce reviewer effort.
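One plausible way to build this trend is to bucket overrides by week and count them; the `createdAt` timestamp is an assumption, since the shapes in the earlier sketches did not include one:

```typescript
// Bucket overrides by the Monday of their week (UTC) and count per bucket.
function weeklyVolume(overrides: { createdAt: Date }[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const o of overrides) {
    const monday = new Date(o.createdAt);
    // getUTCDay(): 0 = Sunday ... 6 = Saturday; shift back to Monday.
    monday.setUTCDate(monday.getUTCDate() - ((monday.getUTCDay() + 6) % 7));
    const label = monday.toISOString().slice(0, 10); // e.g. "2024-03-11"
    counts.set(label, (counts.get(label) ?? 0) + 1);
  }
  return counts;
}
```

A climbing weekly count suggests drift; a falling count after a tuning change suggests the change is reducing reviewer effort.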
Call Categories With Highest Review Load
Some categories naturally create more score disagreement. This helps prioritize which call types need specialized rules.
Trust Signals
Trust signals turn disagreement patterns into practical QA risk indicators.
Reviewer Calibration
Shows which reviewers create the most overrides and which reasons they cite most often.
Recent Competency Overrides
Latest reviewer corrections with AI-vs-reviewer deltas.
Recent Anchored Notes
Recent reviewer notes that can later become training evidence and coaching examples.
Calibration Table
A practical list of the most corrected competencies with average AI score, average reviewer score, and average disagreement gap.
Model Improvement Queue
A prioritized shortlist of scorecard areas that deserve heuristic tuning or prompt refinement next.
Drill-Down Review
Click a competency row or a reason tag to inspect the exact overrides and anchored notes, and to jump back into the original report evidence.
Select a competency or reason to inspect matching reviewer feedback.