Tips for Best Results

  • Use high-quality recordings — Clear audio produces better transcriptions, which leads to more accurate analysis
  • Ensure speaker separation — The system needs to distinguish between agent and customer to evaluate agent behavior
  • Review transcriptions first — Check a few transcriptions for accuracy before running analysis on a large batch
  • See the Tips for Best Results in the Transcriptions Guide for more recommendations on audio quality
  • Be specific in descriptions — Detailed criterion descriptions help the AI evaluate more accurately. Instead of “Good greeting”, write “Agent greets the customer by name, identifies themselves and the company, and offers assistance”
  • Use appropriate point weights — Assign higher point values to criteria that have more impact on call quality
  • Limit criteria count — 5–15 criteria per rubric is ideal. Too many criteria can dilute scoring precision
  • Use sections for organization — Group related criteria (e.g. “Opening”, “Problem Solving”, “Closing”) to make results easier to interpret
  • Reserve critical errors for non-negotiable items — Critical errors should flag only truly mandatory requirements (e.g. identity verification, legal disclosures)
  • Don’t overuse critical errors — If too many criteria are marked as critical, most calls will score 0%, making the evaluation less useful
  • Document clearly — Write clear descriptions for critical criteria so the evaluation is consistent
  • Start with a small batch — Test your rubric on 5–10 calls first to validate the scoring before evaluating hundreds of calls
  • Use additional context wisely — The context field in the Analysis Wizard can provide useful background (e.g. “These are technical support calls for a software product”)
  • Combine analysis types — You can run both Calls Analysis and COPC Evaluation together to get both qualitative insights and quantitative scores
  • Look at criterion averages first — The Dashboard’s “Score by Criterion” chart quickly reveals which areas agents struggle with most
  • Use the score range — A wide gap between min and max scores may indicate inconsistent agent performance
  • Review critical errors individually — Expand each failed call to understand what triggered the critical error
  • Export for deeper analysis — Use Excel exports to filter, sort, and create custom reports
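The interaction between point weights and critical errors described above can be sketched in code. This is a minimal illustration, not the product's actual scoring logic — the `Criterion` structure, field names, and `score_call` function are all assumptions made for the example; the key behavior shown is that a failed critical criterion zeroes the score regardless of points earned elsewhere.

```python
# Hypothetical sketch of weighted-rubric scoring with critical criteria.
# Names and structure are illustrative assumptions, not the product's API.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    points: int           # weight: higher points = more impact on call quality
    critical: bool = False  # non-negotiable item (e.g. identity verification)

def score_call(criteria, passed):
    """Return the call's score as a percentage.

    `passed` maps criterion name -> bool. Failing any critical
    criterion zeroes the score, per the tips above.
    """
    if any(c.critical and not passed.get(c.name, False) for c in criteria):
        return 0.0
    total = sum(c.points for c in criteria)
    earned = sum(c.points for c in criteria if passed.get(c.name, False))
    return round(100 * earned / total, 1)

rubric = [
    Criterion("Greets customer by name", 5),
    Criterion("Identifies self and company", 5),
    Criterion("Verifies identity", 10, critical=True),
    Criterion("Offers assistance", 5),
]

results = {"Greets customer by name": True,
           "Identifies self and company": True,
           "Verifies identity": True,
           "Offers assistance": False}
print(score_call(rubric, results))  # 80.0 — 20 of 25 points earned
```

Note how the 10-point weight on identity verification is moot once the criterion is critical: failing it returns 0% before any points are tallied, which is why marking too many criteria critical makes scores collapse.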
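The two dashboard reads suggested above — criterion averages to find weak areas, and the min/max range to spot inconsistent performance — amount to simple aggregations. A sketch under assumed data shapes (per-call criterion scores as dictionaries; the real export format will differ):

```python
# Hypothetical aggregation mirroring a "Score by Criterion" view and a
# score range. The per-call data shape is an assumption for illustration.

calls = [
    {"Opening": 80, "Problem Solving": 60, "Closing": 90},
    {"Opening": 70, "Problem Solving": 40, "Closing": 95},
    {"Opening": 90, "Problem Solving": 50, "Closing": 85},
]

def criterion_averages(calls):
    """Average score per criterion across all calls."""
    names = calls[0].keys()
    return {n: sum(c[n] for c in calls) / len(calls) for n in names}

def score_range(calls):
    """(min, max) of per-call overall scores; a wide gap suggests
    inconsistent agent performance."""
    overall = [sum(c.values()) / len(c) for c in calls]
    return min(overall), max(overall)

print(criterion_averages(calls))  # {'Opening': 80.0, 'Problem Solving': 50.0, 'Closing': 90.0}
print(score_range(calls))
```

Here "Problem Solving" averages lowest, so that is where coaching effort would go first; the same filtering and sorting can be done on the Excel export.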