Be specific in descriptions — Detailed criterion descriptions help the AI evaluate more accurately. Instead of “Good greeting”, write “Agent greets the customer by name, identifies themselves and the company, and offers assistance”
Use appropriate point weights — Assign higher point values to criteria that have more impact on call quality
Limit criteria count — Aim for 5–15 criteria per rubric. Too many criteria can dilute scoring precision
Use sections for organization — Group related criteria (e.g. “Opening”, “Problem Solving”, “Closing”) to make results easier to interpret
Start with a small batch — Test your rubric on 5–10 calls first to validate the scoring before evaluating hundreds of calls
Use additional context wisely — The context field in the Analysis Wizard can provide useful background (e.g. “These are technical support calls for a software product”)
Combine analysis types — You can run both Calls Analysis and COPC Evaluation together to get both qualitative insights and quantitative scores
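The weighting and sectioning tips above can be sketched in code. This is a minimal, hypothetical example — the dict layout, criterion texts, and point values are illustrative only, not the Analysis Wizard's actual rubric schema:

```python
# Hypothetical rubric: sections group related criteria, each with a point weight.
# Higher weights go to criteria with more impact on call quality.
rubric = {
    "Opening": [
        ("Agent greets the customer by name, identifies themselves and the company, and offers assistance", 10),
    ],
    "Problem Solving": [
        ("Agent asks clarifying questions before proposing a fix", 15),
        ("Agent confirms the issue is resolved before moving on", 15),
    ],
    "Closing": [
        ("Agent summarizes next steps and thanks the customer", 10),
    ],
}

def score(results: dict) -> float:
    """Weighted score as a percentage: earned points / possible points.

    `results` maps section -> {criterion description: passed (bool)}.
    """
    earned = possible = 0
    for section, criteria in rubric.items():
        for description, points in criteria:
            possible += points
            if results.get(section, {}).get(description, False):
                earned += points
    return round(100 * earned / possible, 1)
```

For example, a call that passes every criterion scores 100.0, while one that passes only the Opening criterion (10 of 50 possible points) scores 20.0. Keeping the rubric to a handful of weighted, specifically worded criteria like these makes the resulting percentages easy to interpret.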