# Performance Management Tools

#### <mark style="color:blue;">Performance Management provides features that assess the quality of your agent’s interactions and identify areas for improvement.</mark>

Performance Management provides supervisors and administrators with the tools to systematically evaluate contact center interactions and identify areas for improvement through scorecards. Once the scorecard infrastructure is built within Quality Management, supervisors can review conversations as part of a scheduled assessment or on a spontaneous (ad-hoc) basis, with the flexibility to assign multiple reviewers to the same interaction to help ensure scoring consistency.

### Scorecards

#### <mark style="color:blue;">Scorecards are a list of user-defined questions used by a supervisor to score agent performance during consumer interactions</mark>

Within Quality Management, consumer interactions are evaluated through scorecards containing a list of user-defined questions intended to score an agent’s performance. Each scorecard template must be built by a Quality Management supervisor or administrator with sufficient [role permissions](/business-services/zoom-quality-management/quality-management-explainer/account-management.md#role-types) before it can be applied to a conversation.

#### <mark style="color:blue;">**Scorecards support three question types—yes or no, single choice, and a rating scale—and can be used in any mixture**</mark>

A Quality Management administrator may choose one of three question types for each question on a scorecard: yes or no, single choice, or a rating scale. Each scorecard may use any mixture of these question types as desired.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXd3V2BIWtHk5lbW1i5uy10kdHkzvSfGj_rcGKrwjQLL_zdQfqvUlf7tPzP4RHreP-t49wHX8cfrwDRefxUy_pZbtNMWc59MP-uCAzWvyR4R-1actKFMIt3GLAH2NxqZT2PG4N_CRVQyiXtFNQMexpGSBs9Q?key=A6JppSKd1EpGYzJGvNEH8g" alt="" width="563"><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">**Scorecard questions support weighted values, cumulative scoring, evaluator comments, required responses, and automatic failures**</mark>

Scorecard questions support a range of features that improve the evaluation process, including:

* **Weighted Values:** Questions within a section may contribute more or less to a passing grade based on the importance of the question.
* **Cumulative Scoring:** Questions may be assigned point values that contribute to a cumulative evaluation score; an agent who fails to follow proper procedure may fall short of a passing grade.
* **Evaluator Comments:** The scorecard evaluator may leave comments or notes for a specific question.
* **Required Responses:** The scorecard question **must** be answered and cannot be left blank.
* **Automatic Failure:** A designated answer marks the entire evaluation as failed, regardless of the agent's other responses.

#### <mark style="color:blue;">Scorecards support percentage-based and points-based scoring</mark>

When designing a scorecard, the Quality Management administrator may use either a percentage-based or points-based system. In a percentage-based system, the evaluation is expressed as a percentage (90%, for example). In a points-based system, it is expressed as a numerical score (90/100 or 133/150, for example). The only significant difference between these two systems is that percentage-based scoring enables weighted scoring by section, whereas points-based scoring is exclusively cumulative.
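The arithmetic behind the two systems can be sketched in a few lines. The section names, weights, and point values below are invented for illustration; real scorecards define their own structure:

```python
# Hypothetical illustration of the two scoring systems.
# Section names, weights, and point values are invented for this example.

def percentage_score(sections):
    """Weighted by section: each section contributes its weight times
    the fraction of points earned within that section."""
    return sum(
        weight * (earned / possible)
        for weight, earned, possible in sections
    ) * 100

def points_score(questions):
    """Purely cumulative: total points earned over total points possible."""
    earned = sum(e for e, _ in questions)
    possible = sum(p for _, p in questions)
    return earned, possible

# Percentage-based: "Greeting" is worth 40% of the grade, "Resolution" 60%.
sections = [(0.4, 8, 10), (0.6, 27, 30)]
print(f"{percentage_score(sections):.0f}%")   # 86%

# Points-based: the same answers expressed as a raw score.
questions = [(8, 10), (27, 30)]
earned, possible = points_score(questions)
print(f"{earned}/{possible}")                 # 35/40
```

Note how the same answers produce an 86% grade under section weighting but a flat 35/40 under cumulative points.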

#### <mark style="color:blue;">Scorecards support both evaluation-level and section-level automatic failure</mark>

Specific answer options can be designated as automatic failures that set the entire evaluation score to zero.

Additionally, admins can configure section-level auto-fail: a designated answer triggers a zero score for that section only, while the scores of remaining sections are not impacted. The final evaluation score is then recalculated from the unaffected sections.
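One plausible reading of that recalculation, sketched with invented section names and weights (the exact rules are defined within Quality Management):

```python
# Sketch of evaluation-level vs. section-level auto-fail.
# Section names, weights, and scores are invented for this example.

def evaluation_score(sections, eval_auto_fail=False, failed_sections=()):
    """sections: list of (name, weight, earned, possible) tuples.
    eval_auto_fail zeroes the whole evaluation; a section named in
    failed_sections scores zero while the remaining sections keep
    their scores and the total is recalculated from them."""
    if eval_auto_fail:
        return 0.0
    total = sum(
        weight * (0.0 if name in failed_sections else earned / possible)
        for name, weight, earned, possible in sections
    )
    return round(total * 100, 2)

sections = [("Greeting", 0.4, 10, 10), ("Resolution", 0.6, 27, 30)]
print(evaluation_score(sections))                                # 94.0
print(evaluation_score(sections, failed_sections={"Greeting"}))  # 54.0
print(evaluation_score(sections, eval_auto_fail=True))           # 0.0
```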

#### <mark style="color:blue;">Default answers streamline evaluation workflows</mark>

Pre-selected default answers can be configured for any scorecard question, including rating scale questions. Evaluators see these answers pre-selected when opening a new evaluation and can change them as needed. This reduces repetitive manual entry for common or expected responses.

#### <mark style="color:blue;">A running total score is visible in real time during evaluation</mark>

Evaluators can see a continuously updated total score while completing an evaluation. The score recalculates as answers are changed or selected, with unanswered questions counted as zero. This gives evaluators immediate insight into an agent's performance before submitting.

#### <mark style="color:blue;">Scorecards can be exported and imported via CSV</mark>

Admins can export any scorecard to CSV and import a modified CSV to create a new scorecard or update an existing one. All scorecard elements are captured in the export, including questions, descriptions, answers, sections, and settings. A CSV template is available for download.

#### <mark style="color:blue;">All elements of a published scorecard can be modified</mark>

Supervisors and admins can modify any element of a published scorecard — adding or removing questions, adjusting point values, changing the scoring format, or reweighting sections. When non-cosmetic changes are made, the system automatically creates a new scorecard version while preserving all historical evaluations completed under the prior version.

#### <mark style="color:blue;">Response breakdown analytics now include section-level data</mark>

The Response Breakdown by Question report now provides analytics at the section level in addition to question-level and overall score views, enabling supervisors to identify which areas of a scorecard are driving performance trends.

### **Evaluations**

#### <mark style="color:blue;">An evaluation occurs when a supervisor applies a scorecard to an interaction</mark>

An evaluation occurs when a Quality Management supervisor or manager applies a scorecard to an agent’s conversation. To perform an assessment, a user with sufficient [role permissions](/business-services/zoom-quality-management/quality-management-explainer/account-management.md#role-types) navigates to an interaction, selects the **Performance** tab, clicks **Evaluate**, and selects the appropriate scorecard.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcclTVxGpj37L4qY06zs5Zd5LrrwZXjckB3Z4wTrEvy8usB3m1Pq_jr6cHVxvxrtSBEvgRznHRXaa5rpxMnAJYBaT_SxuJghCsfIj3QPvIls_vfNhCBoRHcuTxlmjJPp2SxGz1FQTcN1belGnQtq2wJawCi?key=A6JppSKd1EpGYzJGvNEH8g" alt="" width="563"><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">Evaluations can be assigned to other supervisors and managers</mark>

In addition to spontaneous (ad-hoc) evaluations, a manager or user with sufficient role permissions can assign evaluations to other Quality Management supervisors or managers.

When an evaluation is assigned, the assessor receives an email containing a direct link to the conversation, the scorecard to be used, and a due date.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXesGPy8rmyohC6Qwqmtkars_JllFY-cuRL3TNH5pYf8fGNP23fUS7ZskkbHXvWy2J3PaaIehjeLBooXG3u9dV_s1M6kPWffF58pFYoavhEFdSapZ09gYbWuoid0fapn4RyiMVqz6dBUhaGs17_LKMyuqeY?key=A6JppSKd1EpGYzJGvNEH8g" alt="" width="563"><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">Agents are notified of completed evaluations and may acknowledge or dispute</mark>

After an evaluation is performed, the agent is notified via email. From there, the agent can review the evaluation for feedback and reflection, with the option to acknowledge or dispute it.

If an evaluation is disputed, the evaluator receives an email notification and may respond by re-evaluating the interaction or re-affirming the score. Supervisors can also access disputed evaluations from the Quality Management web portal.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdy3OQ-HA-36FZeohFg6AidxdMZhIfd8EoITJvIUx391B9NuoqQtGeG_kOJ_W13vms_Mmna5g0HaHpNL7tCoSWjWWYxLAUPHVZCpPmoUh0GynvHgIMFqP_F-yGqALO3fD2coR0EIJdSB0xqA8osmFqk3SSp?key=A6JppSKd1EpGYzJGvNEH8g" alt="" width="375"><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">**Supervisors can require agents to acknowledge evaluations**</mark>

Supervisors can require agents to acknowledge completed evaluations. If an agent does not acknowledge or dispute the evaluation within the account's configured timeframe, it is automatically acknowledged by the system.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXezIEj06-ws9FMCYveNBhvA8UE1FfiEhRv-P8h_5cg9zv7jZK-O6ZtEc0uII9j23VbJAJzLbTYNH0QWyzgdqxi0mtNs-I3iExmSIuLc1gzh6UDHJgzM_BJBXEhe1C4oCcHM9naW8fxiKZnL3CS9rqXKPh3f?key=A6JppSKd1EpGYzJGvNEH8g" alt="" width="563"><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">Agents can dispute specific question responses, not just the entire evaluation</mark>

When submitting a dispute, agents can select the specific question responses they disagree with, eliminating ambiguity and reducing reviewer effort. Supervisors and evaluators can focus their review on the disputed responses directly.

#### <mark style="color:blue;">Evaluation goals offer flexible configuration options</mark>

Evaluation goals can be edited while active, without needing to delete and recreate them. Access to view or modify goals is controlled via dedicated role permissions, with view and edit access granted to admins and supervisors by default. Goals can be configured with daily, weekly, monthly, or one-time recurrence, and can be set to randomly sample interactions across the agent base to balance evaluator workload. When configured with random sampling, only recent, unevaluated interactions are surfaced.

#### <mark style="color:blue;">"In" and "not in" condition operators reduce complexity when building goals</mark>

When creating evaluation goals or automation conditions, users can select "in" and "not in" operators for conditions such as agent name, channel, disposition, indicator, language, team, and topics. These operators replace multiple OR rows with a single condition, simplifying goal and automation setup.
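The effect of the operators can be sketched in a few lines. The field names and values here are invented; they stand in for the condition fields listed above:

```python
# Illustration of how an "in" operator collapses several OR rows into one
# condition. Field names and values are invented for this example.

# Before: three OR rows, one per channel.
or_rows = [
    ("channel", "=", "voice"),
    ("channel", "=", "chat"),
    ("channel", "=", "sms"),
]

# After: a single "in" condition covering the same values.
in_row = ("channel", "in", {"voice", "chat", "sms"})

def matches(interaction, field, op, value):
    if op == "=":
        return interaction[field] == value
    if op == "in":
        return interaction[field] in value
    if op == "not in":
        return interaction[field] not in value
    raise ValueError(f"unsupported operator: {op}")

interaction = {"channel": "chat"}
assert any(matches(interaction, *row) for row in or_rows)  # old style: 3 rows
assert matches(interaction, *in_row)                       # new style: 1 row
```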

#### <mark style="color:blue;">The Evaluations page shows completion timestamps for each evaluation</mark>

A **Completed** column in the Evaluations table displays both the creation and completion timestamp for each evaluation, removing the need to open individual evaluations to retrieve this information.

#### <mark style="color:blue;">Automated evaluations can be permanently deleted</mark>

Users with appropriate permissions can delete an automated evaluation from the interaction detail page. Once deleted, the evaluation is permanently removed from the system and excluded from all analytics, reports, and metrics.

### Calibrations

#### <mark style="color:blue;">Calibrations test scorecard consistency across multiple assessors</mark>

Calibrations test the consistency of a scorecard's application within an organization. To perform a calibration, a user with sufficient [role permissions](/business-services/zoom-quality-management/quality-management-explainer/account-management.md#role-types) creates a calibration session in which multiple assessors are selected to apply a common scorecard to the same conversation. When an assessor is assigned to a calibration session, they receive an email similar to a routine evaluation notification. A successful calibration reveals a generally consistent score between assessors, with little to no variance across the scorecards.

The following image provides an example of a calibration assignment pop-up.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXerFYbqPG8APv3jNlgUiOAzyyISMviIijknYPZJSmCNGMTElWuojj9pzBNGsuL7y1rNkc7A8uq9CmhFEhy0GQgjQwhB6gN1cdn3v7NpMrvBXg6x528fmDuY4HOfZGIS3AP-n7oKRFzy092wsolXmX0cXuxd?key=A6JppSKd1EpGYzJGvNEH8g" alt="" width="563"><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">Calibration sessions now support names, completion tracking, and score averages</mark>

Admins and supervisors can name calibration sessions for easier lookup. The session view shows the average score across completed evaluations and a completion percentage indicating how many assigned evaluators have finished. When reviewing an evaluation during calibration, supervisors can see whether the interaction is in an active calibration session.

### Automation

#### <mark style="color:blue;">Automation rules trigger evaluation assignment based on configurable conditions</mark>

Administrators can create automation rules to automatically assign evaluation tasks when interactions meet defined conditions. Supported condition types include: agent, channel, disposition, indicator mention, indicator absence, language, team, topic, and duration-based metrics. Conditions support AND/OR logic, as well as "in" and "not in" operators.

#### <mark style="color:blue;">Automation priority is determined by card order, which is configurable</mark>

Admins can drag automation cards on the Automations page to reorder them. The first matching automation is applied when evaluating an interaction, so card order determines priority.
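First-match priority can be sketched as follows. The rule names and conditions are invented; only the ordering behavior mirrors the description above:

```python
# Sketch of first-match automation priority. Rule contents are invented;
# the list order stands in for the card order on the Automations page.

automations = [
    {"name": "Spanish escalations", "condition": lambda i: i["language"] == "es"},
    {"name": "All voice calls",     "condition": lambda i: i["channel"] == "voice"},
]

def applicable_automation(interaction):
    """Return the name of the first automation whose condition matches, or None."""
    for rule in automations:
        if rule["condition"](interaction):
            return rule["name"]
    return None

# A Spanish voice call matches both rules, but the first card wins.
print(applicable_automation({"language": "es", "channel": "voice"}))
# Spanish escalations
```

Dragging a card higher in the list is therefore equivalent to moving its rule earlier in this loop.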

#### <mark style="color:blue;">Automation values can be configured before they appear in Quality Management data</mark>

Admins can configure automations for new dispositions or queues immediately after creating them in Zoom Contact Center, without waiting for an interaction with that value to be analyzed by Quality Management first.

#### <mark style="color:blue;">Language is a supported automation condition</mark>

Admins and supervisors can use the language spoken in an interaction as a condition when building or modifying automation rules, enabling language-specific evaluation assignments.

### Coaching Sessions

#### <mark style="color:blue;">Supervisors can create coaching sessions to develop agent improvement strategies</mark>

Supervisors can create coaching sessions to address previously identified performance opportunities or ongoing improvement initiatives. Sessions can be scheduled as Zoom video meetings, in-person meetings, or asynchronous/offline sessions.

#### <mark style="color:blue;">WFM integration enables optimal coaching session scheduling</mark>

For users licensed in both Quality Management and Zoom Workforce Management, the system automatically queries WFM schedules when creating a coaching session to suggest optimal low-volume time slots within a specified date range and duration. Upon confirmation, coaching sessions appear as scheduled activities on agent WFM calendars.
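The slot-suggestion idea can be illustrated with a minimal sketch. The forecast data and slot format are invented; the real integration queries actual agent schedules in Zoom Workforce Management:

```python
# Hypothetical sketch of suggesting low-volume coaching slots from a
# volume forecast. The forecast values and slot keys are invented.

forecast = {  # predicted interaction volume per half-hour slot
    "09:00": 42, "09:30": 55, "10:00": 18, "10:30": 23, "11:00": 61,
}

def suggest_slots(forecast, count=2):
    """Return the lowest-volume slots, best candidates first."""
    return sorted(forecast, key=forecast.get)[:count]

print(suggest_slots(forecast))  # ['10:00', '10:30']
```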

### Screen Recording

#### <mark style="color:blue;">Administrators can enable screen recording to capture agent desktop activity during interactions</mark>

When screen recording is enabled, Quality Management captures agent desktop activity during voice interactions in Zoom Contact Center. Up to four monitors can be recorded simultaneously if the agent is using multiple displays. Screen recording requires Zoom Workplace desktop app version 6.7.0 or later.

#### <mark style="color:blue;">Screen recordings can be downloaded alongside audio files</mark>

Quality Management users can download the screen recording and the audio file for an interaction together, subject to their role permissions. The combined download provides a more complete picture of the interaction for coaching and compliance review.

{% hint style="info" %}
**Note**

Screen recording captures activity on the Zoom Workplace desktop application only. It does not record browser activity or other applications running on the agent's machine.
{% endhint %}

### Moments

#### <mark style="color:blue;">Moments are short, recorded segments — or clips — of an interaction and can be shared with others</mark>

Within Quality Management, Moments are short recordings that highlight key portions of a conversation and can be shared with others for viewing or learning opportunities.

For example, if an agent handles a difficult situation notably well, a supervisor or manager can create a Moment highlighting what the agent did well and share the recording link with others, including users external to the account.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeNquHMOqcmBKbb1hlOsNuKoGZMPa6jk1iqpZXf2I24FbnthLaCiPp_n48jDAaEXyW5FIqWb2amghTMJkoxOPQkXC-60Pa8bzaV3gXX_6T1KARtwdWXuBGd-udiz5VPxSN-dd8z_81E7n2q5mAgbFgXn7SJ?key=A6JppSKd1EpGYzJGvNEH8g" alt=""><figcaption></figcaption></figure></div>

#### <mark style="color:blue;">Moment sharing now supports password protection and expiration dates</mark>

Account owners and admins can configure sharing restrictions at the account level, requiring password protection and setting expiration dates for shared moment links. These controls can be scoped by user role, allowing organizations to enforce governance over how sensitive interaction clips are shared externally.

### Additional Features

{% hint style="info" %}
**Note**

These features vary by role and permissions.
{% endhint %}

#### <mark style="color:blue;">Supervisors and managers can also add manual or “offline” interactions to score conversations not captured by their Contact Center integration</mark>

Supervisors can manually add offline interactions to score conversations within Quality Management that were not captured by a Contact Center integration, helping ensure a consistent set of criteria is used across all interactions.

#### <mark style="color:blue;">Supervisors and managers can leave comments on specific sections of a transcript for discussion, feedback, or follow up</mark>

Supervisors and managers can leave comments on specific sections of a transcript for discussion, feedback, or follow-up. Commenters can tag an agent or supervisor, who will receive a notification with a link to the comment. Comments can be designated as public or private.

The following image provides an example of a comment left within a conversation.

<div data-with-frame="true"><figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXet85vz47ImBOz_NQKkVVX6_1Hnt9RzvCwT_U34W5THvJ_9ZVVV16C4XAemeKYY6sMS5CmVdvvAw3FsxMZ7s7PlIxuxIzc7OCrGjXHFppCOt4BJcBRBRKmTjzWcExz3xu5n_NVOZ_J4qdD43x8V8r2LX_E_?key=A6JppSKd1EpGYzJGvNEH8g" alt=""><figcaption></figcaption></figure></div>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://library.zoom.com/business-services/zoom-quality-management/quality-management-explainer/performance-management-tools.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
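A minimal example of building such a request with the Python standard library (the response format is whatever the endpoint returns; this sketch only constructs and sends the query):

```python
# Build the documentation query URL described above, using only the
# Python standard library.
from urllib.parse import quote
from urllib.request import urlopen

BASE = ("https://library.zoom.com/business-services/zoom-quality-management/"
        "quality-management-explainer/performance-management-tools.md")

def ask_url(question: str) -> str:
    """URL-encode the question into the `ask` query parameter."""
    return f"{BASE}?ask={quote(question)}"

# Sending the request requires network access:
# with urlopen(ask_url("What question types do scorecards support?")) as r:
#     print(r.read().decode("utf-8"))
```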
