Model Behavior and Accuracy
Zoom AI Companion is powered by a combination of proprietary and third-party large language models (LLMs) and is designed to provide contextual intelligence across Zoom's products. The following sections describe Zoom's practices for training data, hallucination management, and system performance tuning.
Zoom does not use customer content for model training
Zoom does not use communications-like customer content (such as audio, chat, screen share, whiteboards, or reactions) to train any Zoom or third-party models.
Zoom trains its models using:
Public domain data
Purchased third-party datasets
Zoom-created training materials
Zoom reviews these datasets to determine whether they were obtained lawfully and whether their licenses cover Zoom's proposed use. Note that Zoom also uses third-party model providers, such as OpenAI and Anthropic, as part of its federated model; refer to the information those providers publish about their training data.
Generative AI may hallucinate
As with any generative AI system, AI Companion may produce outputs that are factually incorrect or irrelevant (hallucinations). Zoom recommends reviewing outputs carefully before relying on them. Zoom reduces hallucinations by:
Testing models against real use cases
Improving context via retrieval-augmented generation (RAG), as sketched after this list
Enhancing language support with translation pipelines (e.g., English-to-Spanish)
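To illustrate the RAG idea in general terms, the minimal sketch below grounds a prompt in retrieved passages so the model answers from known context rather than guessing. This is not Zoom's implementation; the function names, the keyword-overlap retriever, and the sample knowledge base are all hypothetical.

```python
# Illustrative RAG sketch (hypothetical; not a Zoom API).
# Retrieve the passages most relevant to a question, then build a
# prompt that instructs the model to answer only from that context.

def retrieve(question: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank passages by simple keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda passage: len(q_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the model in retrieved context to reduce hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

knowledge_base = [
    "The Q3 all-hands meeting is scheduled for October 12.",
    "Expense reports are due on the 5th of each month.",
]
question = "When is the Q3 all-hands?"
prompt = build_prompt(question, retrieve(question, knowledge_base))
print(prompt)  # This grounded prompt would then be sent to the LLM.
```

The key design point is the instruction to answer only from the supplied context: when retrieval returns nothing relevant, a well-behaved model declines instead of inventing an answer.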
AI Companion performance is monitored and tuned
Zoom regularly monitors model performance, tracks quality metrics, and updates internal systems to improve accuracy and transparency. While explainability is limited by model design, performance regressions are addressed through testing and update cycles.
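As one way to picture this kind of monitoring, the sketch below scores a candidate model against a fixed evaluation set and flags a regression if quality drops past a threshold. Everything here is an assumption for illustration: the eval set, the stand-in model functions, and the 2-point threshold are hypothetical, not Zoom's actual pipeline.

```python
# Hypothetical regression check: compare a candidate model update
# against the current model on a fixed evaluation set.

EVAL_SET = [
    ("summarize: meeting ran long", "meeting ran long"),
    ("summarize: budget approved", "budget approved"),
]

def current_model(prompt: str) -> str:
    # Stand-in for a call to the deployed model.
    return prompt.removeprefix("summarize: ")

def candidate_model(prompt: str) -> str:
    # Stand-in for a call to the proposed update.
    return prompt.removeprefix("summarize: ")

def accuracy(model, eval_set) -> float:
    """Fraction of eval prompts where the model matches the expected output."""
    hits = sum(model(prompt) == expected for prompt, expected in eval_set)
    return hits / len(eval_set)

baseline = accuracy(current_model, EVAL_SET)
candidate = accuracy(candidate_model, EVAL_SET)
REGRESSION_THRESHOLD = 0.02  # allow at most a 2-point accuracy drop

if candidate < baseline - REGRESSION_THRESHOLD:
    print(f"Regression: {baseline:.2%} -> {candidate:.2%}; hold the update.")
else:
    print(f"Pass: {baseline:.2%} -> {candidate:.2%}; safe to roll out.")
```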