Speaker detection engines – also known as speaker separation or diarization engines – in the Veritone cognitive engine ecosystem distinguish between multiple speakers in audio and video by partitioning recordings and streams into segments according to speaker.
Speaker detection determines when speakers change and, where possible, which segments belong to the same person, but unlike speaker recognition it does not identify who the speakers are. Speaker detection and speaker recognition engines can be used in concert.
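Conceptually, a diarization engine's output can be thought of as a time-ordered list of segments, each tagged with an anonymous speaker label; the labels stay anonymous until a recognition engine or a human assigns identities. A minimal sketch with hypothetical segment data (not an actual engine's output schema):

```python
# Hypothetical diarization output: time-ordered segments tagged with
# anonymous speaker labels (the engine tells speakers apart, not who they are).
segments = [
    {"start": 0.0, "end": 4.2, "speaker": "Speaker 1"},
    {"start": 4.2, "end": 9.8, "speaker": "Speaker 2"},
    {"start": 9.8, "end": 12.1, "speaker": "Speaker 2"},
    {"start": 12.1, "end": 15.0, "speaker": "Speaker 1"},
]

def merge_turns(segments):
    """Collapse consecutive segments from the same speaker into single turns."""
    turns = []
    for seg in segments:
        if turns and turns[-1]["speaker"] == seg["speaker"]:
            turns[-1]["end"] = seg["end"]  # extend the current turn
        else:
            turns.append(dict(seg))       # new speaker: start a new turn
    return turns

turns = merge_turns(segments)
# Three turns: Speaker 1, Speaker 2 (two segments merged), Speaker 1
```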
Speaker Detection Features:
Speaker Separated Transcripts
Export speaker-separated transcripts of spoken word recordings in plain text, Microsoft Word, Timed Text Markup Language (TTML), WebVTT, and SubRip text formats via Veritone applications.
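To illustrate one of these formats, a speaker-separated transcript can be serialized to SubRip (SRT) text, with the speaker label prefixed to each cue. A minimal sketch over hypothetical segment data, not Veritone's export code:

```python
def to_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments):
    """Render speaker-labeled transcript segments as numbered SubRip cues."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        cues.append(
            f"{i}\n{to_timestamp(seg['start'])} --> {to_timestamp(seg['end'])}\n"
            f"{seg['speaker']}: {seg['text']}"
        )
    return "\n\n".join(cues)

print(to_srt([{"start": 0.0, "end": 2.5, "speaker": "Speaker 1", "text": "Hello."}]))
```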
Assign and Edit Detected Speakers
Assign labels to speakers, edit existing speaker labels, and delete labels and spoken words for specific speakers in speaker detection transcripts via Veritone applications.
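In terms of the segment model, assigning a name is a relabeling pass over the engine's anonymous speaker tags. A minimal sketch with hypothetical data; the segment fields and labels are illustrative:

```python
def relabel(segments, mapping):
    """Replace anonymous speaker tags with assigned names, leaving unmapped tags as-is."""
    return [
        {**seg, "speaker": mapping.get(seg["speaker"], seg["speaker"])}
        for seg in segments
    ]

segments = [
    {"start": 0.0, "end": 4.2, "speaker": "Speaker 1", "text": "Welcome back."},
    {"start": 4.2, "end": 9.8, "speaker": "Speaker 2", "text": "Thanks for having me."},
]
named = relabel(segments, {"Speaker 1": "Host"})
# named[0]["speaker"] == "Host"; "Speaker 2" keeps its anonymous tag
```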
Searchable Engine Output
Quickly identify audio segments where individuals of interest are speaking, using searchable speaker detection engine output available via API and Veritone applications.
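Under the hood, such a search amounts to filtering diarization segments by speaker label. A minimal sketch over hypothetical output:

```python
def segments_for(segments, speaker):
    """Return the (start, end) time ranges where a given speaker label is talking."""
    return [(s["start"], s["end"]) for s in segments if s["speaker"] == speaker]

segments = [
    {"start": 0.0, "end": 4.2, "speaker": "Speaker 1"},
    {"start": 4.2, "end": 9.8, "speaker": "Speaker 2"},
    {"start": 9.8, "end": 15.0, "speaker": "Speaker 1"},
]
hits = segments_for(segments, "Speaker 1")
# hits == [(0.0, 4.2), (9.8, 15.0)]
```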
Near Real-Time Processing
Process audio and video files in near real-time for use cases requiring quick speaker detection turnaround.
File and Stream Support
Detect speakers in short-form or long-form audio from recorded audio and video files, streamed recordings, or live data streams.
Flexible Deployment
Deploy in a new application or integrate into an existing one in the cloud via aiWARE GraphQL APIs, or run a subset of engines on-premise via a Docker container.
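Invoking an engine over the GraphQL API comes down to POSTing an authenticated query to the aiWARE endpoint. The sketch below only builds the request; the mutation shape, field names, and engine ID are assumptions and should be checked against the current aiWARE GraphQL schema before use:

```python
import json
import urllib.request

# Illustrative only: the createJob mutation shape and field names below are
# assumptions; consult the aiWARE GraphQL schema for the real signatures.
QUERY = """
mutation LaunchDiarization($targetId: ID!, $engineId: ID!) {
  createJob(input: { targetId: $targetId, tasks: [{ engineId: $engineId }] }) {
    id
    status
  }
}
"""

def build_request(api_token, target_id, engine_id,
                  endpoint="https://api.veritone.com/v3/graphql"):
    """Build (but do not send) an authenticated GraphQL request."""
    payload = json.dumps({
        "query": QUERY,
        "variables": {"targetId": target_id, "engineId": engine_id},
    }).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",  # aiWARE API token
            "Content-Type": "application/json",
        },
    )

req = build_request("YOUR_TOKEN", "tdo-123", "hypothetical-speaker-engine-id")
# Send with urllib.request.urlopen(req) once the schema details are confirmed.
```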
Powered by an AI Ecosystem
Leverage advanced speaker detection machine learning algorithms from the Veritone managed cognitive engine ecosystem — including algorithms from Veritone, niche providers, and industry giants.