
NIST Speaker Recognition Evaluation

The goal of the NIST Speaker Recognition Evaluation (SRE) series is to contribute to the direction of research efforts and the calibration of technical capabilities of text-independent speaker recognition.

Summary

The goal of the NIST Speaker Recognition Evaluation (SRE) series is to contribute to the direction of research efforts and the calibration of technical capabilities of text-independent speaker recognition. The overarching objective of the evaluations has always been to drive the technology forward, to measure the state of the art, and to find the most promising algorithmic approaches. To this end, NIST has been coordinating Speaker Recognition Evaluations since 1996. Since then, over 70 organizations have participated in our evaluations. Each year, new researchers in industry and academia are encouraged to participate, and collaboration between universities and industry is also welcomed. Each evaluation begins with the announcement of the official evaluation plan, which clearly states the tasks, data, performance metric, and participation rules for the evaluation. The evaluation culminates in a follow-up workshop, where NIST reports the official results along with analyses of performance, and researchers share and discuss their findings with NIST and one another.

SRE24 Schedule

  • Evaluation Plan Published

  • Registration Period

  • Dev/Training data available

  • Evaluation period

  • System output and system descriptions due to NIST

  • Evaluation results release

  • Post-evaluation workshop

Contact Us

Please send questions to: sre_poc@nist.gov

For SRE24 discussion, please visit our Google Group.

NIST 2024 Speaker Recognition Evaluation

Summary

The 2024 Speaker Recognition Evaluation (SRE24) is the next in an ongoing series of speaker recognition evaluations conducted by the US National Institute of Standards and Technology (NIST) since 1996. The objectives of the evaluation series are (1) to effectively measure system-calibrated performance of the current state of technology, (2) to provide a common framework that enables the research community to explore promising new ideas in speaker recognition, and (3) to support the community in their development of advanced technology incorporating these ideas. The evaluations are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluations are designed to focus on core technology issues and to be simple and accessible to those wishing to participate.


SRE24 will be organized similarly to SRE21, focusing on speaker detection over conversational telephone speech (CTS) and audio from video (AfV). It will again offer cross-source (i.e., CTS and AfV) and cross-lingual trials, thanks to a multimodal and multilingual (i.e., with multilingual subjects) corpus collected outside North America. However, it will also introduce two new features compared to previous SREs: enrollment segment duration variability and shorter-duration test segments.

SRE24 will offer both fixed and open training conditions to allow uniform cross-system comparisons and to understand the effect of additional and unconstrained amounts of training data on system performance. As in SRE21, SRE24 will consist of three tracks: audio-only, visual-only, and audio-visual, involving automatic person detection using audio, image, and video materials. System submission is required for the audio and audio-visual tracks and optional for the visual track.


For more information about SRE24, please see the SRE24 Evaluation Plan or send questions to sre_poc@nist.gov.

SRE 2024 Tentative Schedule

Milestone | Date
Evaluation plan published | Jun
Training data available | Jul
Scoring code release | Jul
Registration period | Jul - Sep
Development data available to participants | Jul
Evaluation period opens | Aug
Fixed condition submissions due to NIST | Oct
Open condition submissions due to NIST | Oct
System descriptions due to NIST | Oct
Official results released | Nov
Workshop registration period | Nov
Post-evaluation workshop | Dec 3-4, 2024

Contact Us

Please send questions to: sre_poc@nist.gov

For CTS Challenge discussion, please visit our Google Group: https://groups.google.com/a/list.nist.gov/forum/#!forum/cts-challenge

Summary

Following the success of the 2019 Conversational Telephone Speech (CTS) Speaker Recognition Challenge, which received 1347 submissions from 67 academic and industrial organizations, NIST organized a second CTS Challenge, which has been ongoing since 2020.


The basic task in the CTS Challenge is speaker detection, i.e., determining whether a specified target speaker is speaking during a given segment of speech. The CTS Challenge is a leaderboard-style challenge, offering an open/unconstrained training condition, but using CTS recordings extracted from multiple data sources containing multilingual speech.
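The detection task above can be sketched as a similarity comparison between a speaker embedding derived from the enrollment speech and one derived from the test segment. The following is a minimal illustration, not the NIST protocol: the cosine scoring and the 0.5 threshold are illustrative assumptions, and real systems typically use learned embeddings (e.g., x-vectors) with calibrated scoring back-ends.

```python
import math

def cosine_score(enroll_emb, test_emb):
    """Cosine similarity between two speaker embeddings (illustrative)."""
    dot = sum(a * b for a, b in zip(enroll_emb, test_emb))
    norm = (math.sqrt(sum(a * a for a in enroll_emb))
            * math.sqrt(sum(b * b for b in test_emb)))
    return dot / norm

def detect(enroll_emb, test_emb, threshold=0.5):
    """Decide 'target' when the score meets the threshold.

    The threshold here is an arbitrary example value; in practice it is
    set from the detection cost function's priors and error costs.
    """
    return cosine_score(enroll_emb, test_emb) >= threshold
```

Each evaluation trial pairs one enrollment model with one test segment, and the system outputs a score (and optionally a hard decision) per trial.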


For more information about the CTS Challenge, please visit the announcement page or send questions to sre_poc@nist.gov.

Disclaimer:

Participants are allowed to publish the leaderboard results unaltered, but they must not make advertising claims about their standing/ranking in the evaluation or about winning it, nor claim NIST or U.S. Government endorsement of their system(s) or commercial product(s). See the evaluation plan for more details regarding the participation rules of the NIST CTS Challenge.


SRE24-CTS Challenge

Updated: 2024-10-08 08:32:21 -0400
RANK | TEAM | SET | TIMESTAMP | EER [%] | MIN_C | ACT_C
1 | Neurotechnology | Progress | 20240902-043858 | 3.86 | 0.132 | 0.143
2 | SAR_ | Progress | 20240918-112439 | 3.00 | 0.080 | 1.000
2 | LIA_ | Progress | 20240809-174417 | 8.55 | 0.410 | 1.000
2 | TEAM-CERE-91 | Progress | 20240901-203209 | 14.93 | 0.509 | 1.000
2 | LIBRA | Progress | 20240908-031838 | 30.81 | 0.995 | 1.000
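The leaderboard columns report the equal error rate (EER) and the minimum and actual detection costs (MIN_C / ACT_C). A minimal sketch of how such metrics can be computed from target and nontarget trial scores follows; the threshold sweep, the unit error costs, and the target prior of 0.05 are illustrative assumptions, as the official cost parameters are defined in the evaluation plan.

```python
def error_rates(target_scores, nontarget_scores, threshold):
    """Miss and false-alarm rates at a fixed decision threshold."""
    p_miss = sum(s < threshold for s in target_scores) / len(target_scores)
    p_fa = sum(s >= threshold for s in nontarget_scores) / len(nontarget_scores)
    return p_miss, p_fa

def detection_cost(p_miss, p_fa, p_target=0.05, c_miss=1.0, c_fa=1.0):
    """Weighted detection cost; the parameters here are illustrative,
    not the official SRE24 values."""
    return c_miss * p_target * p_miss + c_fa * (1.0 - p_target) * p_fa

def eer(target_scores, nontarget_scores):
    """Approximate EER: sweep the observed scores as thresholds and
    take the point where miss and false-alarm rates are closest."""
    best_gap, best_eer = None, None
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        p_miss, p_fa = error_rates(target_scores, nontarget_scores, t)
        gap = abs(p_miss - p_fa)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (p_miss + p_fa) / 2.0
    return best_eer
```

In this framing, ACT_C is the cost evaluated at the system's own decision threshold, while MIN_C is the minimum of the same cost over all thresholds; a large gap between the two, as in several rows above, typically indicates a score calibration problem rather than poor discrimination.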