TY - JOUR
AU - Liu, Yi
AU - He, Liang
AU - Liu, Jia
AU - Johnson, Michael T.
PY - 2019
DA - 2019/12/05
TI - Introducing phonetic information to speaker embedding for speaker verification
JO - EURASIP Journal on Audio, Speech, and Music Processing
SP - 19
VL - 2019
IS - 1
AB - Phonetic information is one of the most essential components of a speech signal, playing an important role in many speech processing tasks. However, it is difficult to integrate phonetic information into speaker verification systems since it occurs primarily at the frame level while speaker characteristics typically reside at the segment level. In deep neural network-based speaker verification, existing methods only apply phonetic information to the frame-wise trained speaker embeddings. To address this weakness, this paper proposes phonetic adaptation and hybrid multi-task learning and further combines these into c-vector and simplified c-vector architectures. Experiments on National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) 2010 show that the four proposed speaker embeddings achieve better performance than the baseline. The c-vector system performs the best, providing over 30% and 15% relative improvements in equal error rate (EER) for the core-extended and 10 s–10 s conditions, respectively. On the NIST SRE 2016, 2018, and VoxCeleb datasets, the proposed c-vector approach improves the performance even when there is a language mismatch within the training sets or between the training and evaluation sets. Extensive experimental results demonstrate the effectiveness and robustness of the proposed methods.
SN - 1687-4722
UR - https://doi.org/10.1186/s13636-019-0166-8
DO - 10.1186/s13636-019-0166-8
ID - Liu2019
ER -