pTSE-T: Presentation Target Speaker Extraction using Unaligned Text Cues


Abstract

Target Speaker Extraction (TSE) aims to extract the clean speech of a target speaker from an audio mixture, eliminating irrelevant background noise and interfering speech. While prior work has explored various auxiliary cues, including pre-recorded speech, visual information, and spatial information, acquiring and selecting such strong cues is infeasible in many practical scenarios. In contrast, in this paper we condition the TSE algorithm on semantic cues extracted from limited and unaligned text content, such as the condensed points on a presentation slide. This approach is particularly useful in scenarios such as meetings, poster sessions, or lecture presentations, where acquiring other cues in real time may be challenging. To this end, we design two different networks; the Text Prompt Extractor Network (TPE), showcased in the demos below, fuses audio features with content-based semantic cues to facilitate time-frequency mask generation and filter out extraneous noise. Experimental results demonstrate that semantic cues derived from limited and unaligned text suffice to accurately extract the target speaker's speech, yielding an SI-SDRi of 12.16 dB, an SDRi of 12.66 dB, a PESQi of 0.830, and an STOIi of 0.150. The dataset and source code will be made publicly available.
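To make the conditioning idea concrete, below is a minimal, hypothetical sketch of text-conditioned time-frequency masking in the spirit of TPE. It is not the authors' architecture: the module choices, dimensions, and the additive fusion of a pooled text-cue embedding with per-frame spectral features are all illustrative assumptions.

import torch
import torch.nn as nn

class TextConditionedMasker(nn.Module):
    # Illustrative stand-in for a TPE-style extractor: a semantic cue
    # embedding is fused with mixture spectrogram features, and the network
    # predicts a time-frequency mask that suppresses non-target energy.
    def __init__(self, n_freq=257, text_dim=768, hidden=256):
        super().__init__()
        self.audio_proj = nn.Linear(n_freq, hidden)   # per-frame spectral features
        self.text_proj = nn.Linear(text_dim, hidden)  # pooled text-cue embedding
        self.rnn = nn.LSTM(hidden, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, n_freq), nn.Sigmoid())

    def forward(self, mag_spec, text_emb):
        # mag_spec: (B, T, F) magnitude spectrogram of the mixture
        # text_emb: (B, text_dim) embedding of the unaligned text cue
        a = self.audio_proj(mag_spec)              # (B, T, hidden)
        t = self.text_proj(text_emb).unsqueeze(1)  # (B, 1, hidden)
        fused, _ = self.rnn(a + t)                 # broadcast the cue over time
        mask = self.mask_head(fused)               # (B, T, F) mask in [0, 1]
        return mask * mag_spec                     # masked estimate of the target

In such a setup, text_emb could come from any pretrained sentence encoder applied to the slide-derived cue, and the masked magnitude would be combined with the mixture phase and inverted back to a waveform.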

Demos

2mix (two-speaker mixtures)

Example 1
text1: A person is discussing related work in separable convolution, specifically a figure representing separable convolution with an input tensor of shape H x W x 3, and mentioning depthwise convolution with a kernel size K x K x 3 x 3, followed by pointwise convolution of size 1 x 1 with a kernel size K x K, outputting an output tensor with the same spatial dimensions but possibly more channels. Other mentioned components include a standard convolution figure and parameters related to these convolutions.
text2: A person is discussing various methods for audio laughter synthesis, including Method 1 using samples from the AmuS dataset, Method 2 based on the HTS method from Speechlaughs, Method 3 utilizing a seq2seq model and trained data, and Method 4 enhancing Method 3 with an additional waveform correction technique.
[Audio samples: mixture · ground truth s1 · ground truth s2 · TPE-s1 · TPE-s2 · AudioSep-s1 · AudioSep-s2 · CLAPSep-s1 · CLAPSep-s2]

Example 2
text1: A person is discussing Text-to-Speech (TTS) systems and their speech encoders, emphasizing that these systems can learn speaker information without the need for explicit speaker ID labels. Speech input is being transformed into text using an encoder, sometimes employing Automatic Speech Recognition (ASR) representations without human annotation for text input.
text2: A person is discussing speech command recognition performance, focusing on the Google Speech Commands dataset for v1 and v2, presenting various models, their accuracy, and model parameters, mentioning MatchboxNet, ResNet15, DenseNetBC100, Attention RNN, Harmonic Tensor 2DCNN, and Embedding Head Model, with reference to their respective papers and being close to the state of the art with fewer parameters for both versions of the dataset.
[Audio samples: mixture · ground truth s1 · ground truth s2 · TPE-s1 · TPE-s2 · AudioSep-s1 · AudioSep-s2 · CLAPSep-s1 · CLAPSep-s2]
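The comparisons above (TPE vs. AudioSep [1] and CLAPSep [2]) are the kind of outputs scored by the SI-SDRi figure reported in the abstract. For reference, here is a minimal sketch of the standard scale-invariant SDR improvement computation; the helper names are ours, not the paper's evaluation code.

import torch

def si_sdr(est, ref, eps=1e-8):
    # Zero-mean both signals so the metric is insensitive to DC offset.
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference; the projection is the "target"
    # component and the residual counts as distortion.
    alpha = (est * ref).sum(dim=-1, keepdim=True) / (ref.pow(2).sum(dim=-1, keepdim=True) + eps)
    target = alpha * ref
    noise = est - target
    return 10 * torch.log10(target.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + eps))

def si_sdri(est, mixture, ref):
    # Improvement over simply taking the unprocessed mixture as the estimate.
    return si_sdr(est, ref) - si_sdr(mixture, ref)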

References


  1. Liu, Xubo, et al. "Separate anything you describe." arXiv preprint arXiv:2308.05037 (2023).
  2. Ma, Hao, et al. "CLAPSep: Leveraging Contrastive Pre-trained Models for Multi-Modal Query-Conditioned Target Sound Extraction." arXiv preprint arXiv:2402.17455 (2024).