Target Speaker Extraction (TSE) aims to extract the clean speech of a target speaker from an audio mixture, thereby eliminating irrelevant background noise and interfering speech. While prior works have explored various speaker cues, including pre-recorded speech, visual information (e.g., lip motions and gestures), and spatial information, acquiring and selecting such strong cues is intricate in many practical scenarios.
In contrast, in this paper we condition TSE algorithms on semantic cues derived from limited and unaligned text content, such as the condensed points on a presentation slide, which are available in many scenarios such as meetings, poster sessions, and lecture presentations. We design two different networks. Specifically, our
proposed Prompt Text Extractor Network (PTE) fuses audio features with content-based semantic cues to facilitate the generation of masks that filter out extraneous noise, while our second proposal, the Text-Speech Recognition Network (TSR), employs contrastive learning to associate blindly separated speech signals with the semantic cues. Experimental results demonstrate the efficacy of our methods in accurately identifying the target speaker from semantic cues derived from limited and unaligned text, yielding an SI-SDRi of 12.16 dB, an SDRi of 12.66 dB, a PESQi of 0.830, and an STOIi of 0.150.
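
Since the abstract only outlines the PTE design, the following is a minimal sketch (in PyTorch) of how audio features could be fused with a text-derived semantic cue to predict a filtering mask. All module names, dimensions, and the concatenation-based fusion here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PromptTextExtractorSketch(nn.Module):
    """Illustrative cue-conditioned mask estimation (not the paper's exact PTE)."""

    def __init__(self, audio_dim=256, text_dim=768, hidden_dim=256):
        super().__init__()
        # Project the (unaligned) text-cue embedding into the audio feature space.
        self.text_proj = nn.Linear(text_dim, audio_dim)
        # Fuse the audio features with the broadcast text cue.
        self.fusion = nn.Sequential(
            nn.Linear(2 * audio_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, audio_dim),
        )
        # Sigmoid mask that suppresses feature bins of interfering sources.
        self.mask_head = nn.Sequential(nn.Linear(audio_dim, audio_dim), nn.Sigmoid())

    def forward(self, audio_feats, text_emb):
        # audio_feats: (batch, frames, audio_dim) from a mixture encoder
        # text_emb:    (batch, text_dim) pooled embedding of the slide/prompt text
        cue = self.text_proj(text_emb).unsqueeze(1)       # (batch, 1, audio_dim)
        cue = cue.expand(-1, audio_feats.size(1), -1)     # broadcast over frames
        fused = self.fusion(torch.cat([audio_feats, cue], dim=-1))
        mask = self.mask_head(fused)                      # values in (0, 1)
        return audio_feats * mask                         # masked target features
```

The masked features would then be decoded back to a waveform by whatever encoder/decoder backbone the extraction system uses.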
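Similarly, the TSR idea of matching blindly separated streams to the text cue via contrastive learning might look like the sketch below. The symmetric InfoNCE-style loss, the temperature value, and the argmax-based stream selection are common choices assumed here for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def tsr_contrastive_loss(speech_embs, text_embs, temperature=0.07):
    """InfoNCE-style loss pulling each speech embedding toward its paired text cue.

    speech_embs: (batch, dim) embeddings of separated speech signals
    text_embs:   (batch, dim) embeddings of the corresponding text cues
    """
    speech = F.normalize(speech_embs, dim=-1)
    text = F.normalize(text_embs, dim=-1)
    logits = speech @ text.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: match speech -> text and text -> speech.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def select_target(separated_embs, text_emb):
    """At inference, pick the separated stream most similar to the text cue."""
    sims = F.normalize(separated_embs, dim=-1) @ F.normalize(text_emb, dim=-1)
    return int(sims.argmax())
```

Under this formulation, separation and speaker identification are decoupled: any blind source separation front end can be used, and the text cue only has to rank the resulting streams.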