Title: Heliklipist signaali eraldamine ja minimaalse treeningandmestiku suuruse tuvastamine olmehelide klassifitseerimiseks (Signal extraction from an audio clip and identification of the minimum training dataset size for classifying everyday sounds)
Author: Kaare, Johanna
Supervisors: Sepp, Tiit; Palts, Tauno
Institution: Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
Date issued: 2025
Date available: 2025-10-20
URI: https://hdl.handle.net/10062/116866
Language: Estonian (et)
License: https://creativecommons.org/licenses/by-nc-nd/4.0/
Type: Thesis (bachelor's thesis)
Keywords: audio classification; signal extraction from an audio clip; minimum training set size; informatics; information technology

Abstract: People with hearing impairments may not hear important sounds, which can make daily life more challenging; for this reason, they need sound detection tools. Before sounds can be detected, the signal must be separated from background noise in audio clips, and the minimum size of the training dataset needed for sound detection must be determined. In this work, six algorithms were developed for extracting signals from audio clips, grouped into amplitude-based and frequency-based methods. All of the developed algorithms extracted signals from audio clips better than WebRTC, Google's voice activity detection method. Additionally, a Student's t-test showed that the minimum number of audio files required to retrain a YAMNet-based sound detection model is 10.
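The abstract groups the six extraction algorithms into amplitude-based and frequency-based methods but does not describe them. As a purely illustrative sketch of the amplitude-based idea, and not a reconstruction of any of the thesis's actual algorithms, the following Python snippet keeps only the frames of a clip whose RMS level exceeds a fixed dB threshold; the function name, frame length, threshold, and synthetic test clip are all assumptions made for the example.

import numpy as np

def extract_signal_amplitude(samples, sr, frame_ms=20, threshold_db=-35.0):
    # Split the clip into fixed-length, non-overlapping frames.
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    # RMS level of each frame, in dB relative to full scale.
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rms_db = 20 * np.log10(rms + 1e-10)  # epsilon avoids log(0) on silent frames
    # Keep only frames louder than the threshold and concatenate them.
    return frames[rms_db > threshold_db].reshape(-1)

# Hypothetical usage: quiet noise with a louder 440 Hz tone in the middle.
sr = 16000
rng = np.random.default_rng(0)
clip = 0.01 * rng.standard_normal(sr)
t = np.arange(6000, 10000) / sr
clip[6000:10000] += 0.5 * np.sin(2 * np.pi * 440 * t)
signal = extract_signal_amplitude(clip, sr)  # roughly the 0.25 s tone region survives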