Browsing by subject "deep learning"

Now showing 1–20 of 34
  • Record · Open access
    Asymmetric Deep Multi-Task Learning
    (Tartu Ülikool, 2024) Maharramov, Ali; Matiisen, Tambet; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Tehnoloogiainstituut
    Recent developments make deep neural networks a valuable asset for autonomous driving. They can be deployed as an end-to-end system or as part of a more complex system for specific tasks. If a system needs neural networks for several tasks, multi-task learning (MTL) offers several benefits over deploying separate single-task learning (STL) models, such as better time and space complexity at deployment and potentially improved generalization in the backbone network. However, MTL often faces unique challenges. Many existing MTL datasets have limited labels or lack the labels required for specific tasks, and generating labels for these tasks costs researchers time and resources. Training a model on an asymmetrically labeled dataset, one where labels for certain tasks are unavailable for a subset of the data, can cause biased gradients, reflected in an imbalance in the accuracy of the tasks. In this thesis, asymmetric MTL was investigated and compared to symmetric MTL and STL methods.
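The biased-gradient problem with asymmetric labels that this abstract describes is commonly handled by masking the per-task loss so that samples missing a label for a task contribute nothing to that task's gradient. A minimal sketch of the idea (the function name and the squared-error loss are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

def masked_multitask_loss(preds, targets, masks, weights=None):
    """Average per-task squared-error loss, skipping samples whose
    task label is missing (mask == 0), so absent labels do not
    pull the gradient of that task head."""
    losses = []
    for p, y, m in zip(preds, targets, masks):
        m = np.asarray(m, dtype=float)
        if m.sum() == 0:           # no labels at all for this task
            losses.append(0.0)
            continue
        err = (np.asarray(p) - np.asarray(y)) ** 2
        losses.append(float((err * m).sum() / m.sum()))
    if weights is None:
        weights = [1.0] * len(losses)
    return sum(w * l for w, l in zip(weights, losses)) / len(losses)
```

In a deep learning framework the same masking is applied before backpropagation, so a task with no label on a given sample receives exactly zero gradient from it.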
  • Record · Open access
    Automated segmentation of various features of glioblastoma in histopathological images
    (2022) Sedykh, Ekaterina
  • Record · Open access
    Brain abnormality detection using statistical analysis of individual structural connectivity networks and EEG signals
    (2023-11-27) Avots, Egils; Anbarjafari, Gholamreza, juhendaja; Bachmann, Maie, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond
    Cutting-edge research in medical science and artificial intelligence is poised to reshape the diagnosis of brain disorders. This doctoral thesis, "Brain abnormality detection using statistical analysis of individual structural connectivity networks and EEG signals", focuses on Alzheimer's disease and clinical depression, using state-of-the-art technologies to drive innovation. Brain abnormalities arising from birth, trauma, disease, or other circumstances significantly affect a person's physical and mental health. The thesis addresses two main topics: Alzheimer's disease diagnosed via MRI, and clinical depression detected via EEG. Machine learning algorithms interpret brain scan images to identify disease-specific patterns, such as changes in brain structure in Alzheimer's disease; data analysis based on these imaging patterns enables faster and more accurate diagnosis. For clinical depression, machine learning analyses EEG recordings to detect changes in brain activity and predict the presence of depression. EEG makes it possible to measure depression-related brain activity, and machine learning can identify the clinical picture; analysis of EEG patterns enables successful patient classification and thereby faster diagnosis. The thesis highlights human ingenuity and the potential of artificial intelligence to improve healthcare. It points to a new era in the diagnosis of brain disorders, in which Alzheimer's disease and clinical depression can be detected faster and more accurately than before, and in the future such technologies are likely to drive a range of improved healthcare solutions.
  • Record · Open access
    Causality Management and Analysis in Requirement Manuscript for Software Designs
    (Tartu Ülikool, 2023) Oluyide, Olumide Olugbenga; Gambo, Ishaya Peni, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    For software design tasks involving natural language, the results of a causal investigation provide valuable and robust semantic information, especially for identifying key variables during product (software) design and optimization. As the interest in analytical data science shifts from correlations to a better understanding of causality, an equally important task is accurately extracting causality from textual artifacts to support requirement engineering (RE) decisions. This thesis focuses on identifying, extracting, and classifying causal phrases using word and sentence labeling based on the Bidirectional Encoder Representations from Transformers (BERT) deep learning language model and five machine learning models. The aim is to understand the form and degree of causality based on its impact and prevalence in RE practice. Methodologically, our analysis is centered on RE practice, and we considered 12,438 sentences extracted from 50 requirement engineering manuscripts (REM) for training our machine models. Our research reports that causal expressions constitute about 32% of sentences in REM. We applied four evaluation metrics, namely recall, accuracy, precision, and F1, to assess our models' performance and to ensure the results conform with our study goal. The highest model accuracy, 85%, was achieved by Naive Bayes. Finally, we note that our causal analysis framework is relevant to practitioners for different purposes, such as generating test cases for requirement engineers and software developers, and auditing product performance for management stakeholders.
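The four evaluation metrics named in this abstract are the standard ones for a binary classifier such as a causal/non-causal sentence labeler. A self-contained sketch of how they are computed from predictions (illustrative, not the thesis's code):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```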
  • Record · Open access
    CDR-Based Trajectory Reconstruction Using Transformers
    (Tartu Ülikool, 2022) Bollverk, Oliver; Hadachi, Amnir, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    With the development of telecommunication technologies, mobile devices, and data collected via mobile services, predicting the paths that individuals take in cities has become of great interest. With sparse mobility data, the goal of researchers is to build models that can fill the gaps, in other words, to reconstruct an individual's trajectory. Recent models proposed for this task utilize Call Detail Records (CDRs), produced when a mobile phone connects to the cellular network, using Monte Carlo or Hidden Markov Model (HMM) based approaches. In this thesis, a novel deep learning method for trajectory reconstruction from CDR data is introduced. GPS points are linked to roads on a road network constructed from the OpenStreetMap (OSM) database, and the resulting labels are used as ground truth during training. Drawing inspiration from prior work on matching GPS points to a road network using Transformer neural networks, we present a framework that uses two Transformers sequentially with partially modified architectures. The final result is a trained Transformer able to predict the road-level path knowing only the cell in the area where the movement started. The accuracy of estimating the taken path was compared with that of prior approaches that use probabilistic modeling to predict the next location from CDR data.
  • Record · Open access
    Cell Cycle Phase Classification from Microscopy Images
    (Tartu Ülikool, 2025) Zeynalli, Ali; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    Accurate classification of cell cycle phases is essential for cancer research and drug discovery. While fluorescence microscopy provides high-contrast, biomarker-specific signals that support precise classification, it relies on staining protocols that limit scalability and compromise cell viability. In contrast, bright-field microscopy offers a label-free, cost-effective alternative but poses challenges due to its lower contrast. This study compares five computational strategies for cell cycle phase classification using fluorescence and bright-field microscopy: traditional feature-based classification, segmentation-based classification, mask-guided classification via segmentation, nuclei patch classification, and nuclei patch classification via segmentation. Results show that fluorescence images support near-perfect classification performance across all methods. For bright-field images, the highest balanced accuracy of 0.770 was achieved using a nuclei patch classification approach with a ResNet-50 backbone, followed closely by mask-guided classification. These findings highlight the potential of deep learning models for accurate cell cycle classification in bright-field microscopy, advancing the potential for scalable applications in biomedical research.
  • Record · Open access
    Collecting and Using a Labeled Dataset of NATO Mission Task Symbols to Improve and Benchmark Detection Models
    (Tartu Ülikool, 2023) Açıkalın, Aral; Tampuu, Ardi, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    Neural networks are commonly used for object detection tasks but require immense amounts of data to train. For detecting North Atlantic Treaty Organization (NATO) mission task symbols with object detection neural networks, it is not possible to meet these data requirements, and labeling mission task symbols is very time-consuming and costly. This thesis aims to collect and label a dataset of NATO mission task symbols, propose part of it as a benchmark for our solutions and future ones, and finally propose different methods of using part of the scarce collected data to improve the performance of our object detection models. The YOLOv5 neural network was selected and used to experiment with different ways of using the scarce collected data. As a result, 113 images were collected and labeled, and five performance metrics are proposed for the benchmark. Finally, it was discovered that when dataset size is limited, extracting information from the dataset and using it to generate artificial data improves performance compared to introducing the scarce dataset directly to symbol detection models.
  • Record · Open access
    Computer vision meets microbiology: deep learning algorithms for classifying cell treatments in microscopy images
    (Tartu Ülikool, 2023) Zeynalli, Ali; Fishman, Dmytro, juhendaja
    Cell classification is one of the most complex challenges in cellular research, with significant importance for personalised medicine, cancer diagnostics, and disease prevention. Accurate classification of cells based on their unique characteristics provides valuable insights into a patient's health status and guides treatment decisions. Thanks to recent technological advancements, cellular research has made significant progress in the use of deep learning, which has become a valuable tool for tackling complicated tasks such as cell classification. In this study, we explored the capability of state-of-the-art deep learning models such as ResNet, ViT, and Swin Transformer to automatically classify brightfield and fluorescent microscopy images, across single and multiple channels, into four cell treatments: Palbociclib, MLN8237, AZD1152, and CYC116. The results revealed that Swin Transformer surpasses the other models for cell treatment classification on multi-channel fluorescent and brightfield images, achieving the highest accuracies of 86% and 59%, respectively. However, the highest accuracy achieved on single-channel brightfield images was 61%, using the ResNet-50 model. Previous research has shown that combining multiple channels yields better performance, which warrants further investigation into the capacity of deep learning models to automate cell treatment classification of single- and multi-channel brightfield microscopy images.
  • Record · Open access
    Deep Learning Based Automated Job Candidate Interview Screening
    (Tartu Ülikool, 2019) Aktas, Kadir; Anbarjafari, Gholamreza, supervisor; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Tehnoloogiainstituut
    The traditional recruitment process is challenging for both the candidate and the employer. To apply for a job, the candidate needs to prepare a CV; the employer, in turn, needs to check all the submitted CVs and analyze the candidate data manually. This makes the process very time-consuming, especially when there are many candidates. Furthermore, the manual analysis of candidate data is very open to human bias. This thesis proposes an automated video interview analysis system that addresses the problems mentioned above.
  • Record · Open access
    Deep learning methods for cell microscopy image analysis
    (2024-04-24) Ali, Mohammed Abdulhameed Shaif; Fishman, Dmytro, juhendaja; Parts, Leopold, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond
    The cell is the basic building block of all living organisms and the site of almost all biological processes. Cells produce proteins and energy, transport materials, and dispose of waste. They can replicate through mitosis and communicate with their environment via signalling pathways; without cells, life would not exist. Understanding how cells work allows us to learn more about biological processes, diseases, and drugs. Cells are very small: a typical mammalian cell is about 10–20 μm in diameter, roughly one fifth of the smallest object the naked human eye can see. Since the human eye cannot see objects of this size, let alone study them in detail, scientists use specialised equipment: the microscope, a device that magnifies small objects that would otherwise be impossible to examine. In this thesis we analyse images acquired with the widely used light microscope, an instrument that exploits light and its properties to magnify microscopic objects. Thanks to advances in microscopy, the field has recently been automated to a remarkable degree. The volume of microscopy data has grown rapidly, as a single automated experiment can now produce millions of images [1, 2, 3], and automated, accurate analysis procedures are essential for studying such enormous image collections. Image analysis techniques have evolved continuously; machine learning algorithms have proven especially effective and are therefore widely used in this field. Many algorithms are readily available through bioimage analysis tools [4] such as ImageJ/Fiji [5], CellProfiler [6], and Ilastik [7]. Despite the wide adoption of such tools, they do not always yield sufficiently accurate results, since using traditional machine learning algorithms requires both engineering skills and domain knowledge [8].
The stability and reliability of such approaches are also affected by variability in signal quality and differences in imaging protocols, which are characteristic of high-throughput cell microscopy [9, 10]. Unlike traditional machine learning methods, which require careful preprocessing of the input data, deep learning methods automatically extract relevant patterns from raw data using multi-layered computational models. In convolutional neural networks, one family of deep learning methods, filters are slid across the image to detect its key features and patterns. Convolutional neural networks achieve state-of-the-art results in tasks such as image classification [11, 12], object detection [13, 14], and segmentation [15, 16]. The rapid progress of deep learning has continuously produced new insights that could be exploited in microscopy image analysis but have not yet been fully applied for this purpose. The thesis began by applying the latest deep learning methods to nuclei segmentation, often one of the first steps in cell microscopy image processing pipelines. Accurate nuclei segmentation is crucial for many biological applications: abnormalities in nuclear structure and morphology are often associated with diseases such as cancer, so segmenting nuclei and studying their properties contributes to cancer diagnosis and disease monitoring [17]. Nuclei detection is also used for cell tracking, which in turn makes it possible to study the behaviour of cellular systems and how it changes under different drugs [18]. Our main focus was on brightfield microscopy images, as they are relatively easy to produce but hard to inspect and analyse. We tested several state-of-the-art model architectures for nuclei segmentation and also created a new architecture, PPU-Net [19].
The evaluated models achieved varying results on nuclei segmentation from brightfield images; PPU-Net, however, matched the then state-of-the-art models while using 20 times fewer trainable parameters, making it lighter and less complex than its competitors. We also investigated the causes of the models' unstable results on different cell types and individual images, the number of training images required, and the most frequent sources of error. When examining the causes of PPU-Net's segmentation errors, we found that anomalies in the images (signal that does not reflect what is expected) are a major source of error, so we set out to understand the anomaly problem better. We examined various datasets and found that anomalies occur in different shapes and sizes and can significantly distort downstream analyses. To mitigate their impact, we created a framework that identifies and removes anomalies with minimal human effort, using only image-level labels rather than the far more laborious pixel-level labels [20]. Since anomalies are complex by nature and labelling them at the pixel level is laborious and time-consuming, we proposed using only image-level labels for this task. First, we proposed using a method called Score-CAM [21], which interprets deep learning image classifiers by highlighting the image regions or features on which the model bases its decision. When a model classifies whether an image contains anomalies, the hypothesis is that the most influential image region for its decision is precisely the anomaly; this hypothesis was later confirmed empirically. The Score-CAM output was then used as pseudo-labels for training a segmentation model [20]. In this way we combine the quality of pixel-level segmentation with the convenience of collecting image-level labels.
We named the proposed framework ScoreCAM-U-Net and anticipate that removing unwanted objects will likely become a standard part of the processing pipeline in all large-scale microscopy experiments. Finally, we applied the acquired knowledge in a real-world context: we studied the value of deep learning methods for anomaly removal and segmentation in drug discovery research. For this we collaborated with chemists and biologists studying one of the most prominent cell membrane receptors, M4. Despite its growing importance, developing new drugs targeting this receptor has proven difficult [22]. Our collaborators used high-affinity fluorescent ligands to study the binding interaction with the M4 receptor. This requires isolating the cells and examining the fluorescence signal within them, whose strength depends on the affinity between the protein and the ligand and on the ligand concentration. The signal produced by the fluorescent ligands in the cells was not sufficient for a model to distinguish cells from the background, so we decided to segment cells from the more complex but fluorescence-independent brightfield images. First, we used deep learning to segment cell bodies from brightfield images. Next, we analysed the fluorescence signal of the cells, extracted from the corresponding fluorescence images using the cell coordinates obtained from the segmentation results. We then examined how anomaly removal affected the signal arising from the receptor-ligand interaction in the brightfield images. We showed that removing the anomalies made the signal less biased, and that using our model to remove them produced a nearly optimal result.
  • Record · Open access
    Development of EEG-Based BCI Application Using Machine Learning to Classify Motor Movement and Imagery
    (Tartu Ülikool, 2020) Roots, Karel; Muhammad, Yar, juhendaja; Muhammad, Naveed, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    A brain-computer interface (BCI) is a system that implements human-computer communication by interpreting brain signals. The signals can be recorded through different neuroimaging techniques that read brain activity, such as electroencephalography (EEG). The goal of BCI technology is to enable the user to communicate with or control an external device using their mind. BCIs are widely used in medicine to help patients with limited motor abilities communicate with their environment. However, building a BCI capable of classifying the subject's intention faces many challenges, such as the highly individualized nature of brain waves, which makes the development of a universal classifier difficult. This work aimed to develop a better EEG-based machine learning classifier capable of cross-subject motor movement and imagery classification, and to build a BCI system to validate the performance of the developed classifier. The classifier was based on convolutional neural networks (CNNs) with a multi-branch feature fusion approach. It was developed using the TensorFlow machine learning framework; the BCI system was developed in Python using the PyQt framework, and the Emotiv EPOC EEG device was used for signal collection. The resulting classifier was tested on a publicly available dataset of 103 subjects and achieved an accuracy of 84.1% when predicting executed left- or right-hand movement and 83.8% when predicting imagined left- or right-hand movement.
  • Record · Open access
    Every Click Counts: Deep Learning Models for Interactive Segmentation in Biomedical Imaging
    (Tartu Ülikool, 2024) Vaiciukevičius, Donatas; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    The field of radiology is facing an extraordinary surge in demand, driven largely by an ageing population and the resulting pressure on an already understaffed workforce. The need has grown for technological innovations that help cope with the ever-increasing workload. In some areas machine learning has been applied to ease this burden, but several bottlenecks remain unresolved. One unavoidable part of oncological diagnostics is a remarkably labour- and time-intensive task: the manual measurement of tumours detected in computed tomography images. This thesis explored the use of interactive deep learning models to assist radiologists and thereby improve the diagnostic workflow. Different techniques, such as RITM and FocalClick, were evaluated for analysing computed tomography images. Investigating these methods led to the adoption of dynamic radius disk encoding, which added a new dimension to click-based user input and significantly increased model performance. This innovation reduces the need for repeated interactions and improves segmentation quality with fewer clicks. In addition, an improved augmentation strategy is proposed and a novel metric for evaluating interactive segmentation models is introduced. Our results demonstrate the effectiveness of combining interactive segmentation methods with dynamic radius disk encoding to improve radiological diagnostics, and they represent a promising direction for further research into optimising these methods for clinical use.
  • Record · Open access
    Exploring Out-of-Distribution Detection Using Vision Transformers
    (Tartu Ülikool, 2022) Haavel, Karl Kaspar; Kull, Meelis, juhendaja; Leelar, Bhawani Shankar, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    Current state-of-the-art artificial neural network (ANN) image classifiers perform well on input data from the distribution they were trained on, known as in-distribution (InD), yet perform worse on out-of-distribution (OOD) samples. An input can be considered OOD for many reasons, such as containing a new concept (e.g. a new class) or random noise generated by a sensor. Knowing whether a new data point is OOD is necessary for deploying models in real-world safety-critical applications (e.g. self-driving cars, healthcare) to make safer decisions. For example, a self-driving car slows down when it detects an OOD object or hands control back to the human. The primary method for OOD detection is to use an ANN as a feature extractor: the new data point is embedded and compared to the training embeddings using distance metrics. We use a Vision Transformer (ViT) as the ANN because there has been a push to use large-scale pre-trained Transformers to improve a range of OOD tasks. The improvements stem from ViT's state-of-the-art performance as a feature extractor, which can be used out-of-the-box for OOD detection, whereas convolutional neural networks (CNNs) require custom training methods and exposure to OOD data to reach similar results. In this thesis, a ViT was used as a feature extractor, and OOD detection performance was compared across various distance metrics to determine robustness and choose the best metric in ViT's embedding space. Three separate experiments were conducted with multiple datasets, methods, models, and approaches. The experiments showed that ViT is capable of OOD detection out-of-the-box without any custom training methods or exposure to OOD data. However, none of the distance metrics noticeably improved on the results obtained with the baseline Mahalanobis distance.
Nonetheless, ViT has considerably better OOD detection performance on most datasets and is more generalisable than a similarly trained CNN. Furthermore, ViT is more robust to the choice of distance metric, showing that the features extracted from the model are good enough to discriminate between InD and OOD. Finally, it was shown that ViT with Mahalanobis distance has the best OOD detection performance when blending InD and OOD at various ratios. Future work could ensemble multiple distance metrics to exploit the properties of each, and apply the same methodology to other ANN architectures.
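The Mahalanobis-distance baseline this abstract refers to can be sketched as fitting a Gaussian to the InD embeddings and scoring new points by their distance to it. A minimal illustrative sketch (function names are assumptions; the embeddings are assumed to come from a feature extractor such as a ViT):

```python
import numpy as np

def fit_gaussian(embeddings):
    """Fit mean and inverse covariance to InD embeddings (N x D)."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])   # regularize for invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, cov_inv):
    """Distance of one embedding to the InD Gaussian; higher = more OOD."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

A point is flagged OOD when its score exceeds a threshold chosen on held-out InD data.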
  • Record · Open access
    Exploring the Value of Weakly-Supervised Deep Learning Approaches for Artefact Segmentation in Brightfield Microscopy Images
    (Tartu Ülikool, 2021) Hollo, Kaspar; Ali, Mohammed, juhendaja; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    Brightfield microscopy is of great importance as it offers researchers a relatively simple way to quantify cellular experiments. However, brightfield images often contain a variety of artefacts that should be segmented and thereafter neutralized so that they would not affect the quantitative measurements of cellular experiments. While fully-supervised deep learning models offer state-of-the-art performance in most segmentation tasks in computer vision, it is laborious to acquire the pixel-level labels needed to train these models. Alternatively, segmentation tasks can also be solved using more time- and cost-effective weakly-supervised deep learning models that use image-level labels for training. In this thesis, we compare the performances of fully- (e.g., U-Net) and weakly-supervised approaches (e.g., Score-CAM) to determine whether weakly-supervised approaches could be used as a cheaper but still well-performing solution for segmenting artefacts in brightfield images. Six separate experiments with various fully- and weakly-supervised approaches, image datasets and method ensembles are carried out. The results of the experiments showed that with the number of images and labels currently available, none of the weakly-supervised approaches were able to replicate the performance of the baseline fully-supervised approach. However, some of the weakly supervised approaches, like the combined Score-CAM and U-Net approach, showed promising segmentation results. Moreover, the same approach also showed better generalizability on an unseen dataset than the baseline fully-supervised approach. Future work is required to find the amount of weak supervision signal needed to match the performance of the fully-supervised approaches.
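A weakly-supervised pipeline like the combined Score-CAM and U-Net approach above typically turns a class activation map into a pixel-level pseudo-label by normalising and thresholding it. A minimal illustrative sketch (the function name and threshold value are assumptions, not the thesis's implementation):

```python
import numpy as np

def cam_to_pseudo_mask(cam, threshold=0.5):
    """Normalise a class activation map to [0, 1] and threshold it
    into a binary pseudo-label usable for training a segmenter."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam >= threshold).astype(np.uint8)
```

The resulting masks are noisy, which is why such pseudo-labels are usually refined by training a fully-supervised model (e.g. a U-Net) on top of them.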
  • Record · Open access
    Fast Fourier Convolutions in Self-Supervised Neural Networks for Image Denoising
    (Tartu Ülikool, 2022) Ariva, Joonas; Papkov, Mikhail, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    The quality of digital images depends on a multitude of environmental and equipment factors. In many cases our options for optimizing imaging conditions are limited, and the acquired images turn out to be corrupted with noise. Recently, denoising convolutional neural networks (CNNs) have started to outperform classical denoising algorithms. Approached naively, these networks require many pairs of noisy and clean images from the particular domain, and in some fields (e.g. biomedical imaging) it is hard to collect such data in abundance. This limitation has accelerated research into self-supervised networks that can learn denoising from noisy images alone. However, such networks' performance can be constrained by the limited receptive field of regular convolutions. To mitigate this problem, a new modification for CNNs was proposed: the Fast Fourier Convolution (FFC). Here, a global receptive field is achieved by applying the Fourier transform and convolving the spectral representation. A global receptive field can help CNNs better capture dependencies between image regions that are far apart. Given the ability of FFC to enhance multiple state-of-the-art classification networks, we hypothesize that denoising networks could also gain from its use. In this work, we design multiple approaches for incorporating FFC into self-supervised neural networks for image denoising. We evaluate these approaches on three diverse benchmark datasets and compare them with both supervised and self-supervised methods. We empirically show that the FFC-enhanced denoising network achieves state-of-the-art results on the character dataset and shows a comparable level of performance for both grayscale and color natural images.
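The core idea of the FFC's spectral branch, multiplying the image spectrum by learnable weights so that every output pixel depends on every input pixel, can be sketched in a few lines (an illustrative single-channel sketch, not the full FFC block):

```python
import numpy as np

def spectral_conv2d(image, freq_weights):
    """Multiply the image's 2-D spectrum by (learnable) weights and
    transform back: equivalent to a circular convolution whose kernel
    spans the whole image, i.e. a global receptive field."""
    spectrum = np.fft.rfft2(image)
    return np.fft.irfft2(spectrum * freq_weights, s=image.shape)
```

With all-ones weights the operation is the identity; a learned `freq_weights` implements a convolution whose kernel covers the entire image, which a small spatial kernel cannot.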
  • Record, open access
    Gender bias in facial expression recognition
    (Tartu Ülikool, 2021) Domnich, Artem; Anbarjafari, Gholamreza, supervisor; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    The rapid development of artificial intelligence (AI) systems amplifies many concerns in society. AI algorithms inherit various biases from humans, and their opaque operation makes these biases hard to trace, which undermines their use. As a result, researchers have started to address the issue by investigating Responsible and Explainable AI. Among the many applications of AI, facial expression recognition may not be the most important one, yet it is considered a valuable part of human-AI interaction. The evolution of facial expression recognition from feature-based methods to deep learning has drastically improved the quality of such algorithms. This thesis studies gender bias in deep learning methods for facial expression recognition by training six distinct neural networks and analysing them for the presence of bias according to three definitions of fairness. The main outcomes show which models are gender-biased, which are not, and how the gender of a subject affects emotion recognition. More biased neural networks show a bigger accuracy gap in emotion recognition between male and female test sets, and this trend holds for true positive and false positive rates as well. In addition, due to the nature of the research, we can observe which types of emotions are classified better for men and which for women. Since biases in facial expression recognition are not well studied, there is a broad spectrum of possible continuations of this research, including detailed analysis of state-of-the-art methods and the investigation of other biases.
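The accuracy-gap comparison described in the abstract can be sketched in a few lines of NumPy (the labels and group names below are made up for illustration): compute accuracy separately on each gender group and take the absolute difference.

```python
import numpy as np

def group_accuracies(y_true, y_pred, group):
    """Accuracy computed separately for each subject group; the absolute
    difference between groups is one simple measure of bias."""
    return {g: float(np.mean(y_true[group == g] == y_pred[group == g]))
            for g in np.unique(group)}

# Toy predictions: the model is perfect on group "f" but not on group "m".
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
group  = np.array(["m", "m", "m", "f", "f", "f"])

acc = group_accuracies(y_true, y_pred, group)
gap = abs(acc["m"] - acc["f"])   # accuracy gap between the two test sets
```

The same per-group masking extends directly to true-positive and false-positive rates, the other quantities the thesis compares across genders.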
  • Record, open access
    Impact of Input Dataset Size and Fine-tuning on Faster R-CNN with Transfer Learning
    (Tartu Ülikool, 2023) Zheng, Wei; Björklund, Tomas, supervisor; Pinheiro, Victor Henrique Cabral, supervisor; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    Deep learning models are widely used for machine learning tasks such as object detection. The lack of available data to train these models is a common hindrance in many industrial applications, where data gathering/annotation and insufficient computational resources often impose a barrier to the financial feasibility of deep learning implementations. Transfer learning is a possible answer to this challenge by exploiting the information learned by a model from data in a different domain than that of the target dataset. This technique has been typically applied on the backbone network of a two-stage object detection pipeline. In this work, we investigate the association between the input dataset size and the proportion of trainable layers in the backbone. In particular, we show some interesting findings on Faster R-CNN ResNet-50 FPN, a state-of-the-art object detection model, and MS COCO, a benchmarking dataset. The outcomes of our experiments indicate that, although a model generally performs better when trained with more layers fine-tuned to the training data, such an advantage reduces as the input dataset becomes smaller, as unfreezing too many layers can even lead to a severe overfitting problem. Choosing the right number of layers to freeze when applying transfer learning not only allows the model to reach its best possible performance but also saves computational resources and training time. Additionally, we explore the association between the effect of learning rate decay and the input dataset size, and also discuss the advantage of using pre-trained weights when compared to training a network from scratch.
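The freeze/fine-tune trade-off can be sketched independently of any framework. The stage names below follow the ResNet convention (the same one behind torchvision's `trainable_backbone_layers` option), but the helper itself is illustrative:

```python
def trainable_flags(param_names, n_trainable_blocks):
    """Mark only the last n_trainable_blocks ResNet stages as trainable;
    everything below them stays frozen, which saves compute and, on small
    datasets, guards against overfitting."""
    stages = ["conv1", "layer1", "layer2", "layer3", "layer4"]
    trainable = stages[len(stages) - n_trainable_blocks:] if n_trainable_blocks > 0 else []
    return {name: any(name.startswith(s) for s in trainable)
            for name in param_names}

params = ["conv1.weight", "layer1.0.conv1.weight",
          "layer3.0.conv2.weight", "layer4.1.conv1.weight"]
flags = trainable_flags(params, n_trainable_blocks=2)  # fine-tune layer3 + layer4
```

In a training loop, the resulting flags would set each parameter's `requires_grad`; the thesis's finding is that the best value of `n_trainable_blocks` shrinks as the target dataset shrinks.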
  • Record, open access
    Improved Classification of Blurred Images with Deep-Learning Networks Using Lucy-Richardson-Rosen Algorithm
    (Licensee MDPI, 2023) Jayavel, Amudhavel; Gopinath, Shivasubramanian; Angamuthu, Praveen Periyasamy; Arockiaraj, Francis Gracy; Bleahu, Andrei; Xavier, Agnes Pristy Ignatius; Smith, Daniel; Han, Molong; Slobozhan, Ivan; Ng, Soon Hock; Katkus, Tomas; Rajeswary, Aravind Simon John Francis; Sharma, Rajesh; Juodkazis, Saulius; Anand, Vijayakumar
    Pattern recognition techniques form the heart of most, if not all, incoherent linear shift-invariant systems. When an object is recorded using a camera, the object information is sampled by the point spread function (PSF) of the system, replacing every object point with the PSF in the sensor. The PSF is a sharp Kronecker-delta-like function when the numerical aperture (NA) is large and there are no aberrations. When the NA is small and the system has aberrations, the PSF appears blurred. In the case of aberrations, if the PSF is known, then the blurred object image can be deblurred by scanning the PSF over the recorded object intensity pattern and looking for pattern-matching conditions through a mathematical process called correlation. Deep learning-based image classification for computer vision applications has gained attention in recent years. The classification probability is highly dependent on the quality of images, as even a minor blur can significantly alter the image classification results. In this study, a recently developed deblurring method, the Lucy-Richardson-Rosen algorithm (LR2A), was implemented to computationally refocus images recorded in the presence of spatio-spectral aberrations. The performance of LR2A was compared against the parent techniques: the Lucy-Richardson algorithm and non-linear reconstruction. LR2A exhibited a superior deblurring capability even in extreme cases of spatio-spectral aberrations. Experimental results of deblurring a picture recorded using high-resolution smartphone cameras are presented. LR2A was implemented to significantly improve the performance of widely used deep convolutional neural networks for image classification.
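For context, the parent Lucy-Richardson iteration (not LR2A itself) can be sketched with FFT-based circular convolution; the toy PSF used below is illustrative and must be normalized to sum to one:

```python
import numpy as np

def conv2_circ(a, b):
    """Circular 2-D convolution via the FFT (a and b share one shape)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr2_circ(a, b):
    """Circular 2-D correlation, i.e. convolution with the flipped kernel."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def richardson_lucy(observed, psf, iterations=50):
    """Classic Lucy-Richardson deconvolution (the parent of LR2A):
    multiplicatively rescale the estimate by the back-projected ratio of
    the observed image to the current re-blurred estimate."""
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = conv2_circ(estimate, psf)
        estimate = estimate * corr2_circ(observed / np.maximum(blurred, 1e-12), psf)
    return estimate
```

The multiplicative update preserves positivity of the estimate; LR2A modifies this scheme, per the paper, to cope with spatio-spectral aberrations that defeat the plain iteration.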
  • Record, open access
    Improving Microscopy Image Segmentation with Object Detection
    (Tartu Ülikool, 2021) Urukov, Dmytro; Papkov, Mikhail, supervisor; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    Automated analysis of microscopy images is an essential part of modern biological research. Recent advances in deep learning have greatly improved its quality and helped decrease the amount of time-consuming manual work during the experiments. Biologists are interested not only in the accurate detection of various objects (whole cells, cell organelles, tissue structures, etc.) but also in the high-quality segmentation of their shape. In this work, we address the problem of obtaining realistic instance segmentation masks from images with high object density. We show that combining segmentation and detection methods into a single image analysis pipeline helps efficiently separate overlapping objects and improves the segmentation quality. To reduce the complexity of this pipeline, we propose a novel CenterUNet multi-task neural network architecture that simultaneously performs object detection and semantic segmentation. We evaluate the performance of this architecture across several microscopy image domains and conduct a thorough ablation study to identify the necessary and sufficient combination of detection subtasks to solve the segmentation problem. We believe that the results of our research provide valuable insights and can help individual practitioners as well as the image analysis industry. Our developed model may improve microscopy image segmentation pipelines at virtually zero computational cost and little integration effort.
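One simple way to combine the two kinds of output (a sketch of the general idea, not the CenterUNet architecture itself) is to assign every foreground pixel of the semantic mask to the nearest detected object centre, which splits touching objects into separate instances:

```python
import numpy as np

def split_instances(semantic_mask, centers):
    """Turn a binary semantic mask into instance labels by assigning each
    foreground pixel to the nearest detected object centre (0 = background)."""
    ys, xs = np.nonzero(semantic_mask)
    pixels = np.stack([ys, xs], axis=1).astype(float)
    centers = np.asarray(centers, dtype=float)
    # Pairwise distances: one row per foreground pixel, one column per centre.
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = np.zeros(semantic_mask.shape, dtype=int)
    labels[ys, xs] = dists.argmin(axis=1) + 1
    return labels

# Two touching blobs on one row are separated by their detected centres.
mask = np.array([[1, 1, 1, 1, 1, 1]])
centers = [(0, 1), (0, 4)]
labels = split_instances(mask, centers)
```

This nearest-centre rule is the crudest possible combiner; the value of a joint architecture is that detection and segmentation share features and are trained together.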
  • Record, open access
    Kaugseire põhine loodusliku rohumaa ja põllumaa eristamine (Remote-sensing-based differentiation of natural grassland and cropland)
    (Tartu Ülikool, 2025) Gagarina, Kelli; Sepp, Tiit, supervisor; Ariva, Joonas, supervisor; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut
    This work focused on developing a machine learning-based classification model to identify cropland and natural grassland in satellite images from 2008 to 2012, providing an overview of land cover. The workflow included selecting appropriate datasets and model architecture, data preprocessing, and training the classification model. The final model was based on a U-Net architecture and trained using Landsat 7 satellite data. The model achieved an overall precision of 79%.

DSpace software copyright © 2002-2026 LYRASIS
