Browsing by Author "Fishman, Dmytro, juhendaja"
Now showing 1 - 12 of 12
Item
Automatic Road Boundaries Extraction for High Definition Maps (Tartu Ülikool, 2021)
Zabolotnii, Dmytro; Muhammad, Naveed, juhendaja; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Autonomous Vehicles (AV) research moves forward and promises to create a driving process that is safer and more efficient than what the vast majority of humans can achieve. However, just like humans, autonomous vehicles still rely on maps of their surroundings to conduct most of their operational sub-tasks. These maps are enriched with a large quantity of additional information for a more accurate representation of the natural world, earning the common name of High Definition (HD) map. The rapid rise in the field's popularity has also brought a great deal of attention to HD map creation and maintenance. Still, to this day, almost all HD maps are created with many hours of expert human labor, raising their cost and creating barriers to broader adoption. In this work, we review recent advances in automatic HD map creation and apply novel methods to extract road information, namely road boundaries. We strive to create an automatic system capable of extracting the necessary information from LIDAR data collected by vehicles deployed in urban conditions, with a high degree of accuracy and tolerance to externalities such as weather conditions or road construction details. To evaluate the system, we use the publicly available nuScenes dataset and compare the automatically created road boundaries against the manually drafted ground truth it provides. The system achieves a precision of 0.62 and a recall of 0.31 at a distance tolerance of 40 cm.

Item
Change Detection in HD-Maps Using Camera Images for Autonomous Driving (Tartu Ülikool, 2021)
Roshan, Navid Bamdad; Fishman, Dmytro, juhendaja; Muhammad, Naveed, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Self-driving vehicles have been an exciting field of research for both industry and academia in the last decade. The map is one of the challenging aspects of this research. Maps have been used for routing in transportation for centuries, and autonomous vehicles can use them for routing as well. Maps suitable for autonomous driving are called high-definition (HD) maps. HD maps are more accurate than ordinary maps, with details at centimeter-level accuracy, and they also contain more information than regular maps: for instance, HD maps describe the autonomous vehicle's surrounding environment, including streets, lanes, traffic rules, traffic signs, traffic lights, and so on. This additional information helps autonomous vehicles perceive the environment better and move more safely and efficiently. Autonomous vehicles therefore need the details in HD maps to be up to date at all times, so whenever the environment changes, the changes must be detected and the HD maps updated accordingly. It is thus essential to design an automatic solution for detecting changes in the environment. This work proposes an automatic pipeline for detecting changes in the drivable area of streets. The proposed pipeline detects any change that alters the drivable path of a street, and the detected changes can later be used to update the HD maps.

Item
Computer vision meets microbiology: deep learning algorithms for classifying cell treatments in microscopy images (Tartu Ülikool, 2023)
Zeynalli, Ali; Fishman, Dmytro, juhendaja

Cell classification is one of the most complex challenges in cellular research, with significant importance for personalised medicine, cancer diagnostics and disease prevention. The accurate classification of cells based on their unique characteristics provides valuable insights into a patient's health status and helps guide treatment decisions.
Thanks to recent technological advancements, cellular research has made significant progress in applying deep learning, which has become a valuable tool for tackling complicated tasks such as cell classification. In this study, we explored the capability of state-of-the-art deep learning models such as ResNet, ViT and Swin Transformer to automatically classify brightfield and fluorescent microscopy images, across single and multiple channels, into four cell treatments: Palbociclib, MLN8237, AZD1152, and CYC116. The results revealed that Swin Transformer surpasses the other models for cell treatment classification on multi-channel fluorescent and brightfield images, achieving the highest accuracies of 86% and 59%, respectively. However, the highest accuracy achieved on single-channel brightfield images was 61%, using the ResNet-50 model. Previous research has shown that combining multiple channels yields better performance, which motivates further investigation into the capacity of deep learning models to automate cell treatment classification on single- and multi-channel brightfield microscopy images.

Item
Deep learning methods for cell microscopy image analysis (2024-04-24)
Ali, Mohammed Abdulhameed Shaif; Fishman, Dmytro, juhendaja; Parts, Leopold, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond

The cell is the basic building block of all living organisms and the site of nearly all biological processes. Cells produce proteins and energy, transport materials and dispose of waste; they can replicate themselves through mitosis and communicate with their environment via signalling pathways. Without cells, life would not exist. Understanding how cells function allows us to learn more about biological processes, diseases and drugs. Cells are very small: a typical mammalian cell is about 10–20 μm in diameter, roughly a fifth of the size of the smallest object the human eye can see unaided.
Since the human eye cannot see objects of this size, let alone study them in detail, scientists use specialised equipment: the microscope. A microscope is a device that magnifies small objects that would otherwise be impossible to examine. In this doctoral thesis, we analyse images obtained with the widely used light microscope, an instrument that exploits light and its properties to magnify microscopic objects. Thanks to advances in microscopy, the field has recently seen substantial automation. The volume of microscopy data has grown rapidly, as a single automated experiment can now produce millions of images [1, 2, 3]. Automated and accurate analysis procedures are essential for studying such vast image collections. Image analysis techniques have evolved continuously; machine learning algorithms have proven especially effective and are therefore widely used in this field. Many algorithms are readily available through biological image analysis tools [4] such as ImageJ/Fiji [5], CellProfiler [6], and Ilastik [7]. Despite the widespread adoption of such tools, they do not always yield sufficiently accurate results, since applying traditional machine learning algorithms requires both engineering skill and domain knowledge [8]. The stability and reliability of these approaches are also affected by variation in signal quality and differences in imaging protocols, which are characteristic of high-throughput cell microscopy [9, 10]. Unlike traditional machine learning methods, which demand careful preprocessing of the input data, deep learning methods automatically extract relevant patterns from raw data using multi-layer computational models. In convolutional neural networks, one class of deep learning methods, filters are slid across the image to detect its key features and patterns.
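The sliding-filter operation described above can be illustrated with a minimal sketch. The image, filter values, and edge-detection interpretation below are toy examples for illustration, not code or data from the thesis:

```python
# Minimal sketch of a convolutional layer's core operation: a small
# filter is slid across an image and a dot product is taken at every
# position. Toy values only.

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a 2D list `image` with `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter responds where intensity changes left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
response = conv2d(image, edge_kernel)  # strong response along the edge
```

In a trained network, such filter values are learned from data rather than hand-designed, and many filters are stacked in layers.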
Convolutional neural networks achieve state-of-the-art results in tasks such as image classification [11, 12], object detection [13, 14] and segmentation [15, 16]. The rapid progress of deep learning has continuously generated new knowledge that could be exploited in microscopy image analysis but has not yet been fully applied to that end. This thesis began with applying the latest deep learning methods to segmenting cell nuclei, often one of the first steps in cell microscopy image processing workflows. Accurate nuclei segmentation is crucial in many biological applications. For example, abnormalities in nuclear structure and morphology are often associated with diseases such as cancer, so segmenting nuclei and studying their properties contributes to cancer diagnosis and disease monitoring [17]. Nuclei detection is also used for cell tracking, which in turn makes it possible to study the behaviour of cellular systems and how it changes under different drugs [18]. The main focus was on brightfield microscopy images, which are relatively easy to produce but difficult to inspect and analyse. We tested several state-of-the-art model architectures for nuclei segmentation and also created a new architecture, PPU-Net [19]. The evaluated models achieved varying results in segmenting nuclei from brightfield images; PPU-Net matched the then state-of-the-art models while using 20 times fewer trainable parameters, making it lighter and less complex than its competitors. We also investigated the causes of the models' unstable results on different cell types and individual images, the number of training images required, and the most common sources of error. When examining the causes of PPU-Net's segmentation errors, we found that the presence of anomalies in the images (signal that does not reflect what is expected) is a major source of error. We therefore wanted to understand the anomaly problem better.
We examined different datasets and found that anomalies come in various shapes and sizes and can substantially distort downstream analyses. To mitigate their impact, we created a framework that identifies and removes anomalies with minimal human effort, requiring only image-level labels rather than the far more laborious pixel-level labels [20]. Because anomalies are complex by nature and annotating them at the pixel level is labour-intensive and time-consuming, we proposed using only image-level labels for this task. First, we proposed using a method called ScoreCAM [21], which aims to interpret deep learning image classification algorithms by highlighting the parts or features of the image on which the model bases its decision. When a deep learning method classifies whether an image contains anomalies or not, our hypothesis was that the most influential part or feature in the model's decision would be the anomaly itself; this hypothesis was later confirmed empirically. The ScoreCAM output was then used as pseudo-labels for training a segmentation model [20]. In this way, we combine the quality of pixel-level segmentation with the convenience of acquiring image-level labels. We named the proposed framework ScoreCAM-U-Net and anticipate that the removal of unwanted objects will likely become a standard part of the processing of all large-scale microscopy experiments. Finally, we applied the acquired knowledge in a real-world context: we studied the value of deep learning methods for anomaly removal and segmentation in drug discovery research. For this, we collaborated with chemists and biologists studying one of the most prominent cell membrane receptors, M4. Despite its growing importance, developing new drugs targeting this receptor has proven difficult [22].
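The pseudo-labelling step summarised above, turning a class-activation heatmap into a pixel-level training mask, can be sketched as follows. The heatmap values and the 0.5 threshold are arbitrary choices for this illustration, not the thesis's actual settings:

```python
# Simplified illustration of deriving a pixel-level pseudo-label from a
# class-activation heatmap: high-activation pixels (presumed to cover
# the anomaly) are binarised into a mask usable for training a
# segmentation model. Toy values; threshold is an arbitrary choice.

def heatmap_to_pseudo_mask(heatmap, threshold=0.5):
    """Binarise an activation heatmap into a 0/1 pseudo-label mask."""
    return [[1 if v >= threshold else 0 for v in row] for row in heatmap]

heatmap = [
    [0.1, 0.2, 0.1],
    [0.3, 0.9, 0.8],
    [0.2, 0.7, 0.4],
]
mask = heatmap_to_pseudo_mask(heatmap)  # marks the high-activation region
```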
Our collaborators used high-affinity fluorescent ligands to study the binding interaction with the M4 receptor. This requires segmenting the cells and examining the fluorescence signal within them, whose strength depends on the affinity between the protein and the ligand and on the ligand concentration. The signal produced in the cells by the fluorescent ligands proved insufficient for a model to distinguish a cell from the background, so we decided to segment cells from the more complex but fluorescence-independent brightfield images. First, we used deep learning to segment cell bodies from the brightfield images. Next, we analysed the fluorescence signal of the cells, extracted from the corresponding fluorescence images using the cell coordinates obtained from the segmentation results. We then studied how anomaly removal affects the brightfield-derived signal arising from the receptor-ligand interaction. We showed that removing anomalies made the signal less biased and that using our model to remove them yielded a near-optimal result.

Item
Detection of changes in maps using LiDAR point clouds (Tartu Ülikool, 2020)
Fediukov, Vladyslav; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Self-driving cars are one of the most vibrant fields of robotics and applied artificial intelligence. At the current stage of the industry's development, maps are the vehicle's main source of knowledge about its surrounding environment. Providing the basic knowledge for motion planning and global navigation, they are an essential part of a self-driving car's autonomy and should therefore be kept up to date: locations that have changed need to be revisited and remapped. Change detection can be done by comparing two point clouds obtained at the same spatial location but at different points in time.
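The point-cloud comparison just described can be sketched as a simple nearest-neighbour check. This is a brute-force illustration with made-up coordinates and an arbitrary 0.5 m tolerance; the thesis's actual pipeline also filters dynamic objects and aligns the clouds first:

```python
# Sketch of comparing two (already aligned) point clouds captured at the
# same location at different times: for every point in one cloud, find
# its nearest neighbour in the other, and flag points whose distance
# exceeds a tolerance as potential changes. Brute-force, toy data.
import math

def nearest_distance(p, cloud):
    """Euclidean distance from point p to its nearest neighbour in cloud."""
    return min(math.dist(p, q) for q in cloud)

def changed_points(cloud_a, cloud_b, tolerance=0.5):
    """Points of cloud_a with no counterpart in cloud_b within tolerance."""
    return [p for p in cloud_a if nearest_distance(p, cloud_b) > tolerance]

before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
after = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0)]  # one point disappeared
removed = changed_points(before, after)     # the detected change
```

Real pipelines replace the brute-force search with spatial indexing (e.g., k-d trees) to cope with millions of LiDAR points.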
For the pairwise comparison of the point clouds, we first select the overlapping road parts, then perform the alignment, and finally calculate the distance metrics. The aim of my thesis is to develop a pipeline that detects changes along the route of an autonomous car. The pipeline includes dynamic object filtering, point cloud alignment and evaluation of their difference. The developed pipeline is evaluated on the Oxford RobotCar dataset, since it contains different traversals of the same places over many months, providing a set of significant changes on the road. To our knowledge, there have been no previous attempts to create an automated pipeline for map change detection with LiDAR point clouds. The results show that the constructed pipeline can detect significant changes from the LiDAR input data. The developed pipeline is a first step towards real-time map updates for self-driving vehicles, which will help cars operate in the city more efficiently.

Item
Exploring the Value of Weakly-Supervised Deep Learning Approaches for Artefact Segmentation in Brightfield Microscopy Images (Tartu Ülikool, 2021)
Hollo, Kaspar; Ali, Mohammed, juhendaja; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Brightfield microscopy is of great importance, as it offers researchers a relatively simple way to quantify cellular experiments. However, brightfield images often contain a variety of artefacts that should be segmented and then neutralised so that they do not affect the quantitative measurements of cellular experiments. While fully-supervised deep learning models offer state-of-the-art performance in most segmentation tasks in computer vision, it is laborious to acquire the pixel-level labels needed to train these models.
Alternatively, segmentation tasks can also be solved using more time- and cost-effective weakly-supervised deep learning models that are trained with image-level labels. In this thesis, we compare the performance of fully-supervised (e.g., U-Net) and weakly-supervised approaches (e.g., Score-CAM) to determine whether weakly-supervised approaches could serve as a cheaper but still well-performing solution for segmenting artefacts in brightfield images. Six separate experiments with various fully- and weakly-supervised approaches, image datasets and method ensembles were carried out. The results showed that, with the number of images and labels currently available, none of the weakly-supervised approaches was able to replicate the performance of the baseline fully-supervised approach. However, some of the weakly-supervised approaches, such as the combined Score-CAM and U-Net approach, showed promising segmentation results; moreover, the same approach generalised better to an unseen dataset than the baseline fully-supervised approach. Future work is required to determine the amount of weak supervision signal needed to match the performance of the fully-supervised approaches.

Item
"FiBar": a Tool for Automated Analysis of Complex Biomaterials from Microscopy Images (Tartu Ülikool, 2023)
Moor, Marilin; Fishman, Dmytro, juhendaja; Putrinš, Marta, juhendaja; Kogermann, Karin, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

The success or failure of many microbiological experiments depends on the analysis of microscopy images, be it determining the viability of bacteria by measuring the fluorescence of individual cells or evaluating the quality of a fibrous mat by assessing the distribution of individual fiber diameters. Experiments often generate large amounts of image data, leading to a heightened demand for automated image analysis tools.
This also holds true for the creation of complex biomaterials, which contain both fibrous textures and some other biocompound, such as bacteria. Additionally, manual image analysis is considered time-inefficient and biased, two issues this work aims to alleviate. This work presents the first version of "FiBar", a tool for the automated analysis of complex biomaterials. The tool consists of a fiber diameter measurement pipeline and a bacteria analysis pipeline. "FiBar" was validated against other tools as well as manual measurements taken from microscopy images, and it proved useful for speeding up the analysis while remaining relatively accurate.

Item
Improving Semantic Segmentation of Microscopy Images Using Rotation Equivariant Convolutional Networks (Tartu Ülikool, 2022)
Türk, Marten; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

The segmentation of cell nuclei is one of the first steps in the medical image analysis workflow. Organisations conducting image analysis experiments are mainly pharmaceutical companies and biomedicine laboratories, which need to process and quantify vast amounts of data; the goal of these experiments may be to develop new drugs or diagnose diseases. Thanks to advances in deep learning, nuclei segmentation has been automated with relatively high accuracy, yet new methods for improving model accuracy are constantly being proposed. One such proposal uses rotation equivariant convolutional neural networks based on group theory: these networks produce invariant predictions regardless of the rotation of the input object. This bachelor's thesis shows that rotation equivariant convolutional neural networks improve the semantic segmentation of nuclei and increase the generalisation capabilities of a model trained on fluorescent images.
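The rotation-invariance property mentioned above can be illustrated with a toy orientation-pooling example: applying a filter in all four 90-degree orientations and pooling over them gives a response that does not change when the input is rotated. This is a hand-rolled sketch of the group-convolution principle, not the thesis's actual network:

```python
# Toy illustration of rotation invariance via orientation pooling:
# convolve with all four 90-degree rotations of a filter and take the
# maximum response. Rotating the input then leaves the pooled response
# unchanged. Hand-rolled sketch, toy values.

def rot90(grid):
    """Rotate a 2D list 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*grid)][::-1]

def conv_valid(image, kernel):
    """Valid 2D cross-correlation on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def orientation_pooled_max(image, kernel):
    """Strongest response of the kernel over its four 90-degree copies."""
    best, k = None, kernel
    for _ in range(4):
        resp = conv_valid(image, k)
        peak = max(max(row) for row in resp)
        best = peak if best is None else max(best, peak)
        k = rot90(k)
    return best

image = [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]  # a vertical bar
kernel = [[0, 1], [0, 1]]                             # vertical-pair detector

# The pooled response is identical for the image and its rotation:
same = orientation_pooled_max(image, kernel) == orientation_pooled_max(rot90(image), kernel)
```

Group-equivariant networks build this idea into every layer instead of only pooling at the end, which is what makes their intermediate features equivariant rather than merely their outputs invariant.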
Additionally, the work gives an overview of unsuccessful attempts with brightfield images, surveys publicly available rotation equivariant models and describes their implementation complexity.

Item
Measuring Testis Tubule Wall Thickness in Histopathology Images (Tartu Ülikool, 2023)
Pällo, Arnel; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

One of the many causes of infertility is overly thick tubule walls in the male testis, which trap the sperm cells. In this thesis we have developed a machine-learning-powered software pipeline for analysing testis histopathology images. The software identifies the tubules and measures their wall thicknesses, allowing medical professionals to draw conclusions and/or perform additional follow-up analysis as needed. Our value proposition is a clear focus on practical application: the software is designed and trained for use on large-format (50 000 megapixel) testis tissue samples and measures specific abnormalities. It is the author's hope that the pipeline can be used by medical facilities in Estonia on real patients, providing real value and helping people.

Item
Predicting Respiratory Diseases from Lung Sounds Using Machine Learning (Tartu Ülikool, 2021)
Annilo, Richard; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Respiratory diseases are a leading cause of death worldwide. Using machine learning for diagnosis could significantly reduce costs and the radiation exposure caused by X-ray and CT scans, and improve accessibility in places with limited technology or less-experienced staff. While similar technologies have been successfully applied in the medical field before, sound signal analysis is still in its early stages and holds significant potential.
The goal of this thesis was to create a codebase that helps researchers enter and advance the field of respiratory sound analysis. In total, six experiments were conducted with four classical machine learning algorithms and one deep learning algorithm. The aim was to classify six classes (five respiratory diseases and one class for healthy patients) using a database of respiratory sounds and patient data. Test results, which used macro-averaged F1-scores as the primary evaluation metric, showed that the SVM and decision tree models worked best (scores 0.62 and 0.54), while the convolutional neural network models performed worst (best score 0.3). The differences in the models' performance were most likely affected by the dataset's noisiness and class imbalance; further research and better data would be required for any conclusive results. The source code for this thesis is publicly available in a GitHub repository [1].

Item
Reducing the Effect of Incomplete Annotations in Object Detection for Histopathology (Tartu Ülikool, 2023)
Kaliuzhnyi, Denys; Papkov, Mikhail, juhendaja; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Histopathology is a crucial component of clinical practice involving microscopic tissue examination. Typically, pathologists manually analyse tissue to locate and label structural units, cells, and organoids. The properties and quantity of these objects can indicate a patient's condition, e.g., the presence of tumours. Recent advancements in artificial intelligence (AI) have created the potential to automate this process. However, AI methods either provide limited accuracy or require a lot of densely annotated data, which is prohibitively time-consuming and expensive to produce in the histopathology domain due to high object density and labelling difficulty. In this study, we address the challenge of training object detection neural networks on histology data with incomplete annotations.
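The macro-averaged F1 score used as the primary evaluation metric in the lung-sound study above can be computed as follows: per-class F1 scores are calculated independently and then averaged with equal weight, so that small classes count as much as large ones. The labels below are made-up toy data, not the thesis's dataset:

```python
# Sketch of macro-averaged F1: compute F1 per class, then take the
# unweighted mean. Toy labels; class names are illustrative only.

def macro_f1(y_true, y_pred, classes):
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["copd", "copd", "healthy", "asthma", "asthma", "healthy"]
y_pred = ["copd", "healthy", "healthy", "asthma", "copd", "healthy"]
score = macro_f1(y_true, y_pred, classes=["copd", "asthma", "healthy"])
```

In practice this is equivalent to scikit-learn's `f1_score(..., average="macro")`.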
We demonstrate that hyperparameter tuning can mitigate the negative effects of sparsely labelled data. Additionally, we propose a novel model component called the Generalised Background Recalibration Loss to further improve detection rates; it can be adapted to a broader class of object detection models than previous solutions. Our results should facilitate the development of object detection neural networks for histology images by demonstrating the efficient use of sparsely labelled data. Our method reduces the impact of missing annotations on detection rates and thereby eases the most time-consuming aspect of data preparation for neural network training.

Item
Table2Cell: generating realistic nuclei images from numeric properties for data compression (Tartu Ülikool, 2023)
Lohvina, Anhelina; Fishman, Dmytro, juhendaja; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Arvutiteaduse instituut

Microscopy image analysis is the process of extracting quantitative information from images obtained with microscopes. It combines techniques and methods from computer vision, image processing and machine learning to identify and extract numerical features from images of biological samples such as cells. Modern software is capable of extracting a great number of these properties from images of cells. Given that there are hundreds of cells per image and thousands of images per experiment, the amount of data extracted becomes a significant computational burden. In this work, we address the problem of feature selection through image generation with neural networks. We show that by generating cell images from different sets of numeric characteristics and assessing the resulting image quality, we can decide which input parameters are essential and which can be discarded, thereby performing feature selection. We propose a novel Table2Cell model that can generate high-quality nuclei images from vectors of features.
Our results demonstrate that the generated images bear a high degree of similarity to real images and that the Table2Cell model is responsive to variations in its input parameters. This study not only addresses the issue of feature selection but also has broader implications for the field of image generation. We believe that the results of our research provide valuable insights for further research and development of this technology.
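Assessing how closely a generated image matches a real one, as discussed in the Table2Cell abstract above, is often done with simple pixel-level metrics. The abstract does not name the exact metric used; peak signal-to-noise ratio (PSNR) is shown here as an illustrative stand-in, with made-up 8-bit toy images:

```python
# One simple way to quantify generated-vs-real image similarity: PSNR
# over pixel intensities. Higher PSNR means a closer match. Illustrative
# stand-in metric with toy data, not the thesis's actual evaluation.
import math

def psnr(img_a, img_b, max_value=255):
    """PSNR between two equally sized 2D lists of pixel intensities."""
    n = len(img_a) * len(img_a[0])
    mse = sum((a - b) ** 2
              for row_a, row_b in zip(img_a, img_b)
              for a, b in zip(row_a, row_b)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

real = [[100, 110], [120, 130]]
generated = [[101, 108], [121, 129]]  # nearly identical toy image
score = psnr(real, generated)         # high PSNR = close match
```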