Browse by Author "Zeynalli, Ali"
Cell Cycle Phase Classification from Microscopy Images (Tartu Ülikool, 2025)
Zeynalli, Ali; Fishman, Dmytro, supervisor
Tartu Ülikool, Faculty of Science and Technology; Institute of Computer Science

Accurate classification of cell cycle phases is essential for cancer research and drug discovery. While fluorescence microscopy provides high-contrast, biomarker-specific signals that support precise classification, it relies on staining protocols that limit scalability and compromise cell viability. In contrast, bright-field microscopy offers a label-free, cost-effective alternative but poses challenges due to its lower contrast. This study compares five computational strategies for cell cycle phase classification using fluorescence and bright-field microscopy: traditional feature-based classification, segmentation-based classification, mask-guided classification via segmentation, nuclei patch classification, and nuclei patch classification via segmentation. Results show that fluorescence images support near-perfect classification performance across all methods. For bright-field images, the highest balanced accuracy of 0.770 was achieved using a nuclei patch classification approach with a ResNet-50 backbone, followed closely by mask-guided classification. These findings highlight the potential of deep learning models for accurate cell cycle classification in bright-field microscopy, advancing the potential for scalable applications in biomedical research.

Computer vision meets microbiology: deep learning algorithms for classifying cell treatments in microscopy images (Tartu Ülikool, 2023)
Zeynalli, Ali; Fishman, Dmytro, supervisor

Cell classification is one of the most complex challenges in cellular research, with significant importance for personalised medicine, cancer diagnostics and disease prevention.
The accurate classification of cells based on their unique characteristics provides valuable insights into a patient's health status and helps guide treatment decisions. Thanks to recent technological advancements, cellular research has increasingly adopted deep learning, which has become a valuable tool for tackling complicated tasks such as cell classification. In this study, we explored the capability of state-of-the-art deep learning models such as ResNet, ViT and Swin Transformer to automatically classify brightfield and fluorescent microscopy images across single and multiple channels into four cell treatments: Palbociclib, MLN8237, AZD1152, and CYC116. The results revealed that Swin Transformer surpasses the other models for cell treatment classification on multi-channel fluorescent and brightfield images, achieving the highest accuracies of 86% and 59%, respectively. However, the highest accuracy achieved on single-channel brightfield images was 61%, using the ResNet-50 model. Previous research has shown that combining multiple channels yields better performance, which motivates further investigation into the capacity of deep learning models for automating cell treatment classification on single- and multi-channel brightfield microscopy images.
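The first thesis reports its headline result as balanced accuracy (0.770 on bright-field images), which averages per-class recall and is therefore robust to class imbalance across cell cycle phases, unlike plain accuracy. A minimal sketch of the metric, assuming four hypothetical phase labels (the function name and example labels are illustrative, not taken from the theses):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally,
    regardless of how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Hypothetical cell cycle phase labels (G1, S, G2, M)
y_true = ["G1", "G1", "S", "S", "G2", "G2", "M", "M"]
y_pred = ["G1", "G1", "S", "G2", "G2", "G2", "M", "G1"]
print(balanced_accuracy(y_true, y_pred))  # (1.0 + 0.5 + 1.0 + 0.5) / 4 = 0.75
```

On a heavily imbalanced set the difference from plain accuracy becomes clear: a classifier that predicts the majority class for 9-to-1 data scores 90% plain accuracy but only 0.5 balanced accuracy, since the minority class's recall is zero.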