
Browsing by Author "Karamalla, Mahmoud Said Hosny Elsayed"

    FedCAPE: Federated Concept Alignment for Privacy-Preserving Explanations
(Tartu Ülikool, 2025) Karamalla, Mahmoud Said Hosny Elsayed; El Shawi, Radwa Mohamed El Emam, supervisor; Tartu Ülikool, Faculty of Science and Technology; Tartu Ülikool, Institute of Computer Science
Traditional machine learning pipelines are limited by their dependence on centralized data, making them unsuitable for privacy-sensitive domains and distributed real-world settings. Furthermore, these methods often lack concept-level interpretability and require labor-intensive manual annotation to identify and explain meaningful concepts in complex datasets. While recent automated approaches, such as that of Ghorbani et al. (NeurIPS 2019) [1], have advanced the automation of concept discovery and explanation, they do not address the challenges of privacy preservation, data decentralization, or collaborative concept alignment across multiple participants. In this thesis, we propose Federated Concept Alignment for Privacy-Preserving Explanations (FedCAPE), a novel framework designed to enable scalable, privacy-preserving, and fully decentralized concept discovery, alignment, and interpretability. FedCAPE leverages self-supervised learning (DINO) and the multimodal capabilities of OpenAI's CLIP to automatically assign semantic meaning to image segments, thereby eliminating the need for manual annotation and introducing a semantic layer over the extracted features. Critically, FedCAPE employs federated K-means clustering to collaboratively align and refine discovered concepts across clients, so that shared conceptual knowledge emerges without exchanging raw data. Through this federated approach, FedCAPE achieves end-to-end interpretability, improved concept alignment, and enhanced transparency of model predictions, surpassing both traditional and state-of-the-art automated approaches in terms of privacy preservation, scalability, and explainability. Experimental evaluation across multiple distributed clients demonstrated strong cross-client semantic consistency, with human evaluators preferring FedCAPE clusters over random baselines in 80–100% of cases.
Quantitatively, FedCAPE achieved high TCAV scores for salient concepts, in some cases exceeding the centralized baseline while avoiding over-clustering and improving cluster purity. These results highlight FedCAPE’s potential to bridge the gap between interpretable AI and privacy-preserving machine learning in distributed environments.
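The federated K-means alignment step the abstract describes (clients share only cluster statistics, never raw features) can be sketched roughly as follows. This is a minimal illustration under assumed details, not the thesis implementation; in particular, the centroid initialization from pooled data is simplified for brevity and would have to be done privately in a real deployment.

```python
import numpy as np

def local_kmeans_step(features, centroids):
    """One local step on a client's private feature vectors:
    assign each vector to its nearest centroid and return only
    per-centroid sums and counts (no raw data leaves the client)."""
    k = centroids.shape[0]
    # squared distance from every feature vector to every centroid
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = d.argmin(axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        mask = assign == j
        sums[j] = features[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def federated_kmeans(clients, k, rounds=10, seed=0):
    """Server loop: broadcast shared centroids, aggregate each
    client's (sums, counts), and update the global centroids.
    NOTE: this init pools data only for the sketch's simplicity."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(np.vstack(clients), size=k, replace=False).copy()
    for _ in range(rounds):
        total_sums = np.zeros_like(centroids)
        total_counts = np.zeros(k)
        for feats in clients:
            s, c = local_kmeans_step(feats, centroids)
            total_sums += s
            total_counts += c
        nonzero = total_counts > 0
        centroids[nonzero] = total_sums[nonzero] / total_counts[nonzero][:, None]
    return centroids
```

Because each round exchanges only centroid sums and counts, the aggregated centroids are identical to what centralized K-means would compute on the union of the clients' features, which is what makes the discovered concepts comparable across clients without moving data.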
