Browse by Author "Hasanova, Gandab"
Record: FLBench - A Comprehensive Experimental Evaluation of Federated Learning Frameworks (Tartu Ülikool, 2024) Hasanova, Gandab; Awaysheh, Feras Mahmoud Naji, supervisor; Tartu Ülikool, Faculty of Science and Technology; Tartu Ülikool, Institute of Computer Science

Federated learning (FL) is an innovative approach to collaborative machine learning that allows several decentralized organizations to cooperatively train a shared model without disclosing their own data. With increasingly strict data privacy regulations such as the GDPR in force, federated learning has become an essential technique to adopt. For example, with FL, hospitals can train models on patient data from multiple institutions to improve diagnostic accuracy while keeping sensitive information out of a central repository. Similarly, FL can strengthen security in financial systems by keeping customer data within each institution's own databases. However, the federated learning domain is changing rapidly, and new frameworks keep emerging as the landscape of open-source tools grows. This growth leaves even experienced researchers uncertain about the trade-offs among these frameworks and when to use which one. This study addresses that issue by comprehensively examining six popular federated learning frameworks: NVIDIA FLARE, Flower, FedML, TensorFlow Federated (TFF), FEDn, and Substra. We systematically compare and analyze these frameworks by running Federated Averaging (FedAvg) on a Convolutional Neural Network (CNN) model trained on the CIFAR-10 dataset. The performance analysis covers the following metrics: loss, accuracy, total training time, CPU and RAM consumption, and network utilization during training.
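The FedAvg algorithm evaluated across all six frameworks can be sketched as a single server-side aggregation step: each client trains the CNN locally, and the server averages the resulting parameters weighted by each client's number of training samples. The sketch below is illustrative only; the function name and flat-list parameter representation are assumptions, not the thesis's actual implementation.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation sketch: average each model parameter across
    clients, weighting client i by its share of the total training data.

    client_weights: list of per-client parameter vectors (flat lists of floats)
    client_sizes:   list of per-client training-sample counts
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        # Sum each client's i-th parameter, scaled by its data share n/total.
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Toy example: two clients, two parameters; client 2 holds 3x the data,
# so the aggregate lands 3/4 of the way toward client 2's weights.
clients = [[0.0, 0.0], [4.0, 8.0]]
aggregated = fedavg(clients, [1, 3])
print(aggregated)  # -> [3.0, 6.0]
```

In a real deployment, the frameworks under study wrap this step in round orchestration, secure communication, and client sampling, which is where much of their performance divergence arises.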
To support our claims with empirical evidence and offer a complete view, we ran the experiments at different client counts (1, 10, 50, 100), which showed how each framework scales. Key results of our research are: FedML achieved the highest accuracy, 91% with 100 clients, but had longer training times. Flower combined high accuracy with the shortest training times, which makes it suitable for production environments. NVIDIA FLARE showed high CPU utilization and good overall performance. TensorFlow Federated and Substra performed consistently across client counts. FEDn had the lowest accuracy but showed potential for settings with limited computational resources. This study contributes to the literature by providing an overview and comparison of federated learning frameworks that can inform a choice driven by use-case priorities and resource constraints. Our results suggest that when picking a framework, one should consider not only its performance on general evaluation metrics but also factors such as scalability, customizability, and the resource constraints of the application scenario. These findings are intended to guide industry practitioners and researchers in making informed decisions when implementing a federated learning solution, by providing insights into each framework's capabilities, trade-offs, and implications across various use cases.