Abstract:
Federated Learning (FL) offers healthcare institutions a way to collaboratively train machine learning models without sharing raw patient data, preserving confidentiality and supporting regulatory compliance. This decentralized setting, however, introduces new security risks, most notably poisoning attacks, in which malicious clients deliberately corrupt training data or manipulate model updates to degrade global performance or embed targeted malicious behavior.
This thesis examines poisoning attacks against healthcare-oriented federated learning systems and introduces a hybrid adaptive defense that dynamically balances efficiency and resilience based on real-time threat assessment. The approach combines multiple aggregation methods with anomaly detection to counter diverse attack scenarios while preserving model performance under benign conditions; a sketch of one such aggregation round follows.
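To make the mechanism concrete, below is a minimal sketch of a single server-side aggregation round under this design. It is illustrative only: the function names (`anomaly_scores`, `hybrid_aggregate`), the choice of detector (distance from the coordinate-wise median, normalized by the median absolute deviation), the robust fallback aggregator (coordinate-wise median), and all thresholds are assumptions for exposition, not the thesis's actual implementation.

```python
import numpy as np

def anomaly_scores(updates: np.ndarray) -> np.ndarray:
    # Score each client update by its L2 distance from the
    # coordinate-wise median of all updates this round.
    center = np.median(updates, axis=0)
    return np.linalg.norm(updates - center, axis=1)

def hybrid_aggregate(updates: np.ndarray, z_cutoff: float = 3.0) -> np.ndarray:
    # updates: shape (n_clients, n_params), one flattened update per client.
    scores = anomaly_scores(updates)
    # Robust z-score via the median absolute deviation (MAD).
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    z = (scores - med) / mad
    benign = updates[z < z_cutoff]             # drop flagged clients
    if len(benign) == 0:                       # degenerate case: keep everyone
        benign = updates
    threat = 1.0 - len(benign) / len(updates)  # fraction of clients flagged
    if threat < 0.1:
        # Low assessed threat: cheap plain averaging (FedAvg-style).
        return benign.mean(axis=0)
    # Elevated threat: switch to a robust aggregator (coordinate-wise median).
    return np.median(benign, axis=0)

# Toy round: 8 honest clients plus 2 clients sending scaled, poisoned updates.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 100))
poisoned = rng.normal(5.0, 0.1, size=(2, 100))
global_update = hybrid_aggregate(np.vstack([honest, poisoned]))
print(global_update.shape)  # (100,)
```

The design point this sketch captures is the efficiency/resilience trade-off described above: under benign conditions the server pays only the cost of plain averaging, and the more expensive robust aggregation is engaged only when the anomaly detector indicates an elevated threat level.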
We conducted extensive experiments on the PathMNIST dataset, simulating adversarial poisoning attacks including label flipping, backdoor insertion, and composite strategies. The results show that the proposed defense substantially improves effectiveness in terms of accuracy, F1-score, and false negative rate while keeping computational overhead reasonable. The hybrid system degrades gracefully under attack and adapts to unforeseen conditions, making it well suited to safety-critical healthcare deployments. This work underscores the need for adaptive security mechanisms in federated environments and lays a foundation for future advances in trustworthy collaborative learning in sensitive domains.
Keywords: Federated Learning; Healthcare; Machine Learning; Poisoning Attacks; Hybrid Adaptive Defense; Aggregation Methods; Anomaly Detection; PathMNIST.