Abstract
The growing reliance on artificial intelligence (AI) in healthcare raises significant challenges concerning data privacy, regulatory compliance, and the practical deployment of predictive models. This review examines the integration of Federated Learning (FL) and Differential Privacy (DP) as a framework for addressing these issues in decentralized healthcare infrastructures. We conduct a comprehensive analysis of existing FL-DP frameworks, focusing on their ability to preserve privacy, maintain model performance, and operate efficiently in real-time clinical settings. The review covers architectural advances, edge computing methods, adaptive privacy budgets, and the roles of blockchain and the Internet of Medical Things (IoMT) in enabling secure data exchange. Comparative assessments and case studies are synthesized to evaluate model accuracy, scalability, and compliance with regulatory mandates. Despite substantial progress, we identify critical gaps, including ethical concerns, algorithmic fairness, data heterogeneity, and deployment obstacles. Our contributions include a benchmarking framework, a set of open research questions, and actionable guidance for building secure, fair, and scalable FL-DP systems in healthcare. The paper concludes with a roadmap for future research on privacy-preserving AI in clinical practice, highlighting its potential to improve patient care, support regulatory compliance, and enable scalable adoption across diverse healthcare environments.
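To make the FL-DP combination concrete, the sketch below shows the standard mechanism underlying differentially private federated averaging: each client's model update is clipped to a fixed L2 norm and Gaussian noise calibrated to that bound is added during aggregation. This is an illustrative simplification under assumed parameter names (`clip_norm`, `noise_multiplier`), not a description of any specific framework surveyed in this review.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate client model updates with per-client L2 clipping and
    Gaussian noise, the core mechanism of DP federated averaging.

    Clipping bounds each client's influence on the aggregate; the noise
    scale is tied to that bound, which is what yields a quantifiable
    (epsilon, delta)-DP guarantee under a suitable privacy accountant.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    mean_update = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the per-client bound,
    # divided by the number of contributing clients.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(client_updates),
                       size=mean_update.shape)
    return mean_update + noise
```

In practice the noise multiplier is chosen via a privacy accountant to meet a target privacy budget, and an adaptive budget (as discussed in this review) would vary these parameters across training rounds.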
Keywords
Federated Learning; Machine Learning; Decentralized Systems; Healthcare; Differential Privacy; Privacy-Preserving AI