The applications of AI technology in genomic research are expanding, but centralized AI models must be trained on aggregated raw genomic data. Federated learning (FL), an inherently privacy-preserving AI approach, has emerged in response: it enables multiple institutions to jointly train machine learning models without transmitting raw genomic datasets outside the source institution, and it thereby supports compliance with the most stringent data protection laws, such as the GDPR, HIPAA, and PIPL. This paper examines the role of FL in strengthening the cybersecurity of genomic data and analyzes its potential for protecting individual genomic data privacy, beginning with Federated Averaging, a fundamental optimization algorithm for fitting decentralized models. It also covers privacy-preserving techniques such as Homomorphic Encryption (HE) and Differential Privacy (DP): HE performs computations directly on encrypted genomic data to mitigate the risk of adversarial inference attacks, while DP injects carefully calibrated noise into model updates to reduce the risk of data reconstruction and membership inference attacks. In addition, the paper discusses the ethical and regulatory aspects of genomic FL, including data ownership, the cybersecurity hazards of training on sensitive data, and the trustworthiness of the resulting models. It also draws attention to critical future research areas: hybrid encryption methods, quantum-safe cryptographic protocols, and interoperability standards to improve the effectiveness of FL across heterogeneous genomic data. Together, these developments point toward a privacy-preserving transformation of genomic AI research through FL.
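To make the Federated Averaging and noisy-update ideas above concrete, the following is a minimal sketch of one FedAvg round with optional Gaussian noise added to client updates as a simple differential-privacy mechanism. The function names, the toy least-squares objective, and the `dp_sigma` parameter are illustrative assumptions, not the paper's actual implementation; a real genomic FL system would use a proper model, clipping, and a calibrated privacy budget.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    least-squares objective (a toy stand-in for a genomic model).
    Raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data, dp_sigma=0.0, rng=None):
    """One Federated Averaging round: each client trains locally,
    optionally perturbs its update with Gaussian noise (a simple
    DP mechanism), and the server averages the resulting models
    weighted by client sample counts."""
    rng = rng or np.random.default_rng(0)
    total = sum(len(y) for _, y in client_data)
    new_w = np.zeros_like(global_w)
    for X, y in client_data:
        w = local_update(global_w, X, y)
        update = w - global_w
        if dp_sigma > 0:
            update = update + rng.normal(0.0, dp_sigma, size=update.shape)
        new_w += (len(y) / total) * (global_w + update)
    return new_w
```

Run over several rounds, the averaged model converges toward the solution each client would reach on the pooled data, while only model updates (never raw records) cross institutional boundaries; setting `dp_sigma > 0` trades some accuracy for reduced reconstruction and membership-inference risk.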
Author(s) Details
Alex Mathew
Bethany College, USA.
Hannah Alex
University of Pittsburgh, USA.
Please see the book here: https://doi.org/10.9734/bpi/stda/v9/5116