IBM Research recently introduced its perspective on federated learning, a machine learning paradigm in which multiple parties jointly train a single model toward a shared goal. The training data may be distributed between competitors, or across multiple geographies within one company. Participants contribute securely without ever sharing their raw data, and in return obtain models that generalize far better than any model they could train on their own.
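The core idea described above can be illustrated with federated averaging (FedAvg), a common federated learning algorithm: each party trains locally on its own data, and a central server averages only the resulting model weights. This is a minimal sketch for illustration, not IBM's implementation; the linear-regression task, client data, and all function names are assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's local training: a few gradient-descent steps on its
    own linear-regression data. The raw (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Each round: clients train locally, then the server averages the
    returned weight vectors, weighted by each client's dataset size."""
    for _ in range(rounds):
        sizes = [len(y) for _, y in clients]
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        total = sum(sizes)
        global_w = sum(n / total * w for n, w in zip(sizes, local_ws))
    return global_w

# Two hypothetical parties hold disjoint samples from the same underlying
# relation y = 3*x0 + 1*x1; neither ever transmits raw samples.
rng = np.random.default_rng(0)
true_w = np.array([3.0, 1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = federated_averaging(np.zeros(2), clients)
print(np.round(w, 2))  # recovers weights close to [3., 1.]
```

Only weight vectors cross the network here; in practice, systems like the one discussed in the episode add further protections (e.g., secure aggregation or differential privacy) so that even the shared updates leak less about any party's data.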
Links mentioned in this episode:
Private federated learning – Learn together without sharing data: https://community.ibm.com/community/user/datascience/blogs/nathalie-baracaldo1/2019/11/15/private-federated-learning-learn-together-without
Nathalie Baracaldo, IBM Research – AI Security & Privacy: https://community.ibm.com/community/user/datascience/blogs/christina-howell/2020/02/25/nathalie-baracaldo-ibm-research-ai-a