Rainer Mühlhoff: “Predictive Privacy, or the Risk of Secondary Use of Trained ML Models”
Rainer Mühlhoff explains the privacy risks posed by the secondary use of trained ML models.
- Speaker: Rainer Mühlhoff, University of Osnabrück, Germany
- Moderator: Klaus Staudacher, bidt, Germany
About the Event
October 17, 2023
5:00 – 6:00 PM
Big data and artificial intelligence (AI) pose a new challenge for data protection when these techniques are used to make predictions about individuals. This can affect both individuals who are not in the training data and individuals subjected to secondary uses of trained models. In this talk I will use the ethical notion of "predictive privacy" to argue that trained models are the biggest blind spot in current data protection regimes and other regulatory projects concerning AI. I argue that the mere possession of a trained model constitutes an enormous aggregation of informational power that should be the target of regulation even before the model is applied to concrete cases. This is because the model has the potential to be used and reused in different contexts with few legal or technical barriers, even as a result of theft or covert business activities. The current focus of data protection on the input stage distracts from the arguably much more serious data protection issue posed by trained models and, in practice, leads to a bureaucratic overload that damages the reputation of data protection by inviting its denigrating portrayal as an inhibitor of innovation.
Slides will be available for download after the event.