Towards Efficient Machine Unlearning via Incremental View Maintenance

Abstract

Recent laws such as the GDPR require machine learning applications to ‘unlearn’ parts of their training data when a user revokes consent. Current unlearning approaches accelerate the retraining of models, but come with hidden costs due to the need to re-access the training data and redeploy the resulting models. We propose to treat machine unlearning as an ‘incremental view maintenance’ problem, leveraging existing research from the data management community on efficiently maintaining the result of a query in response to changes in its inputs. Our core idea is to consider ML models as views over their training data and to express the training procedure as a differential dataflow computation whose outputs can be updated automatically. As a consequence, the resulting models can be continuously trained over streams of updates and deletions. We discuss important limitations of this approach and provide preliminary experimental results for maintaining a state-of-the-art sequential recommendation model.
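
To illustrate the core idea, here is a minimal sketch in Rust using the timely and differential-dataflow crates (the dataflow framework the abstract refers to). It maintains per-item interaction counts as a view over a stream of training-data insertions and deletions; the counts are a deliberately simple stand-in for model state, not the paper's actual recommendation model. Retracting a user's interactions updates the view with incremental diffs instead of recomputing it from scratch.

```rust
// Assumed Cargo.toml dependencies: timely, differential-dataflow.
extern crate differential_dataflow;
extern crate timely;

use differential_dataflow::input::InputSession;
use differential_dataflow::operators::Count;

fn main() {
    // Run a single worker on the current thread.
    timely::execute_directly(move |worker| {
        // Input handle for (user, item) interaction records.
        let mut input = InputSession::<u32, (u32, u32), isize>::new();

        worker.dataflow(|scope| {
            let interactions = input.to_collection(scope);
            // The "model state" as a view: interaction counts per item,
            // maintained incrementally under insertions and deletions.
            interactions
                .map(|(_user, item)| item)
                .count()
                .inspect(|x| println!("((item, count), time, diff): {:?}", x));
        });

        // Initial training data at time 0.
        input.insert((1, 100));
        input.insert((2, 100));
        input.insert((2, 200));
        input.advance_to(1);
        input.flush();

        // User 2 revokes consent: retract their interactions. The view
        // emits correcting diffs rather than being rebuilt from scratch.
        input.remove((2, 100));
        input.remove((2, 200));
        input.advance_to(2);
        input.flush();
    });
}
```

In the paper's setting, the maintained view would be the state of the sequential recommendation model itself; the per-item counts above merely stand in for that state to keep the sketch self-contained.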

Publication
Workshop on Challenges in Deploying and Monitoring ML Systems at the International Conference on Machine Learning (ICML)