HedgeCut: Maintaining Randomised Trees for Low-Latency Machine Unlearning


Software systems that learn from user data with machine learning (ML) have become ubiquitous in recent years. Recent legislation such as the General Data Protection Regulation (GDPR) requires organisations that process personal data to delete user data upon request (enacting the "right to be forgotten"). This regulation, however, requires not only the deletion of user data from databases, but also applies to ML models that have been trained on the stored data. We therefore argue that ML applications should offer users the ability to unlearn their data from trained models in a timely manner. We explore how fast such unlearning can be done under the constraints imposed by real-world deployments, and introduce the problem of low-latency machine unlearning: maintaining a deployed ML model in place under the removal of a small fraction of training samples, without retraining. We propose HedgeCut, a classification model based on an ensemble of randomised decision trees, which is designed to answer unlearning requests with low latency. We detail how to efficiently implement HedgeCut with vectorised operators for decision tree learning. We conduct an experimental evaluation on five privacy-sensitive datasets, where we find that HedgeCut unlearns training samples with a latency of around 100 microseconds and answers up to 36,000 prediction requests per second, while providing training time and predictive accuracy comparable to widely used implementations of tree-based ML models such as Random Forests.
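The vectorised operators for decision tree learning mentioned above can be illustrated with a small sketch: scoring many candidate split thresholds for a feature in one batch of array operations, instead of looping over thresholds one at a time. This is not HedgeCut's actual implementation; the function name and the NumPy-based formulation below are illustrative assumptions.

```python
import numpy as np

def gini_for_thresholds(feature, labels, thresholds):
    """Vectorised evaluation of candidate split thresholds for one feature.

    Returns the weighted Gini impurity of every candidate split at once,
    using boolean masks and matrix products instead of a per-threshold loop.
    (Illustrative sketch; not the HedgeCut implementation.)
    """
    # left[i, j] is True if sample j falls on the left side of threshold i
    left = feature[None, :] <= thresholds[:, None]                 # (T, N)
    n = labels.size
    n_left = left.sum(axis=1)                                      # (T,)
    n_right = n - n_left

    # one-hot class indicators, so class counts per split become a matmul
    classes = np.unique(labels)
    onehot = (labels[None, :] == classes[:, None]).astype(float)   # (C, N)
    left_counts = onehot @ left.T.astype(float)                    # (C, T)
    right_counts = onehot.sum(axis=1, keepdims=True) - left_counts # (C, T)

    def gini(counts, totals):
        totals = np.maximum(totals, 1)  # guard against empty partitions
        p = counts / totals
        return 1.0 - (p ** 2).sum(axis=0)

    # impurity of each candidate split, weighted by partition sizes
    return (n_left * gini(left_counts, n_left)
            + n_right * gini(right_counts, n_right)) / n
```

A randomised-tree learner in the spirit of extremely randomised trees would draw a few random thresholds per feature and pick the one with the lowest returned impurity; the batched formulation keeps that inner loop in array operations.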