Recent Publications

All Publications

(2021). Screening Native Machine Learning Pipelines with ArgusEyes. Conference on Innovative Data Systems Research (CIDR, abstract).

(2021). Understanding and Mitigating the Effect of Outliers in Fair Ranking. ACM International Conference on Web Search and Data Mining (WSDM).

(2021). Responsible Data Management. Communications of the ACM.

(2021). Efficiently Maintaining Next Basket Recommendations under Additions and Deletions of Baskets and Items. Workshop on Online Recommender Systems and User Modeling at ACM RecSys.


(2021). Understanding Multi-channel Customer Behavior in Retail. ACM Conference on Information and Knowledge Management (CIKM).

(2021). DuckDQ: Data Quality Assertions for Machine Learning Pipelines. Workshop on Challenges in Deploying and Monitoring ML Systems at the International Conference on Machine Learning (ICML).


(2021). Towards Efficient Machine Unlearning via Incremental View Maintenance. Workshop on Challenges in Deploying and Monitoring ML Systems at the International Conference on Machine Learning (ICML).


(2021). Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression. ACM SIGKDD.



PhD Students
Mozhdeh Ariannezhad (with Maarten de Rijke)
Olivier Sprangers (with Maarten de Rijke)
Arezoo Sarvi (with Maarten de Rijke)
Barrie Kersbergen (with Maarten de Rijke)
Stefan Grafberger (with Paul Groth)
Research Engineers, Associated Researchers & Master Students
Shubha Guha
Till Doehmen
Dr. Ji Zhang
Benjamin Wang



Machine learning applications are increasingly used to automate impactful decisions, yet at the same time they can be very brittle with respect to their input data, which raises concerns about the correctness, reliability, and fairness of such applications. mlinspect is a library that helps data scientists diagnose and mitigate technical bias that arises during data preprocessing in an ML pipeline. mlinspect instruments natively written Python ML code that uses libraries such as pandas or scikit-learn, and automatically applies several inspections to the intermediate results of the preprocessing operations.
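To illustrate the kind of technical bias mlinspect is designed to surface, consider how a routine preprocessing step can silently skew group proportions. The sketch below uses plain pandas on hypothetical toy data; it deliberately does not use mlinspect's own API:

```python
import pandas as pd

# Toy data: the 'income' feature is missing more often for group B.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "income": [50, 60, None, 70, None, None, 40, None],
})

# Group proportions before preprocessing: 50% A, 50% B.
before = df["group"].value_counts(normalize=True)

# A seemingly harmless preprocessing step: drop rows with missing values.
cleaned = df.dropna()

# Group proportions afterwards: group B is now heavily underrepresented.
after = cleaned["group"].value_counts(normalize=True)

print(before.to_dict())  # {'A': 0.5, 'B': 0.5}
print(after.to_dict())   # {'A': 0.75, 'B': 0.25}
```

An inspection over the intermediate results of the `dropna` operation can detect such a proportion shift automatically, which is exactly the style of check mlinspect applies to instrumented pipelines.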




PGBM (‘Probabilistic Gradient Boosting Machines’) is a probabilistic gradient boosting framework in Python based on PyTorch, developed by Olivier Sprangers. It is aimed at users interested in solving large-scale tabular probabilistic regression problems, such as probabilistic time series forecasting. PGBM provides several advantages over existing frameworks, e.g., probabilistic regression estimates instead of point estimates only, auto-differentiation of custom loss functions, and native GPU acceleration.
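To make the difference between point estimates and probabilistic estimates concrete, the sketch below mimics the output of a probabilistic regressor with hypothetical per-example means and standard deviations (plain stdlib Python, not PGBM's actual API):

```python
from statistics import NormalDist

# A point-estimate model returns a single number per example; a probabilistic
# model returns a predictive distribution. Here we stand in for the latter
# with a hypothetical predicted mean and standard deviation per example.
predictions = [
    {"mean": 100.0, "std": 10.0},  # e.g., forecasted demand for item 1
    {"mean": 40.0,  "std": 2.0},   # item 2: same output, less uncertain
]

def quantile(pred, q):
    """Derive a quantile estimate from the predictive distribution."""
    return NormalDist(pred["mean"], pred["std"]).inv_cdf(q)

# An 80% prediction interval instead of a single point estimate.
for p in predictions:
    lo, hi = quantile(p, 0.1), quantile(p, 0.9)
    print(f"point={p['mean']:.1f}  80% interval=[{lo:.1f}, {hi:.1f}]")
```

Such intervals are what make probabilistic regression useful in practice, e.g., for stocking decisions based on an upper demand quantile rather than a mean forecast.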




Snapcase is a research prototype for a recommender system that can instantly “forget” user data (e.g. in response to GDPR deletion requests) and update its recommendations accordingly. Snapcase models various recommendation algorithms via differential computation and is implemented in Rust on top of Differential Dataflow.


  • [Not released yet]
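The core idea behind instant forgetting via differential computation can be sketched in a few lines: if the recommender's state is an incrementally maintained view, such as item co-occurrence counts, then deleting a user's basket only requires applying the corresponding decrements instead of recomputing the model from scratch. The toy Python below is a simplified stand-in for this idea, not Snapcase's actual Rust/Differential Dataflow implementation:

```python
from collections import Counter
from itertools import combinations

# Incrementally maintained view: pairwise item co-occurrence counts.
cooccurrences = Counter()

def add_basket(basket):
    # Incorporate a new basket by incrementing the affected pair counts.
    for a, b in combinations(sorted(basket), 2):
        cooccurrences[(a, b)] += 1

def forget_basket(basket):
    # "Unlearn" a basket (e.g., after a GDPR deletion request) by applying
    # the negation of its original update.
    for a, b in combinations(sorted(basket), 2):
        cooccurrences[(a, b)] -= 1

add_basket(["milk", "bread", "eggs"])
add_basket(["milk", "bread"])
print(cooccurrences[("bread", "milk")])  # 2

forget_basket(["milk", "bread"])         # the user asks to be forgotten
print(cooccurrences[("bread", "milk")])  # 1
```

Differential Dataflow generalizes this pattern to arbitrary dataflow computations, so downstream recommendations are updated consistently as soon as the deletion is applied.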




Scientific Career

Before joining the University of Amsterdam, I was a Faculty Fellow at the Center for Data Science at New York University and a Senior Applied Scientist at Amazon Research in Berlin, where I worked on data management issues in machine learning applications, such as demand forecasting, metadata and provenance tracking for machine learning pipelines, and automated data quality verification.

I received my Ph.D. summa cum laude from TU Berlin in 2015, where I was advised by Volker Markl, head of the database systems and information management group. My co-supervisors were Klaus-Robert Müller from the machine learning group at TU Berlin and Reza Zadeh from Stanford. During my studies, I interned with the SystemML group at IBM Research Almaden and the social recommendations team at Twitter in California.

Open Source

I have been engaged in open source as an elected member of the Apache Software Foundation since 2012. I have been involved in Apache Mahout, Apache Flink, and Apache Giraph, as well as the incubation of the Apache MXNet and Apache TVM projects. Besides that, I co-created deequ, a library for ‘unit-testing’ large datasets with Apache Spark. Furthermore, I have been a member of the Electronic Frontier Foundation since 2015.

Scientific Service

I am the founder and chair of the workshop series on Data Management for End-To-End Machine Learning (DEEM) at ACM SIGMOD, which started in 2017. I serve as the editor for two issues of the IEEE Data Engineering Bulletin in 2021 and 2022, as Associate Editor for PVLDB Volume 15, and as co-chair of the industry and applications track of EDBT 2022.

I regularly review submissions to top-tier data management conferences. I have been on the program committees of SIGMOD 2017 and 2019-2022, VLDB 2021, ICDE 2018-2021, EDBT 2017 & 2021, CIKM’20, the PhD Symposium at VLDB’21, the workshop on Exploiting Artificial Intelligence Techniques for Data Management at SIGMOD 2019, the Large-Scale Recommender Systems workshop at ACM RecSys 2013-2015, the workshop on Applied AI for Database Systems and Applications at VLDB’20, and Provenance Week’20. Additionally, I have reviewed journal submissions for IEEE TKDE, ACM TIST, IEEE TPDS, IEEE TNNLS, the VLDB Journal, the VLDB Journal Special Issue on Data Science for Responsible Data Management, the journal track of ECML/PKDD, and the open source track of JMLR. I am also part of the review board of the Journal of Systems Research (JSys), and have been a reviewer for the Amazon Research Awards.

At the University of Amsterdam, I coordinate the honors program for the bachelor AI.


I am reachable via email at s.schelter[at]. I am also very active on Twitter as @sscdotopen. Most of the research code that I write is available under an open source license in my GitHub account. Last but not least, I also have a profile on Google Scholar.