Recent Publications


(2021). Data Distribution Debugging in Machine Learning Pipelines. The VLDB Journal — The International Journal on Very Large Data Bases (Special Issue on Data Science for Responsible Data Management).

(2021). Parameter Efficient Deep Probabilistic Forecasting. International Journal of Forecasting.

(2021). Screening Native Machine Learning Pipelines with ArgusEyes. Conference on Innovative Data Systems Research (CIDR, abstract).


(2021). Understanding and Mitigating the Effect of Outliers in Fair Ranking. ACM International Conference on Web Search and Data Mining (WSDM).

(2021). Responsible Data Management. Communications of the ACM.

(2021). Efficiently Maintaining Next Basket Recommendations under Additions and Deletions of Baskets and Items. Workshop on Online Recommender Systems and User Modeling at ACM RecSys.


(2021). Understanding Multi-channel Customer Behavior in Retail. ACM Conference on Information and Knowledge Management (CIKM).


(2021). DuckDQ: Data Quality Assertions for Machine Learning Pipelines. Workshop on Challenges in Deploying and Monitoring ML Systems at the International Conference on Machine Learning (ICML).


Team

PhD Students
Mozhdeh Ariannezhad (co-supervised with Maarten de Rijke)
Olivier Sprangers (co-supervised with Maarten de Rijke)
Arezoo Sarvi (co-supervised with Maarten de Rijke)
Barrie Kersbergen (co-supervised with Maarten de Rijke)
Stefan Grafberger (co-supervised with Paul Groth)
Research Engineers, Associated Researchers & Master Students
Shubha Guha
Till Doehmen
Dr. Ji Zhang
Benjamin Wang

Projects

mlinspect

Machine learning applications are increasingly used to automate impactful decisions, yet they can be very brittle with respect to their input data, which raises concerns about the correctness, reliability, and fairness of such applications. mlinspect is a library that helps data scientists diagnose and mitigate technical bias that arises during data preprocessing in an ML pipeline. It instruments ML code written natively in Python with libraries such as pandas and scikit-learn, and automatically applies several inspections to the intermediate results of preprocessing operations.
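
The snippet below sketches how a preprocessing pipeline might be screened with mlinspect. It follows the usage pattern from the project's documentation, but the exact class names and module paths (e.g., PipelineInspector, NoBiasIntroducedFor) should be verified against the released version; pipeline.py stands in for an arbitrary native pandas/scikit-learn pipeline.

```python
# Sketch of screening an ML pipeline with mlinspect.
# Names follow the project's documented usage and may differ between releases.
from mlinspect import PipelineInspector
from mlinspect.inspections import MaterializeFirstOutputRows
from mlinspect.checks import NoBiasIntroducedFor, NoIllegalFeatures

inspector_result = (
    PipelineInspector
    .on_pipeline_from_py_file("pipeline.py")        # native pandas/sklearn code
    .add_check(NoBiasIntroducedFor(["age_group"]))  # flag demographic group shifts
    .add_check(NoIllegalFeatures())                 # e.g., protected attributes as features
    .add_required_inspection(MaterializeFirstOutputRows(5))
    .execute()
)

# The result exposes the extracted dataflow DAG and per-operator inspection/check outputs.
dag = inspector_result.dag
check_results = inspector_result.check_to_check_results
```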



PGBM

PGBM (‘Probabilistic Gradient Boosting Machines’) is a probabilistic gradient boosting framework in Python based on PyTorch, developed by Olivier Sprangers. It is aimed at users interested in solving large-scale tabular probabilistic regression problems, such as probabilistic time series forecasting. PGBM provides several advantages over existing frameworks, e.g., probabilistic regression estimates instead of only point estimates, auto-differentiation of custom loss functions, and native GPU acceleration.
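
As an illustration, the sketch below trains PGBM on a standard regression dataset and obtains both point estimates and samples from the predictive distribution. It is modelled on the examples in the PGBM repository; the import path and method signatures (train, predict, predict_dist) may differ between releases, so treat it as a sketch rather than a definitive usage example.

```python
# Sketch of probabilistic regression with PGBM (API details may vary by version,
# e.g., newer releases import the PyTorch backend via `from pgbm.torch import PGBM`).
import torch
from pgbm import PGBM
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

# Custom mean-squared-error objective: returns gradient and hessian w.r.t. the prediction.
def mseloss_objective(yhat, y, sample_weight=None):
    gradient = yhat - y
    hessian = torch.ones_like(yhat)
    return gradient, hessian

# Evaluation metric: root mean squared error.
def rmseloss_metric(yhat, y, sample_weight=None):
    return (yhat - y).pow(2).mean().sqrt()

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = PGBM()
model.train((X_train, y_train), objective=mseloss_objective, metric=rmseloss_metric)

yhat_point = model.predict(X_test)       # point estimates
yhat_dist = model.predict_dist(X_test)   # samples from the learned predictive distribution
```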



Snapcase

Snapcase is a research prototype for a recommender system that can instantly “forget” user data (e.g. in response to GDPR deletion requests) and update its recommendations accordingly. Snapcase models various recommendation algorithms via differential computation and is implemented in Rust on top of Differential Dataflow.

Code:

  • [Not released yet]
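
Since the Snapcase code has not been released yet, the toy Python sketch below only illustrates the underlying idea and is not Snapcase's actual Rust/Differential Dataflow implementation: recommendations derived from incrementally maintained co-occurrence counts can be updated under both additions and deletions of baskets, so “forgetting” a user's data immediately changes what is recommended.

```python
# Conceptual illustration only (hypothetical code, not part of Snapcase):
# incrementally maintained item co-occurrences that support deletions.
from collections import defaultdict
from itertools import combinations

class IncrementalCooccurrence:
    """Toy co-occurrence index that supports additions and deletions of baskets."""

    def __init__(self):
        self.counts = defaultdict(int)  # (item_a, item_b) -> co-occurrence count

    def _update(self, basket, delta):
        for a, b in combinations(sorted(set(basket)), 2):
            self.counts[(a, b)] += delta
            if self.counts[(a, b)] <= 0:
                del self.counts[(a, b)]

    def add_basket(self, basket):
        self._update(basket, +1)

    def delete_basket(self, basket):
        """'Forget' a basket, e.g., after a GDPR deletion request."""
        self._update(basket, -1)

    def recommend(self, item, k=3):
        scores = defaultdict(int)
        for (a, b), count in self.counts.items():
            if a == item:
                scores[b] += count
            elif b == item:
                scores[a] += count
        return sorted(scores, key=scores.get, reverse=True)[:k]

index = IncrementalCooccurrence()
index.add_basket(["milk", "bread", "butter"])
index.add_basket(["milk", "bread"])
print(index.recommend("milk"))                     # ['bread', 'butter']

index.delete_basket(["milk", "bread", "butter"])   # user requests deletion
print(index.recommend("milk"))                     # ['bread']
```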



CV

Scientific Career

Before joining the University of Amsterdam, I was a Faculty Fellow at the Center for Data Science at New York University and a Senior Applied Scientist at Amazon Research in Berlin, where I worked on data management-related issues in machine learning applications, such as demand forecasting, metadata and provenance tracking for machine learning pipelines, and automated data quality verification.

I received my Ph.D. summa cum laude from TU Berlin in 2015, where I was advised by Volker Markl, head of the Database Systems and Information Management group. My co-supervisors were Klaus-Robert Müller from the machine learning group at TU Berlin and Reza Zadeh from Stanford. During my studies, I interned with the SystemML group at IBM Research Almaden and the social recommendations team at Twitter in California.

Open Source

I have been engaged in open source as an elected member of the Apache Software Foundation since 2012. I have been involved in the Apache Mahout, Apache Flink, and Apache Giraph projects, as well as the incubation of Apache MXNet and Apache TVM. Besides that, I co-created deequ, a library for ‘unit-testing’ large datasets with Apache Spark. Furthermore, I have been a member of the Electronic Frontier Foundation since 2015.

Scientific Service

I am the founder and chair of the workshop series on Data Management for End-To-End Machine Learning (DEEM) at ACM SIGMOD, which started in 2017. I serve as the editor for two issues of the IEEE Data Engineering Bulletin in 2021 and 2022, as Associate Editor for PVLDB Volume 15, and as co-chair of the industry and applications track of EDBT 2022.

I regularly review submissions to top-tier data management conferences. I have been on the program committees of SIGMOD 2017 and 2019-2022, VLDB 2021, ICDE 2018-2021, EDBT 2017 & 2021, CIKM 2020, the PhD Symposium at VLDB 2021, the workshop on Exploiting Artificial Intelligence Techniques for Data Management at SIGMOD 2019, the Large-Scale Recommender Systems workshop at ACM RecSys 2013-2015, the workshop on Applied AI for Database Systems and Applications at VLDB 2020, and Provenance Week 2020. Additionally, I have reviewed journal submissions for IEEE TKDE, ACM TIST, IEEE TPDS, IEEE TNNLS, the VLDB Journal, the VLDB Journal Special Issue on Data Science for Responsible Data Management, the journal track of ECML/PKDD, and the open source track of JMLR. I am also part of the review board of the Journal of Systems Research (JSys) and have been a reviewer for the Amazon Research Awards.

At the University of Amsterdam, I coordinate the honors program for the bachelor AI.

Contact

I’m reachable via email at s.schelter[at]uva.nl. I’m also quite active on Twitter as @sscdotopen. Most of the research code that I write is available under an open source license on my GitHub account. Last but not least, I also have a Google Scholar profile.