My teaching activities


Designing computer systems that are efficient for emerging AI, ML and quantum workloads in terms of speed, accuracy, energy, size and cost has become extremely time-consuming, ad hoc and error-prone due to rapidly evolving software and hardware, a vast design and optimization space, and very complex interactions between all components. There is also no common methodology or tooling to reproduce empirical results from the numerous papers that often do not even include all of their code and data!

During my academic research (1999-2009, 2012-2014) I developed open-source technology and methodology to enable self-optimizing computer systems by combining machine learning, autotuning, run-time adaptation, experiment crowdsourcing and knowledge management. Since 2008 I have worked with the community to lay the foundations for collaborative, reproducible and trustworthy computer systems research, and to introduce a new publication model where experimental results and all related artifacts (code, data, models and experimental workflows) are continuously shared, validated, improved and built upon. To lead by example, I share all my code, data, models and experimental results in a reusable form using the cTuning framework (2008-2010), Collective Mind (2012-2014) and cKnowledge.io (2015-2019).

I prepared and taught an advanced MS course on this technology and methodology at Paris South University, where I was a guest lecturer from 2007 to 2010.

In 2015 I started developing the open-source Collective Knowledge framework (CK) to automate the sharing and reuse of research artifacts and workflows. You can find related educational resources here, as well as a CK-based interactive article prepared in collaboration with the Raspberry Pi Foundation.
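To give a flavour of the automation CK aims for, below is a minimal sketch of querying shared artifacts through the CK Python API. It assumes the CK package is installed (pip install ck) and uses its ck.access entry point; the module name and tags in the query are hypothetical and only illustrate the intended usage.

    import ck.kernel as ck

    # Search shared CK entries by tags; the module name ('dataset') and the tags
    # below are hypothetical and used purely for illustration.
    r = ck.access({'action': 'search',
                   'module_uoa': 'dataset',
                   'tags': 'image,jpeg'})

    if r['return'] > 0:
        # CK convention: a non-zero 'return' signals an error described in 'error'
        print('CK error: ' + r.get('error', ''))
    else:
        # Print the identifiers of all matching entries
        for entry in r.get('lst', []):
            print(entry.get('data_uoa', ''))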

As a community service, I continue to help conferences, journals and digital libraries set up artifact evaluation and enable reproducible research.