Researcher in AI Ethics
I am a researcher at University Paris-Saclay and co-founder of Giskard AI. My research lies in the field of technology ethics, and more particularly in the ethics of artificial intelligence (AI Ethics). I teach Algorithms & Public Policies at Sciences Po and Mathematics and Probability at PSL University.
In the past, I worked as a Data Scientist and AI engineer after graduating in Mathematics (Mines and ENSAE), Economics (Sciences Po / Polytechnique and PSE) and Philosophy (Sorbonne and ICP).
I work on the ethical issues raised by the deployment of artificial intelligence. My research questions are as follows:
➡ How can AI ethics be rooted in existing philosophical theories?
➡ What formal metrics can measure the biases and discrimination induced by artificial intelligence, and what are their limits?
➡ How can we experimentally measure the impact of so-called ethical tools in AI?
➡ How can the ethical expectations expressed by citizens about AI be integrated into the design of algorithms?
Data Science & AI
My master's thesis at ENSAE focused on distributed Machine Learning and the application of stochastic gradient descent to distributed files (Hadoop/HDFS). On my GitHub account you will find my various algorithmic developments as a Data Scientist, in particular implementations of AI fairness metrics applied to credit scoring, graph mining, topic modeling, blog scraping, etc.
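As an illustration, a fairness metric of the kind mentioned above can be computed in a few lines. This is a minimal sketch using demographic parity on toy credit-scoring data, not the exact code from the repository:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-decision rates between two groups.

    y_pred : 0/1 model decisions (e.g. credit granted or not)
    group  : 0/1 protected-attribute membership for each applicant
    A value of 0 means both groups are granted credit at the same rate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Toy example: 8 applicants, 4 in each group
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```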
I explored epistemological issues around artificial intelligence as part of my master's thesis in philosophy. My work focused on:
➡ The phenomenological arguments put forward by Dreyfus against the possibility of a formal AI;
➡ The epistemological foundations of interpretability in Machine Learning;
➡ The historical epistemology of the discipline of artificial intelligence.
Political science:
- Algorithms & Public Policies, M2, Master in Public Policies, School of Public Affairs:
2019/2020 and 2020/2021
- Quantitative methods for the social sciences, L1, University College:
- Analysis and evaluation of public policies, M2, Master of Public Policies, School of Public Affairs:
University Paris Dauphine:
- Corporate Strategy, Bachelor, third year:
2021/2022
- Mathematics for Artificial Intelligence, Bachelor, first year
- AI Ethics, 2022/2023
Paris Sciences & Lettres University (PSL):
- Mathematics & Probability, Multidisciplinary Cycle of Higher Studies (CPES), 2nd year:
2016/2017, 2017/2018, 2018/2019 and 2019/2020
- Statistics, Multidisciplinary Cycle of Higher Studies (CPES), 2nd year:
2016/2017, 2017/2018, 2018/2019 and 2019/2020
Paris Sorbonne University:
- Econometrics, L3, Bachelor of Economics:
- Statistics, L3, Bachelor of Economics:
BFM Business appearance on AI and Giskard
Sciences Po Paris, November 16, 2021, Jean-Marie John-Mathews
Training for public administrations on Machine Learning testing
Interministerial Directorate for Digital, Etalab, November 8, 2022, Jean-Marie John-Mathews
Summary: Why is it so important to test machine learning models? What are the challenges and difficulties? What are the different methods? In this training session for public administrations across the various ministries, we cover the theory and practice of Machine Learning testing techniques.
Conference: "How can we measure bias in AI data?"
DAMA Seminar, November 23, 2022, Jean-Marie John-Mathews
Summary: The incidents and risks generated by AI are numerous today: drops in performance, discrimination, security breaches, weakened models of responsibility, etc. Faced with these incidents, data management plays a key role in managing risk from the upstream phase of AI projects. This involves setting up AI bias tests. In this webinar, we present in detail a methodology for testing AI data, relying on the Giskard solution to demo practical cases.
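One building block of such an upstream data test can be sketched as follows. This is a minimal illustration of a representation check, assuming binary group labels; it is not the Giskard methodology itself:

```python
import numpy as np

def representation_gap(group_labels, reference_share=0.5):
    """Check whether a protected group is under-represented in a dataset.

    group_labels    : array of 0/1 protected-attribute values
    reference_share : expected share of group 1 (e.g. its share in the
                      population the model will be applied to)
    Returns the absolute gap between the observed and expected shares.
    """
    observed = np.asarray(group_labels).mean()
    return abs(observed - reference_share)

# Toy training set: group 1 makes up only 20% of the rows
labels = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0]
gap = representation_gap(labels, reference_share=0.5)
assert gap <= 0.35  # a test like this can fail the pipeline when data drifts
```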
How to test ML products?
AI Engineering Meet-up, August 1, 2022, Jean-Marie John-Mathews
Summary: How do we test ML products? Jean-Marie John-Mathews from Giskard AI presents the difficulties posed by ML testing, theoretical solutions, and a practical implementation of these tests with integration into a CI/CD pipeline.
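The kind of test discussed here can be sketched as plain assertions that a CI/CD pipeline runs before deployment. This is a minimal illustration with a hypothetical stand-in scoring function, not Giskard's actual API:

```python
def score_credit(income, debt, gender):
    """Hypothetical stand-in for a trained model's decision function."""
    return 1 if income - 0.5 * debt > 10 else 0

def test_gender_invariance():
    # Metamorphic test: flipping the protected attribute must not
    # change the decision for otherwise identical applicants.
    for income, debt in [(20, 5), (8, 2), (15, 20)]:
        assert score_credit(income, debt, "F") == score_credit(income, debt, "M")

def test_monotonic_in_income():
    # All else equal, a higher income should never lower the decision.
    assert score_credit(30, 5, "F") >= score_credit(12, 5, "F")

if __name__ == "__main__":
    test_gender_invariance()
    test_monotonic_in_income()
    print("all ML tests passed")
```

Run in CI, a failing assertion blocks the merge, turning an ethical expectation into an executable check.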
Conference: "Why is regulating AI difficult?"
Bank of France, February 18, 2022, Jean-Marie John-Mathews
Summary: Regulation in AI is difficult because finding stable normative categories is much harder than in traditional software. This is what I have tried to test experimentally and theorize in my research. One way to overcome this issue is to combine two movements: (1) Open up the AI: collect feedback, errors, and criticism. This step is essential to trigger a reflexive attitude from the AI builder. (2) Materialize the new associations: structure the feedback, build metrics, and test them. This is essential to stabilize the new findings brought by the opening step. At Giskard AI, we help AI builders collect feedback and turn it into actionable tests. It is the translation of academic research into the operational world.
Conference: "A Critical Empirical Study of Black-box Explanations in AI"
International Conference on Information Systems, December 14, 2021, Jean-Marie John-Mathews
Summary: In recent years, many applications such as autonomous cars, virtual assistants, recommendation systems, and facial or voice recognition have come to rely on Machine Learning, a sub-discipline of artificial intelligence (AI). These uses are debated and raise many ethical problems of a varied nature, such as discrimination, infringements of individual freedoms and of subjects' autonomy, and the responsibility of algorithms given their opacity. In this lecture, we present these different concerns from a technical point of view. A growing community of researchers and engineers in artificial intelligence addresses this ethical subject by offering so-called responsible AI tools. Starting from a presentation of these tools and methods, we expose their theoretical assumptions as well as their limits and contradictions. This paper raises empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing their lack of interpretability and their societal consequences. Using a representative consumer panel to test our assumptions, we report three main findings. First, we show that post-hoc explanations of black-box models tend to give partial and biased information on the underlying mechanism of the algorithm and can be subject to manipulation or information withholding by diverting users' attention. Second, we show the importance of tested behavioral indicators, in addition to self-reported perceived indicators, to provide a more comprehensive view of the dimensions of interpretability. This paper contributes to shedding new light on the theoretical debate between intrinsically transparent AI models and post-hoc explanations of black-box complex models – a debate which is likely to play a highly influential role in the future development and operationalization of AI systems.
Seminar: "The ethics of artificial intelligence in practice: challenges and limits"
Sciences Po Paris, November 16, 2021, Jean-Marie John-Mathews
Summary: While Artificial Intelligence (AI) is increasingly criticized for the ethical issues it poses, a set of tools and methods has emerged in recent years to standardize it, such as debiasing algorithms, fairness metrics, generation of explanations, etc. These methods, known as responsible AI, must adapt to algorithms that feed on increasingly granular, voluminous, and behavioral data. In this research work, we describe how AI claims to compute the world itself, without the normative categories we usually use to formulate criticism. How can we normalize an AI that claims to position itself beneath the level of norms?
Conference: "Is interpretability a solution to address discrimination in Machine Learning?"
Good In Tech Webinar, May 12, 2021, Jean-Marie John-Mathews
Summary: We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics and the interpretability of AI decisions. Can interpretable AI decisions help address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents.
Conference: "At the heart of algorithms. Scientific approach to automation, ethical, legal and societal issues"
Imagine Institute, Ethics Space, February 6, 2020, Jean-Marie John-Mathews
Video conference link
Round table video link
Conference: "Artificial intelligence: why does AI pose a problem?"
Université de droit et économie de Metz, January 17, 2020, Jean-Marie John-Mathews
Summary: In recent years, many applications such as autonomous cars, virtual assistants, recommendation systems, and facial or voice recognition use Machine Learning, a sub-discipline of artificial intelligence (AI). These uses are debated and today raise many ethical problems of a varied nature, such as discrimination, infringements of individual freedoms and of subjects' autonomy, and the responsibility of algorithms given their opacity. In this conference, we present these different concerns from a technical point of view. A growing community of researchers and engineers in artificial intelligence addresses this ethical subject by proposing so-called responsible AI tools. Starting from a presentation of these so-called responsible tools and methods, we expose their theoretical assumptions as well as their limits and contradictions.
Interview: "A dive into neural networks"
IT for Business, October 2019
Conference: "The mathematical engineer in a world of artificial intelligence"
Paris Diderot University, February 18, 2019, Jean-Marie John-Mathews
Summary: In recent years, many applications such as autonomous cars, virtual assistants, recommendation systems, and facial or voice recognition claim to "use artificial intelligence". What does that mean? A more precise look at the models inside these technologies reveals their common denominator, which is none other than a discipline within applied mathematics: machine learning. What place is there for the mathematical engineer in a world of artificial intelligence? Has their role in society changed? In this conference, we will see how the design of algorithms puts the engineer at the heart of the ethical and societal issues raised by artificial intelligence.