Data Scientist & AI Ethics
I am a Data Scientist and co-founder of Giskard AI. My research lies in the field of technology ethics, in particular the ethics of artificial intelligence (AI ethics). I teach Algorithms & Public Policies at Sciences Po and Mathematics and Probability at PSL University.
Previously, I worked as a Data Scientist and AI engineer after graduating in Mathematics (Mines and ENSAE), Economics (Sciences Po / Polytechnique and PSE), and Philosophy (Sorbonne and ICP).
I work on the ethical issues created by the implementation of artificial intelligence. My research questions are as follows:
➡ How can AI ethics be rooted in existing philosophical theories?
➡ What formal metrics can measure the biases and discrimination induced by artificial intelligence, and what are their limits?
➡ How can we experimentally measure the impact of so-called ethical tools in AI?
➡ How can the ethical expectations citizens express about AI be integrated into the design of algorithms?
Data Science & AI
My master's thesis at ENSAE focused on distributed Machine Learning and the application of stochastic gradient descent on distributed file systems (Hadoop/HDFS). On my GitHub account you will find my various algorithmic developments as a Data Scientist, in particular implementations of AI fairness metrics applied to credit scoring, graph mining, topic modeling, blog scraping, etc.
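To give an idea of what such fairness metrics look like, here is a minimal sketch of two standard group-fairness measures (demographic parity gap and disparate impact ratio) on toy credit-scoring decisions. The data, function names, and thresholds are illustrative assumptions, not the actual code from my repositories.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in approval rates between two groups (0/1 labels)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred, group):
    """Ratio of approval rates; the 'four-fifths rule' flags values below 0.8."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy credit decisions: 1 = loan approved, 0 = denied
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

print(round(demographic_parity_gap(y_pred, group), 3))   # 0.2 (60% vs 40% approval)
print(round(disparate_impact_ratio(y_pred, group), 3))   # 0.667, below the 0.8 rule
```

Such metrics are easy to compute but, as discussed below, choosing between them is itself a normative decision.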
As part of my master's thesis in philosophy, I studied epistemological issues around artificial intelligence. My work focused on:
➡ The phenomenological arguments put forward by Dreyfus against the possibility of a formal AI;
➡ The epistemological foundations of interpretability in Machine Learning;
➡ The historical epistemology of the discipline of artificial intelligence.
Sciences Po Paris:
- Algorithms & Public Policies, M2, Master in Public Policies, School of Public Affairs:
2019/2020 and 2020/2021
- Quantitative methods for the social sciences, L1, University College:
- Analysis and evaluation of public policies, M2, Master of Public Policies, School of Public Affairs:
University Paris Dauphine:
- Corporate Strategy, Bachelor, third-year:
2021/2022
- Mathematics for Artificial Intelligence, Bachelor, first-year:
2021/2022
Paris Sciences & Lettres University (PSL):
- Mathematics & Probability, Multidisciplinary Cycle of Higher Studies (CPES), 2nd year:
2016/2017, 2017/2018, 2018/2019 and 2019/2020
- Statistics, Multidisciplinary Cycle of Higher Studies (CPES), 2nd year:
2016/2017, 2017/2018, 2018/2019 and 2019/2020
Paris Sorbonne University:
- Econometrics, L3, Bachelor of Economics:
- Statistics, L3, Bachelor of Economics:
Conference: "Pourquoi réguler l'IA est-il difficile ?"
Sciences Po Paris, November 16, 2021, Jean-Marie John-Mathews
Summary: While Artificial Intelligence (AI) raises more and more ethical issues (fairness, explainability, privacy, etc.), a set of regulatory tools and methods has emerged over the past few years, such as fairness metrics, explanation procedures, anonymization methods, and so on. When data are granular, voluminous, and behavioral, these “responsible tools” have trouble regulating AI models without using the normative categories usually applied to formulate critiques. How can we normalize an AI that pretends to compute the world without categories? To answer this question, we have developed, drawing on the technical literature of AI ethics, an algorithmic method to regulate AI with respect to ethical issues such as discrimination, opacity, and privacy. We then formulate four empirical and theoretical critiques to highlight the limitations of technical tools in addressing ethics. First, we pinpoint the limitations of methods that generate post-hoc explanations of "black box" models. Second, we show that we cannot rely on transparency alone to address ethical issues, since explanations cannot always reveal discrimination. We then demonstrate, using concepts from Boltanski's pragmatic sociology, that methods for fighting discrimination tend towards a “complex domination system” in which AI constantly modifies the contours of reality without offering any outlet for criticism. Finally, we show that AI is part of a broader movement of extension that dilutes the role of institutions. These four empirical and theoretical criticisms allow us to adjust our first proposal for normalizing AI. Starting from a technical tool, we propose an open and material inquiry, allowing us to constantly update the question of ends and means within AI collectives.

Regulation in AI is difficult because finding stable normative categories is much harder than in traditional software. This is what I tried to experiment with and theorize in my research.
One way to overcome this issue is to combine two movements: (1) Open up the AI: collect feedback, errors, and criticism. This step is essential to trigger a reflexive attitude from the AI constructor. (2) Materialize the new associations: structure the feedback, build metrics, and test them. This is essential to stabilize the new findings brought by the opening step. At Giskard AI, we help AI constructors collect feedback and turn it into actionable tests. It is the concretization of academic research in the operational world.
Conference: "A Critical Empirical Study of Black-box Explanations in AI"
International Conference on Information Systems, December 14, 2021, Jean-Marie John-Mathews
Summary: This paper raises empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing their lack of interpretability and their societal consequences. Using a representative consumer panel to test our assumptions, we report two main findings. First, we show that post-hoc explanations of black-box models tend to give partial and biased information on the underlying mechanism of the algorithm and can be subject to manipulation or information withholding by diverting users' attention. Second, we show the importance of tested behavioral indicators, in addition to self-reported perceived indicators, to provide a more comprehensive view of the dimensions of interpretability. This paper contributes to shedding new light on the ongoing theoretical debate between intrinsically transparent AI models and post-hoc explanations of black-box complex models, a debate which is likely to play a highly influential role in the future development and operationalization of AI systems.
Seminar: "L'éthique de l'intelligence artificielle en pratique : enjeux et limites"
Sciences Po Paris, November 16, 2021, Jean-Marie John-Mathews
Summary: While Artificial Intelligence (AI) is increasingly criticized for the ethical issues it raises, a set of tools and methods has emerged in recent years to normalize it, such as debiasing algorithms, fairness metrics, explanation generation, etc. These so-called responsible AI methods must adapt to algorithms fed with increasingly granular, voluminous, and behavioral data. In this research, we describe how AI claims to compute the world itself without using the normative categories we usually rely on to formulate critiques. How can we normalize an AI that claims precisely to position itself beneath norms?
Conference: "Is interpretability a solution to address discrimination in Machine Learning?"
Good In Tech Webinar, May 12, 2021, Jean-Marie John-Mathews
Summary: We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics, and the interpretability of AI decisions. Can interpretable AI decisions help address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents.
Conference: "At the heart of algorithms. Scientific approach to automation, ethical, legal and societal issues"
Imagine Institute, Ethics Space, February 6, 2020, Jean-Marie John-Mathews
Conférence : "L'intelligence artificielle : pourquoi l'IA pose-t-elle problème ?"
Université de droit et économie de Metz, January 17, 2020, Jean-Marie John-Mathews
Summary: In recent years, many applications such as autonomous cars, virtual assistants, recommendation systems, and facial or voice recognition have used Machine Learning, a sub-discipline of artificial intelligence (AI). These uses are the subject of debate and today raise many ethical problems of a varied nature, such as acts of discrimination, infringements of individual freedoms and of subjects' autonomy, or the responsibility of algorithms given their opacity. In this conference, we will present these different concerns from a technical point of view. A growing community of researchers and engineers in artificial intelligence is addressing this ethical subject by offering so-called responsible AI tools. From a presentation of these so-called responsible tools and methods, we will expose their theoretical assumptions as well as their limits and contradictions.
Interview: "Une plongée dans les réseaux de neurones"
IT for Business, October 2019
Conference: "The mathematical engineer in a world of artificial intelligence"
Paris Diderot University, February 18, 2019, Jean-Marie John-Mathews
Summary: In recent years, many applications such as autonomous cars, virtual assistants, recommendation systems, and facial or voice recognition claim to "use artificial intelligence". What does that mean? A closer look at the models behind these technologies reveals their common denominator, which is none other than a discipline within applied mathematics: machine learning. What is the place of the mathematical engineer in a world of artificial intelligence? Has their role in society changed? We will see in this conference how the design of algorithms puts the engineer at the heart of the ethical and societal issues induced by artificial intelligence.