The Seven Tools of Causal Inference, with Reflections on Machine Learning

The dramatic success of machine learning has led to an explosion of AI applications and rising expectations for autonomous systems that exhibit human-level intelligence. These expectations, however, have met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness: machine learning researchers have noted that current systems lack the capability to recognize or react to new circumstances they have not been specifically programmed or trained for. Another obstacle is explainability, that is, “machine learning models remain mostly black boxes” [Ribeiro et al. 2016], unable to explain the reasons behind their predictions or recommendations, thus eroding users’ trust and impeding diagnosis and repair. A third obstacle concerns the understanding of cause-effect connections.

According to Professor Judea Pearl from UCLA Computer Science, causal reasoning is an indispensable component of human thought that should be formalized and algorithmitized toward achieving human-level machine intelligence. He has explicated some of the impediments toward that goal in the form of a three-level hierarchy comprising Association (level 1), Intervention (level 2) and Counterfactual (level 3), and showed that inference at levels 2 and 3 requires a causal model of one’s environment.

In addition, Professor Pearl has described seven cognitive tasks that require tools from those two higher levels of inference and demonstrated how they can be accomplished in the Structural Causal Model (SCM) framework:

  • Tool 1: Encoding Causal Assumptions – Transparency and Testability
  • Tool 2: Do-calculus and the Control of Confounding
  • Tool 3: The Algorithmization of Counterfactuals
  • Tool 4: Mediation Analysis and the Assessment of Direct and Indirect Effects
  • Tool 5: Adaptability, External Validity and Sample Selection Bias
  • Tool 6: Recovering from Missing Data
  • Tool 7: Causal Discovery

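As a taste of Tool 2, the back-door adjustment formula lets one compute an interventional quantity from purely observational data, provided the measured covariate blocks all confounding paths. The sketch below is an illustration added here, not part of the original article: it assumes the same hypothetical toy model as before (confounder Z affecting both X and Y) and that Z satisfies the back-door criterion, so P(y | do(X = x)) = Σ_z P(y | x, z) P(z).

```python
import random

random.seed(7)

def draw():
    """Observational draw from a toy SCM: Z -> X, Z -> Y, and X -> Y."""
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = random.random() < 0.1 + 0.4 * x + 0.4 * z
    return z, x, y

data = [draw() for _ in range(200_000)]  # observational data only; no interventions

def p_y_given(x_val, z_val):
    """Estimate P(Y = 1 | X = x_val, Z = z_val) from the observational sample."""
    cell = [y for z, x, y in data if x == x_val and z == z_val]
    return sum(cell) / len(cell)

def p_z(z_val):
    """Estimate the marginal P(Z = z_val)."""
    return sum(z == z_val for z, _, _ in data) / len(data)

# Back-door adjustment: P(Y = 1 | do(X = 1)) = sum_z P(Y = 1 | X = 1, Z = z) P(Z = z)
p_do_1 = sum(p_y_given(True, zv) * p_z(zv) for zv in (False, True))
print(f"adjusted P(Y=1 | do(X=1)) ~ {p_do_1:.2f}")  # about 0.70, the true causal effect
```

The adjusted estimate recovers the interventional probability without ever performing an intervention; the do-calculus generalizes this idea, giving graphical conditions under which such reductions to observational quantities exist.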
It is important to note that the models used in accomplishing these tasks are structural (or conceptual) and require no commitment to a particular form of the distributions involved. On the other hand, the validity of all inferences depends critically on the veracity of the assumed structure. If the true structure differs from the one assumed, and the data fit both equally well, substantial errors may result, which can sometimes be assessed through a sensitivity analysis. It is also important to keep in mind that the theoretical limitations of model-free machine learning do not apply to tasks of prediction, diagnosis and recognition, where interventions and counterfactuals assume a secondary role.

However, the model-assisted methods by which these limitations are circumvented may nevertheless apply to problems of opacity, robustness, explainability and missing data, which are generic to machine learning tasks. Moreover, given the transformative impact that causal modeling has had on the social and medical sciences, it is only natural to expect a similar transformation to sweep through machine learning technology once it is enriched with the guidance of a model of the data-generating process. Professor Pearl expects this symbiosis to yield systems that communicate with users in their native language of cause and effect and, leveraging this capability, to become the dominant paradigm of next-generation AI.

It is also worth noting that Professor Pearl won the Turing Award in 2011 for “fundamental contributions to artificial intelligence through the development of a calculus of probabilistic and causal reasoning.” In 2020, Professor Pearl was also named a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). AIWS.net is currently working with Professor Pearl to develop a Decision-Making System based on Causal Inference methodology. This AIWS.net system will be a significant contribution to the Democratic Alliance on Digital Governance, which is a part of Social Contract 2020 – A New Social Contract in the Age of AI.

The original article can be found here.