
What Is Counterfactual Analysis?

Exploring causal relationships by asking ‘what if?’.

Counterfactual analysis (or counterfactual thinking) explores outcomes that did not actually occur but could have occurred under different conditions. It’s a kind of what if? analysis and a useful way to test cause-and-effect relationships.

Consider deciding which road to take on the drive home. You take Right Ave and encounter heavy traffic. But you could have taken Left Ave and had less traffic.

The outcome—less traffic—did not actually occur but could have occurred if you had taken a different road.

This is an example of a counterfactual; in this case, it helps to test the causal relationship between the choice of road (Right Ave) and the outcome (the amount of traffic).

Counterfactual analysis use cases

Counterfactual analysis has a number of practical uses.

An example is recommender systems, where counterfactuals can be used to supplement missing information.

The data available to a recommender system, for instance, is limited to what it observes. A typical observation would be the number of recommended articles a user downloaded.

With counterfactual analysis, you can estimate the number of articles the user would have downloaded if they had been given a different set of recommendations. This can be helpful in improving the system’s future recommendations.
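One common way to make this kind of estimate is inverse propensity scoring (IPS), which reweights logged outcomes by how likely a different recommendation policy would have been to show the same items. The sketch below uses entirely hypothetical data and a toy policy; a real system would use a trained model and much larger logs.

```python
# Hypothetical log: (recommendation set shown, downloads observed,
# probability the old policy showed that set).
logged = [
    ("A", 3, 0.7),
    ("B", 5, 0.2),
    ("A", 2, 0.7),
    ("C", 1, 0.1),
]

def ips_estimate(logs, new_policy):
    """Estimate average downloads had `new_policy` chosen the sets.

    `new_policy` maps each recommendation set to the probability the
    new policy would have shown it. Each logged outcome is reweighted
    by new probability / old probability (the propensity).
    """
    total = 0.0
    for shown, downloads, propensity in logs:
        total += (new_policy.get(shown, 0.0) / propensity) * downloads
    return total / len(logs)

# Counterfactual question: what if sets had been shown uniformly at random?
uniform = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
print(round(ips_estimate(logged, uniform), 2))
```

Rarely shown sets (like "C" here) get large weights, which is why IPS estimates can be noisy when propensities are small.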

Another example is in historical analysis. Here, counterfactual analysis can be used to evaluate a historical event’s causal significance1. By exploring how historical events may have unfolded under small changes in circumstances, historians can assess the importance of factors that may have caused the event.

Counterfactual analysis is even useful in macroeconomics. A recent study2 explored the effectiveness of US monetary policy using a counterfactual approach. The authors consider this approach, combined with simple modeling, to be more reliable than using complex models with incomplete information.

Counterfactual analysis and XAI

An emerging application of counterfactual analysis is in explainable artificial intelligence (XAI). Here, counterfactual thinking can help to better understand and explain complex AI systems.

Many AI systems are difficult to understand and have black-box inner workings. When these systems are used to make important decisions—those that affect people’s lives—the decisions need to be explained.

Consider a loan application system. AI systems can assess the creditworthiness of an applicant and, on that basis, may decide to deny a loan. If this happens, the applicant may wish to understand why they were denied.

AI systems such as these are not easy to explain, but counterfactual thinking can help.
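A counterfactual explanation answers the applicant’s question in what-if form: what is the smallest change that would have flipped the decision? The sketch below uses a hypothetical, hand-written approval rule and invented figures purely for illustration; a real system would search for counterfactuals against a trained model.

```python
# Toy decision rule (hypothetical): approve when income comfortably
# exceeds twice the applicant's debt.
def approve(income, debt):
    return income - 2 * debt >= 30_000

def counterfactual_income(income, debt, step=1_000):
    """Smallest income increase that flips a denial into an approval."""
    needed = income
    while not approve(needed, debt) and needed < income + 1_000_000:
        needed += step
    return needed - income

applicant = {"income": 40_000, "debt": 10_000}
if not approve(**applicant):
    delta = counterfactual_income(**applicant)
    print(f"Denied. An income ${delta:,} higher would have been approved.")
```

The appeal of this kind of explanation is that it mirrors how people already reason: it names a concrete, actionable change rather than exposing the model’s internals.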

Recent research3 has revealed how people think counterfactually in their daily lives. This has led to six useful observations:

  1. People tend to use counterfactuals to think creatively, rather than logically, about solving problems
  2. People tend to use counterfactuals to imagine how outcomes might have been better, rather than worse
  3. Counterfactuals help people identify cause-and-effect relationships for events
  4. Counterfactuals lead people to assign blame to actions that may have caused events
  5. People imagine, through counterfactual thinking, how outcomes may have been different if controllable events were changed
  6. Counterfactual thinking helps people to simulate, or imagine, multiple possibilities

Why does all this matter?

These tendencies can be exploited to design counterfactual approaches that people can relate to. In this way, when used to help explain complex AI systems, counterfactual analysis can help to build trust.

In summary

  • Counterfactual analysis explores what if? scenarios to assess outcomes that did not occur, but could have occurred under different conditions
  • Counterfactual analysis is useful in testing cause-and-effect relationships
  • Use cases for counterfactual thinking include recommender systems, historical analysis and explainable AI
  • As ongoing research deepens our understanding of how humans think counterfactually, counterfactual analysis can be used to improve explainable AI


[1] Alexander Maar, Possible uses of counterfactual thought experiments in history, Principia, 18(1):87-113, 2014.

[2] M. H. Pearson and R. P. Smith, Counterfactual analysis in macroeconometrics: An empirical investigation into the effects of quantitative easing, Research in Economics, 70(2):262-280, June 2016.

[3] Ruth M. J. Byrne, Counterfactuals in explainable artificial intelligence (XAI): Evidence from human reasoning, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Aug 10-16, 2019.
