
When Rohit Bhattacharya began his PhD in computer science, his aim was to build a tool that could help physicians to identify people with cancer who would respond well to immunotherapy. This form of treatment helps the body’s immune system to fight tumours, and works best against malignant growths that produce proteins that immune cells can bind to. Bhattacharya’s idea was to create neural networks that could profile the genetics of both the tumour and a person’s immune system, and then predict which people would be likely to benefit from treatment.

But he discovered that his algorithms weren’t up to the task. He could identify patterns of genes that correlated to immune response, but that wasn’t sufficient1. “I couldn’t say that this specific pattern of binding, or this specific expression of genes, is a causal determinant in the patient’s response to immunotherapy,” he explains. Bhattacharya was stymied by the age-old dictum that correlation does not equal causation - a fundamental stumbling block in artificial intelligence (AI).

Computers can be trained to spot patterns in data, even patterns that are so subtle that humans might miss them. And computers can use those patterns to make predictions - for instance, that a spot on a lung X-ray indicates a tumour2. But when it comes to cause and effect, machines are typically at a loss. They lack a common-sense understanding of how the world works that people have just from living in it. AI programs trained to spot disease in a lung X-ray, for example, have sometimes gone astray by zeroing in on the markings used to label the right-hand side of the image3. It is obvious, to a person at least, that there is no causal relationship between the style and placement of the letter ‘R’ on an X-ray and signs of lung disease. But without that understanding, any differences in how such markings are drawn or positioned could be enough to steer a machine down the wrong path.

For computers to perform any sort of decision making, they will need an understanding of causality, says Murat Kocaoglu, an electrical engineer at Purdue University in West Lafayette, Indiana. “Anything beyond prediction requires some sort of causal understanding,” he says. “If you want to plan something, if you want to find the best policy, you need some sort of causal reasoning module.”

Incorporating models of cause and effect into machine-learning algorithms could also help mobile autonomous machines to make decisions about how they navigate the world. “If you’re a robot, you want to know what will happen when you take a step here with this angle or that angle, or if you push an object,” Kocaoglu says.

In Bhattacharya’s case, it was possible that some of the genes that the system was highlighting were responsible for a better response to the treatment. But a lack of understanding of causality meant that it was also possible that the treatment was affecting the gene expression - or that another, hidden factor was influencing both.
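That hidden-factor scenario can be made concrete with a small simulation. The sketch below is illustrative only and uses made-up numbers rather than data from Bhattacharya’s study: an unobserved factor drives both a gene-expression score and the response to treatment, so the two end up strongly correlated even though neither causes the other, and the association vanishes once the hidden factor is accounted for.

    # Illustrative only: synthetic numbers, not data from Bhattacharya's work.
    # An unobserved hidden_factor drives both gene_expression and response,
    # so the two correlate despite having no causal effect on each other.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 100_000

    hidden_factor = rng.normal(size=n)                          # unobserved common cause
    gene_expression = 2.0 * hidden_factor + rng.normal(size=n)  # does not affect response
    response = 1.5 * hidden_factor + rng.normal(size=n)         # driven by the hidden factor only

    # A strong correlation arises purely from the shared hidden cause.
    print(np.corrcoef(gene_expression, response)[0, 1])         # roughly 0.74

    def residual(y, x):
        """Remove the linear influence of x from y (both are roughly zero-mean here)."""
        slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        return y - slope * x

    # Adjusting for the hidden factor - regressing it out of both variables
    # and correlating the residuals - makes the association disappear.
    print(np.corrcoef(residual(gene_expression, hidden_factor),
                      residual(response, hidden_factor))[0, 1])  # roughly 0

In real data, of course, the hidden factor is unmeasured, which is why the correlation alone could not tell Bhattacharya which of the possible explanations was the right one.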

The potential solution to this problem lies in something known as causal inference - a formal, mathematical way to ascertain whether one variable affects another.

Computer scientist Rohit Bhattacharya (back) and his team at Williams College in Williamstown, Massachusetts, discuss adapting machine learning for causal inference. Credit: Mark Hopkins

Causal inference has long been used by economists and epidemiologists to test their ideas about causation. The 2021 Nobel prize in economic sciences went to three researchers who used causal inference to ask questions such as whether a higher minimum wage leads to lower employment, or what effect an extra year of schooling has on future income.
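As a glimpse of what that formal machinery looks like, one of the field’s standard tools - a general result, not something specific to the studies mentioned in this article - is the adjustment formula, which links the effect of deliberately setting a variable X to quantities that can be estimated from observational data once a set of confounders Z has been measured:

    P(Y = y \mid \mathrm{do}(X = x)) = \sum_{z} P(Y = y \mid X = x, Z = z)\, P(Z = z)

The do(·) on the left denotes an intervention rather than a passive observation; when Z captures the common causes of X and Y, the purely observational probabilities on the right are enough to compute it.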
