Tuesday, 29 December 2020

Practice causal inference: Conventional supervised learning can't do inference

[Figure: Domino OR-gate (Wikipedia)]
Preamble

A trained model may provide predictions for input values it has never seen before, but this is not inference, at least not for 'classical' supervised learning. In reality it provides an interpolation from the training set, i.e., via function approximation.

Interpolation here does not mean that all predictions lie within the convex hull of the training set, but interpolation in the numerical sense of a procedure that uses the training data only.
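As a rough illustration of this interpolation-only behaviour, the following sketch fits a small network to sin(x) on a bounded interval and then queries it far outside that interval. The model choice and data here are my illustrative assumptions, not from the post:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative setup: learn f(x) = sin(x) from samples on [0, 2*pi].
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, size=(200, 1))
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

# Inside the training range the approximation is good ...
print(model.predict([[np.pi / 2]]))   # close to sin(pi/2) = 1.0
# ... far outside it, the model does not "infer" the periodic law.
print(model.predict([[10 * np.pi]]))  # typically far from sin(10*pi) = 0.0
```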

What does inference mean?

By inference, we imply going beyond the training data; references to distributional shift, compositional learning, or similar types of learning should be raised here. This is especially apparent in a simple example: a human infant may learn how 3 is similar to 8, but without labels, supervised learning in a naïve setting cannot establish this, i.e., a model trained on MNIST with all 8s removed cannot learn what an 8 is in plain form.
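A minimal sketch of the missing-digit point, using scikit-learn's small digits dataset as a stand-in for MNIST; the dataset, classifier, and masking are my illustrative assumptions:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Drop every 8 from the training data.
X, y = load_digits(return_X_y=True)
train_mask = y != 8
clf = LogisticRegression(max_iter=5000)
clf.fit(X[train_mask], y[train_mask])

# The class 8 is simply absent from the hypothesis space:
print(clf.classes_)                 # [0 1 2 3 4 5 6 7 9]
# Every true 8 is forced onto some other digit; 8 is never predicted.
print(clf.predict(X[y == 8][:10]))
```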

In the case of ontology inference, the ontology being a causal graph, that is "real" inference, as it symbolically traverses a graph of causal connections.
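A toy sketch of what such symbolic traversal could look like; the graph, node names, and the helper effects_of are hypothetical illustrations, not an ontology-inference API:

```python
# Illustrative causal graph: edges point from cause to effect.
causes = {
    "rain":          ["wet_ground"],
    "sprinkler":     ["wet_ground"],
    "wet_ground":    ["slippery_road"],
    "slippery_road": ["accident_risk"],
}

def effects_of(node, graph):
    """Return all nodes reachable from `node`, i.e., its causal consequences."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(graph.get(current, []))
    return seen

# Consequences are inferred by traversing edges, not by interpolating
# over training points:
print(effects_of("rain", causes))  # {'wet_ground', 'slippery_road', 'accident_risk'}
```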

Outlook

We might not be able to transfer this directly to a regression scenario, but it is probably possible by augmenting our models with SCMs (structural causal models) and a hybrid symbolic-regression approach.
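As a rough sketch of what an SCM adds that interpolation does not, the following assumes a hypothetical linear structural equation Y := 2X + U_Y and contrasts the observational mean of Y with its mean under the intervention do(X = 1):

```python
import numpy as np

rng = np.random.default_rng(42)

def scm_sample(n, do_x=None):
    """Sample from the SCM X -> Y, optionally under do(X = do_x)."""
    u_x = rng.normal(size=n)
    u_y = rng.normal(size=n)
    x = u_x if do_x is None else np.full(n, do_x)
    y = 2.0 * x + u_y  # structural equation Y := 2X + U_Y
    return x, y

# Observational vs. interventional distribution of Y:
_, y_obs = scm_sample(10_000)
_, y_do = scm_sample(10_000, do_x=1.0)
print(y_obs.mean(), y_do.mean())  # ~0.0 vs. ~2.0
```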

Postscript
  • The looper repository provides a resource list for causal inference.
  • Thanks to Patrick McCrae for invoking the ontology inference comparison.
Cite

 @misc{suezen20pci,
     title = {Practice causal inference: Conventional supervised learning can't do inference},
     howpublished = {\url{https://memosisland.blogspot.com/2020/12/practice-causal-inference-conventional.html}},
     author = {Mehmet Süzen},
     year = {2020}
 }
