Saturday, 28 March 2020

Book review: A tutorial introduction to the mathematics of deep learning

Preamble
Artificial Intelligence Engines:
An introduction to the Mathematics
of Deep Learning
by Dr James V. Stone
Links: the book and the GitHub repository.
(c) 2019 Sebtel Press
Deep learning and associated connectionist approaches are now applied routinely in industry and academic research, from image analysis to natural language processing and areas as cool as reinforcement learning. As practitioners, we mostly consume these techniques as black-boxed algorithms shipped in well designed, tested and reliable libraries such as TensorFlow or PyTorch. However, many practitioners lack the mathematical foundations and an understanding of the core algorithms. Unfortunately, many academic books and papers subliminally try to give an impression of superiority and avoid a simple pedagogical approach.

In this post we review a unique book that tries to fill this gap: a pedagogical treatment of the mathematics of deep learning that avoids showing off mathematical complexity and instead aims at conveying how things work from the ground up. Moreover, the book provides pseudo-code that one can use to implement things from scratch, along with a supporting implementation in a GitHub repo. The author, Dr James V. Stone, a trained cognitive scientist and researcher in mathematical neuroscience, has taken this approach in his other books for many years now, writing for students rather than to impress his peers. One important note: this is not a cookbook or a practical tutorial, but an upper-intermediate level academic book.

Building associations and classifying with a network

The logical and conceptual separation between association and classification tasks is introduced in the initial chapters. It is ideal to start from learning a single association with a single connection, with a gentle introduction to gradient descent for learning the weights, before moving to two associations and two connections, and from there to many. This reminds me of George Gamow's "One, Two, Three... Infinity" as a pedagogical principle; a toy version of that first step is sketched below. The perceptron is introduced later, showing how classification rules can be generated by a network and the problems it runs into with the XOR problem.
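As a flavour of that first step, here is a minimal Python sketch, entirely my own rather than the book's, of learning one association with one connection via gradient descent on a squared error; the input, target and learning rate are hypothetical:

# A minimal sketch (not from the book): learning a single association
# x -> y_target with one weight w, via gradient descent on squared error.
x, y_target = 2.0, 1.0   # one input-output association (hypothetical values)
w, eta = 0.0, 0.05       # initial weight and learning rate

for step in range(100):
    y = w * x                      # network output: one connection, no bias
    error = y - y_target           # prediction error
    grad = 2.0 * error * x         # derivative of (y - y_target)^2 w.r.t. w
    w -= eta * grad                # gradient descent update

print(f"learned weight w = {w:.4f}, output = {w * x:.4f}")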

Backpropagation, Hopfield networks and Boltzmann machines

A detailed implementation of backpropagation is provided from scratch, without too much cluttering index notation and with remarkable clarity; it is probably the best explanation I have ever encountered. The following chapters introduce Hopfield networks and Boltzmann machines from the ground up to an applied level. Unfortunately, many modern deep learning books skip these two great models, but Dr Stone makes both of them implementable for a practitioner just by reading his chapters. It is very impressive. I am a bit biased towards Hopfield networks, as I see them as an extension of Ising models and their stochastic counterparts, but I have not seen anywhere else such an explanation of how Hopfield networks learn, together with a pseudo-code algorithm ready to use on a real task; a minimal sketch of the idea follows.
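For a flavour of the idea, here is a minimal Hopfield network sketch in Python; it is my own illustration, not the book's pseudo-code, using Hebbian storage of two binary patterns and recall from a corrupted cue:

# A minimal Hopfield network sketch (my own illustration, not the book's
# pseudo-code): Hebbian storage of +1/-1 patterns and recall by updates.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])   # two stored patterns
n = patterns.shape[1]

# Hebbian learning: W_ij = (1/n) * sum_p x_i^p * x_j^p, with zero diagonal
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

# Recall: start from a corrupted cue and update units one at a time
state = patterns[0].copy()
state[:2] *= -1                           # flip two bits as noise
for _ in range(10):                       # a few full update sweeps
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered first pattern:", np.array_equal(state, patterns[0]))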

Advanced topics

Personally, I see the remaining chapters as advanced topics: deep Boltzmann machines, variational autoencoders, GANs and an introduction to reinforcement learning, with the probable exception of deep backpropagation in Chapter 9. I would say that what is now known as deep learning had its inception in the architectures covered in sections 9.1 to 9.7.

Glossary, basic linear algebra and statistics

The appendices provide a fantastic conceptual introduction to the jargon and the basics of the main mathematical techniques. Of course, this is not a replacement for fully-fledged linear algebra and statistics books, but it provides immediate, concise explanations.

Not a cookbook: not an "import tensorflow as tf" book

One other crucial advantage of this book is that it is definitely not a cookbook. Unfortunately, almost all books related to deep learning are written in a cookbook style; this book is not. However, it is supplemented by a full implementation in a repository supporting each chapter, URL here.

Conclusion

This little book achieves so much with a down-to-earth approach, introducing basic concepts with a respectful attitude, assuming the reader is very smart but inexperienced in the field. Whether you are a beginner or an experienced research scientist, this is a must-have book. I still see it as an academic book, and it could be used as the main text in an upper-undergraduate elective such as
"Mathematics of Deep Learning".

Enjoy reading and learning from this book. Thank you, Dr Stone, for your efforts on making academic books more accessible.

Disclosure: I received a review copy of the book but I have bought another copy for a family member. 

Sunday, 22 March 2020

Computational Epidemiology and Data Scientists: Don't post arbitrary outbreak analyses

Summary

Many data scientists are trained or experienced in using tools for statistical modelling, forecasting or machine learning solutions. This does not necessarily mean that they should jump in, run an ad-hoc analysis on the publicly available COVID-19 outbreak data, draw policy conclusions and publish them on their blogs or other media. A rule of thumb before doing such a thing: you should have at least one published paper, article or software solution related to outbreaks that appeared before December 2019. Please be considerate; epidemiological modelling is not merely fitting an exponential distribution.
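To illustrate only that point, and nothing more, here is a minimal SIR sketch with toy, hypothetical parameters; it is not an analysis or a forecast, just a reminder that compartmental dynamics differ from a bare exponential fit:

# A minimal SIR model sketch with toy, hypothetical parameters; for
# illustration only, not an outbreak analysis or a forecast.
# dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
N, beta, gamma = 1_000_000, 0.3, 0.1      # population, contact and recovery rates
S, I, R = N - 1.0, 1.0, 0.0               # one initial infectious case
dt, days = 0.1, 300
peak_infected = 0.0

for _ in range(int(days / dt)):           # simple Euler integration
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    S = S - new_infections
    I = I + new_infections - new_recoveries
    R = R + new_recoveries
    peak_infected = max(peak_infected, I)

# Growth slows as susceptibles deplete, unlike a plain exponential curve.
print(f"peak infected: {peak_infected:.0f}, final recovered: {R:.0f}")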

Conclusion
Please refrain from posting blog posts or similar on infection modelling, or from giving advice based on an ad-hoc data analysis you did over your lunch break, if you have not worked on computational epidemiology before. There is a vast academic literature on computational epidemiology. Let the experts in those fields express their modelling efforts first. Let us value expertise in an area.

Appendix: Computational epidemiology introductory resources

Here we provide limited pointers to the computational epidemiology literature. Google Scholar is your friend for finding many more resources.

  • Computational Epidemiology
    Madhav Marathe, Anil Kumar S. Vullikanti
    Communications of the ACM, July 2013, Vol. 56 No. 7, Pages 88-96 doi
  • Broadwick: a framework for computational epidemiology
    O’Hare, A., Lycett, S.J., Doherty, T. et al.
    BMC Bioinformatics 17, 65 (2016).
    doi
  • Mathematical Tools for Understanding Infectious Disease Dynamics
    (Princeton Series in Theoretical and Computational Biology)
    Odo Diekmann, Hans Heesterbeek, and Tom Britton
    Princeton University Press
  • Agent-Based Simulation Tools in Computational Epidemiology
    Patlolla P., Gunupudi V., Mikler A.R., Jacob R.T. (2006)
    doi
  • DIMACS 2002-2011 Special Focus on Computational and Mathematical Epidemiology Rutgers working group
  • Containment strategy for an epidemic based on fluctuations in the SIR model
    Philip Bittihn, Ramin Golestanian
    Oxford/Max Planck team
    arXiv:2003.08784

Saturday, 22 February 2020

A simple and interpretable performance measure for a binary classifier

Kindly reposted to KDnuggets by Gregory Piatetsky-Shapiro 


Preamble


A core application of machine learning models is the binary classification task. It appears in a plethora of areas, from diagnostic tests in medicine to credit risk decision making for consumers. Techniques for building classifiers vary from simple decision trees to logistic regression and, lately, super cool deep learning models that leverage multi-layered neural networks. Although these approaches differ mathematically in construction and training methodology, things get tricky when it comes to measuring their performance. In this post, we propose a simple and interpretable performance measure for a binary classifier in practice. Some background in classification is assumed.

Why is ROC-AUC not interpretable?


Figure: varying the threshold produces different confusion matrices (Wikipedia).
The de-facto standard for reporting classifier performance is the Receiver Operating Characteristic (ROC) - Area Under Curve (AUC) measure. It originates from the 1940s and the development of radar, where it was used to measure detection performance. There are at least five different definitions of what ROC-AUC means, and even people with a PhD in machine learning have an excessively difficult time explaining what AUC means as a performance measure. Because AUC functionality is available in almost all libraries, reporting it has become almost a religious ritual in machine learning papers as the measure of classification performance. However, its interpretation is not easy, quite apart from its problematic comparison issues, see hmeasure. AUC measures the area under the curve of the True Positive Rate (TPR) as a function of the False Positive Rate (FPR), where the (TPR, FPR) pairs are extracted from confusion matrices at different thresholds.

$$f(x) = y$$
$$AUC = \int_{0}^{1} f(x)\, dx$$

where $y$ is the TPR and $x$ is the FPR. Apart from the multitude of interpretations and the ease of confusion, there is no clear purpose in taking the integral over the FPR. Obviously, we would like a perfect classifier to have an FPR of zero, but what the resulting area represents as a mathematical object is not clear.
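To make the construction concrete, here is a small sketch, my own and on synthetic data, that sweeps thresholds, builds the confusion matrices, collects the (FPR, TPR) pairs and integrates numerically; this is exactly the area that AUC reports:

# A sketch of how ROC-AUC arises: sweep thresholds, build confusion
# matrices, collect (FPR, TPR) pairs and integrate numerically.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                  # synthetic labels
scores = y_true * 0.5 + rng.normal(0, 0.5, size=500)   # synthetic classifier scores

tpr, fpr = [], []
for t in np.linspace(scores.min(), scores.max(), 200):
    pred = (scores >= t).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    tpr.append(tp / (tp + fn))
    fpr.append(fp / (fp + tn))

# Reverse so FPR is ascending, then take the trapezoidal area under TPR(FPR)
fpr, tpr = np.array(fpr)[::-1], np.array(tpr)[::-1]
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
print(f"approximate AUC: {auc:.3f}")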

Probability of correct classification (PCC)

A simple and interpretable performance measure for a binary classifier would be great both for highly technical data scientists and for non-technical stakeholders. The basic tenet here is that the purpose of a classifier is its ability to differentiate two classes; this boils down to a probability value, the probability of correct classification (PCC). An obvious choice is the so-called balanced accuracy (BA). It is usually recommended for unbalanced problems, even by SAS, though they used a multiplication of the probabilities. Here we will call BA the PCC and use addition instead, due to the statistical dependence between the classes:
$PCC = (TPR + TNR)/2$

$TPR = TP/(\text{Condition Positive}) = TP/(TP+FN)$
$TNR = TN/(\text{Condition Negative}) = TN/(TN+FP)$.

PCC tells us how good the classifier is at detecting either of the two classes, and it is a probability value in $[0,1]$. Note that using total accuracy over both positive and negative cases can be misleading: even if our training data is balanced, the batches on which we measure performance in production may not be, so accuracy alone is not a good measure; a small numerical sketch follows.
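A minimal numerical sketch, my own and with hypothetical counts, of how PCC differs from plain accuracy on an unbalanced batch:

# PCC (balanced accuracy) from a confusion matrix, with hypothetical counts.
def pcc(tp, fp, fn, tn):
    tpr = tp / (tp + fn)      # true positive rate (sensitivity)
    tnr = tn / (tn + fp)      # true negative rate (specificity)
    return (tpr + tnr) / 2.0  # probability of correct classification

# An unbalanced batch: 10 positives, 90 negatives.
print(pcc(tp=7, fp=9, fn=3, tn=81))   # (0.7 + 0.9) / 2 = 0.8
# Plain accuracy is (7 + 81) / 100 = 0.88, which hides the weak positive class.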

Production issues
An immediate question is how to choose the threshold used to generate the confusion matrix. One option is to choose the threshold that maximizes PCC on the test set and carry it into production. To improve the estimation of PCC, resampling of the test set can be performed to obtain an uncertainty estimate; a sketch is given below.
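One possible sketch, my own and on synthetic data, of choosing the threshold that maximizes PCC on the test set and bootstrapping the test set for an uncertainty estimate:

# A sketch: pick the threshold maximising PCC on the test set, then
# bootstrap the test set to attach an uncertainty to the estimate.
import numpy as np

rng = np.random.default_rng(1)
y_test = rng.integers(0, 2, size=400)                  # synthetic test labels
scores = y_test * 0.6 + rng.normal(0, 0.5, size=400)   # synthetic scores

def pcc_at(threshold, y, s):
    pred = (s >= threshold).astype(int)
    tpr = np.mean(pred[y == 1] == 1)
    tnr = np.mean(pred[y == 0] == 0)
    return (tpr + tnr) / 2.0

grid = np.linspace(scores.min(), scores.max(), 100)
best_t = max(grid, key=lambda t: pcc_at(t, y_test, scores))

boot = []
for _ in range(500):
    idx = rng.integers(0, len(y_test), len(y_test))    # resample with replacement
    boot.append(pcc_at(best_t, y_test[idx], scores[idx]))
print(f"threshold {best_t:.2f}, PCC {np.mean(boot):.3f} +/- {np.std(boot):.3f}")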

Conclusion

We try to circumvent reporting AUC by introducing PCC, i.e. balanced accuracy, as a simple and interpretable performance measure for a binary classifier. It is easy to explain to a non-technical audience. An improved PCC with better estimation properties could be introduced, but the main interpretation remains the same: the probability of correct classification.

Sunday, 15 December 2019

Bringing back Occam's razor to modern connectionist machine learning:

A simple complexity measure based on statistical physics
Cascading Periodic Spectral Ergodicity (cPSE)

Kindly reposted to KDnuggets by Gregory Piatetsky-Shapiro with the title Applying Occam's razor to Deep Learning 

Kindly reviewed by Cornelius Weber

Preamble: Changing concepts in machine learning due to deep learning

Occam's razor, or the principle of parsimony, has been the guiding principle in statistical model selection. When comparing two models that provide similar predictions or descriptions of reality, we would vouch for the less complex one. This boils down to the problem of how to measure the complexity of a statistical model, and to model selection. What constitutes a model, as discussed by McCullagh (2002) in the context of statistical models, is a different discussion; here we take a machine learning algorithm to be a statistical model. Classically, the complexity of a statistical model is usually measured with the Akaike information criterion (AIC) or similar. Using such a complexity measure, one would choose the less complex model to use in practice, other things being fixed.
Figure 1: Arbitrary architecture; each node represents a layer of a given deep neural network, such as convolutions or a set of units. Süzen-Cerdà-Weber (2019)
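As a toy illustration of AIC-style parsimony, my own and not from the paper, compare a straight-line fit and an over-parameterised polynomial fit to the same noisy linear data; for Gaussian errors, AIC is $2k + n\ln(RSS/n)$ up to a constant:

# A toy illustration of AIC-based model selection on noisy linear data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, size=50)   # data generated by a line

def aic(degree):
    coeffs = np.polyfit(x, y, degree)                # least-squares polynomial fit
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)   # residual sum of squares
    k = degree + 1                                   # number of fitted parameters
    return 2 * k + len(x) * np.log(rss / len(x))     # Gaussian AIC up to a constant

print("AIC, linear fit:   ", round(aic(1), 1))
print("AIC, degree-5 fit: ", round(aic(5), 1))
# Similar fit quality, so the simpler model gets the lower (better) AIC.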


The surge in interest in complex neural network architectures, i.e., deep learning, due to their unprecedented success in certain tasks, pushes the boundaries of "standard" statistical concepts such as overfitting/overtraining and regularisation.

Overfitting/overtraining is now often used as an umbrella term to describe any unwanted performance drop of a machine learning model, Roelofs et al. (2019), and nearly anything that improves generalisation is called regularisation, Martin and Mahoney (2019).

A complexity measure for connectionist machine learning

Deep learning practitioners rely on choosing the best performing model and do not practise Occam's razor. The advent of Neural Architecture Search and of new complexity measures that take the structure of the network into account gives rise to the possibility of practising Occam's razor in deep learning. Here, we cover one very practical and simple measure called cPSE, i.e., cascading periodic spectral ergodicity. This measure takes into account the depth of the neural network and computes fluctuations of the weight structure over the entire network, Süzen-Cerdà-Weber (2019), Figure 1. It is shown that the measure correlates almost perfectly with generalisation performance, see Figure 2.


Practical usage of cPSE

Figure 2: Evolution of PSE, periodic spectral ergodicity; it is shown that the complexity measure cPSE saturates after a certain depth, Süzen-Cerdà-Weber (2019)
The cPSE measure is implemented in the Bristol Python package, starting from version 0.2.6. If a trained network is wrapped into a PyTorch model object, cPSE can be used to compare two different architectures: if two architectures give similar test performance, we would select the one with the higher cPSE value. The lower the cPSE, the more complex the model.

An example of usage requires only a couple of lines; example measurements for VGG and ResNet are given in Süzen-Cerdà-Weber (2019):

from bristol import cPSE                             # Bristol package (>= 0.2.6) implements cPSE
import torchvision.models as models

netname = 'vgg11'
pmodel = getattr(models, netname)(pretrained=True)   # pretrained PyTorch model object
(d_layers, cpse) = cPSE.cpse_measure(pmodel)         # cpse is the complexity measure
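A sketch of the comparison workflow described above, using the same cPSE.cpse_measure call; 'resnet18' is only an illustrative second architecture:

# Comparing two architectures by cPSE; 'resnet18' is only an illustrative
# second model, any torchvision architecture would do.
from bristol import cPSE
import torchvision.models as models

results = {}
for netname in ('vgg11', 'resnet18'):
    pmodel = getattr(models, netname)(pretrained=True)
    _, cpse_value = cPSE.cpse_measure(pmodel)
    results[netname] = cpse_value

# Given similar test performance, prefer the architecture with the higher
# cPSE value, i.e., the less complex one under this measure.
print(results)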




Conclusion and take-home message

Using a less complex deep neural network that gives similar performance is not practised by the deep learning community, due to the complexity of training and of designing new architectures. However, quantifying the complexity of similarly performing neural network architectures would bring the advantage of using less computing power to train and deploy such less complex models in production. Bringing Occam's razor back to modern connectionist machine learning is not only a theoretical and philosophical satisfaction; the practical advantages for the environment and for computing time are immense.

(c) Copyright 2008-2020 Mehmet Suzen (suzen at acm dot org)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License