Geoffrey E. Hinton, the Godfather of Machine Learning, best known for pioneering advances in deep learning and backpropagation, recently shook up the world of Machine Learning by publishing findings on a new approach to the image classification problem.

Hinton is challenging the world he helped create to rethink its approach entirely. The limitations of CNNs (Convolutional Neural Networks), the primary tool in image classification, have been understood for years: CNNs classify images reliably only when the objects in them appear in a consistent position.


CNNs run into trouble when classifying images in which objects are positioned differently, whether upside down or at an angle. Feature detection also loses accuracy when an image is shrunk or enlarged.
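A toy sketch of why this happens: the pooling operations common in CNNs keep a feature's strongest response but throw away where (and in what orientation) it occurred. This is plain NumPy, not any particular CNN library, and is deliberately simplified.

```python
import numpy as np

# Two 4x4 "feature maps": the same activation, but at opposite corners,
# as if the same object appeared in two very different positions.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[3, 3] = 1.0

def global_max_pool(x):
    # Max pooling keeps the strongest response but discards its location.
    return x.max()

# Both maps pool to the same value, so all positional information is lost.
print(global_max_pool(a), global_max_pool(b))  # 1.0 1.0
```

Because pooling reports only "the feature fired somewhere," the spatial relationships between features (a nose above a mouth, say) are degraded layer by layer, which is exactly the weakness Hinton is targeting.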

Capsule Networks to the Rescue

Capsule Networks group neurons into "capsules" whose vector outputs encode both the probability that a feature is present and its pose (orientation, scale, position). Layers of capsules pass information to one another via routing-by-agreement, so the network can recognize upside down or angled versions of an object without needing those variations explicitly added to the training data.
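One concrete ingredient from the CapsNet paper is the "squash" nonlinearity, which shrinks a capsule's output vector to a length below 1 while preserving its direction: the length acts as the probability that an entity exists, and the direction encodes its pose. This is a minimal NumPy sketch of that one function only; the full network also needs routing-by-agreement, which is omitted here.

```python
import numpy as np

def squash(s, eps=1e-8):
    # Squashing nonlinearity: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    # Short vectors shrink toward length 0, long vectors toward length 1,
    # and the direction (the encoded pose) is left unchanged.
    norm_sq = np.sum(s ** 2)
    norm = np.sqrt(norm_sq) + eps  # eps avoids division by zero
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

v = squash(np.array([3.0, 4.0]))  # input vector of length 5
print(np.linalg.norm(v))          # squashed length: 25/26, about 0.96
```

The key contrast with a pooled CNN activation is that the output here is still a vector: a rotated input produces a rotated capsule output rather than the same scalar.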

Capsule Networks, or CapsNets, have already performed better than CNNs on the MNIST dataset, a standard benchmark for accuracy in image recognition.

This breakthrough inches computer vision closer to human vision, a component of true Artificial Intelligence (strong AI) rather than just another narrow Machine Learning use case.

The implications are hugely important for incumbent insurers and startups that use computer vision to classify images.

For example, using CapsNet, Shift Technology, which focuses on claims handling, could improve its ability to classify damage to cars and detect fraud more accurately, especially since photos of claim events aren't always taken from a perfect angle.


It’s very early days for Capsule Networks. CapsNet has shown promise on the MNIST dataset, but it still needs to be validated on larger datasets with more classes. CapsNet is also much more computationally expensive than existing CNN libraries, and will likely remain so for quite some time.