The algorithm will see you now
Deep Learning and the upcoming automation revolution
Turn on the radio or browse social media today and you’ll likely be bombarded with a slew of terms like big data, algorithms and predictive analytics. Amidst the frenzy of buzzwords and hyperbole is a spectrum of opinions on the potential of such technologies, ranging from critical reviews of existing applications all the way to apocalyptic predictions of machines enslaving humanity.
At the time of writing, the hottest topic is an area of artificial intelligence called machine learning. This is a process by which computers take a set of data, learn something from it, and then change their behaviour accordingly. In other words, they can act without being explicitly told how.
Within the field of machine learning, the area grabbing the headlines is deep learning, a technology proving itself in a number of fields with incredible performance on challenges that were once intractable to anything but the human mind.
Learning to See
The first sign of deep learning’s power came in 2012, when the winning team in an image recognition competition[1] used it to beat the previous accuracy record by a wide margin. Since then, deep learning has been unleashed on problems from self-driving cars[2] and language translation[3] to finding signals from extra-terrestrials[4].
In healthcare, deep learning is already making inroads into several areas, including drug discovery[5], treatment recommendations and personalised medicine[6]. However, perhaps the biggest area of impact so far has been computer vision, with several studies now showing human-like (if not better) performance in spotting disease in medical scans. Dozens of papers can be found in the literature dealing with radiographs[7], CT scans[8], MRI scans[9] and cytology images[10], across applications ranging from disease detection and classification to organ segmentation.
Under the Hood
The core concept underlying deep learning’s abilities is something called a neural network. This is a mathematical model that takes a number of inputs along with a known outcome, combines them through layers of weighted connections and learns how much emphasis, or weight, to place on each input throughout the network in order to produce the correct result.
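To make the idea concrete, here is a toy sketch in R (the author’s stated language of choice) of a single ‘neuron’ combining two inputs. The particular inputs, weights and numbers are purely illustrative, not taken from the article.

```r
# A single 'neuron': a weighted sum of inputs passed through a non-linear
# activation function (here, the sigmoid). Values are purely illustrative.
inputs  <- c(age = 8, blood_pressure = 120)        # example patient features
weights <- c(age = -0.10, blood_pressure = -0.01)  # learned emphasis per input
bias    <- 2.5                                     # learned offset

weighted_sum <- sum(inputs * weights) + bias
output <- 1 / (1 + exp(-weighted_sum))             # squashes the result to (0, 1)
output  # can be read as, for example, a predicted probability of responding
```

A deep network stacks many such units in layers; ‘learning’ means adjusting all the weights and biases so that the network’s outputs match the known outcomes in the training data.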
As an example, take predicting whether or not a patient will respond to a certain type of treatment. The inputs could be things like their age, sex, blood pressure, etc. The deep learning network would be given training data made up of many (typically thousands at a minimum) examples of patient information, along with whether or not each patient responded to the treatment. The neural network would then incrementally adjust the weights for each input until it minimised the number of outcomes it classified incorrectly. Once trained, the model could then make predictions on future cases.
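As a rough illustration of that training loop, the sketch below fits a small single-hidden-layer network in R using the nnet package on simulated patient data. The variable names, simulated data and network size are assumptions made for the example, not details from the article.

```r
library(nnet)   # provides a simple single-hidden-layer neural network

set.seed(42)
n <- 1000

# Simulated training data: inputs plus a known outcome for each patient
train <- data.frame(
  age            = rnorm(n, mean = 8, sd = 3),
  blood_pressure = rnorm(n, mean = 120, sd = 15),
  sex            = factor(sample(c("M", "F"), n, replace = TRUE))
)
# Fabricated outcome: older, hypertensive patients respond less often
p_respond       <- plogis(2 - 0.1 * train$age - 0.02 * (train$blood_pressure - 120))
train$responded <- factor(ifelse(runif(n) < p_respond, "yes", "no"))

# Fit the network: the weights are adjusted iteratively to reduce the number
# of misclassified training examples
model <- nnet(responded ~ age + blood_pressure + sex, data = train,
              size = 5, decay = 0.01, maxit = 200, trace = FALSE)

# Once trained, the model can make predictions on future cases
new_patient <- data.frame(age = 12, blood_pressure = 150,
                          sex = factor("M", levels = c("F", "M")))
predict(model, new_patient, type = "class")
```

A real deep learning system would use many more examples, many more layers and far richer inputs (such as raw image pixels), but the principle of adjusting weights to minimise errors on known outcomes is the same.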
The Future
The industrial revolution took away jobs that involved manual labour. Deep learning is set to do the same for jobs involving mental labour. Exactly what this means for the future of any industry is unclear, but the best way forward has to be for as many people as possible to be up to speed with the basic workings of whatever disruptive technology comes their way.
As a consequence, professionals such as vets can be on the inside of such events, understanding, reacting to and guiding the process as much as the pace of change allows.
In short, deep learning is here to stay, and the future of healthcare will involve a level of systemic automation undreamt of today. The question is: which skills should the next generation of vets be focussing on that are least likely to be affected by such a change?
Rob Harrand (pictured) is a data scientist at Avacta Animal Health, where he works on general data analysis and the development of machine learning algorithms, and helps the wider team make better use of their data. He uses the R programming language and data management best practices to create an environment of reproducible research at the company. His background is in physics and engineering, with degrees from the universities of York and Leeds.
References
1. ImageNet Large Scale Visual Recognition Challenge (ILSVRC) http://www.image-net.org/challenges/LSVRC/
2. End-to-End Deep Learning for Self-Driving Cars https://devblogs.nvidia.com/deep-learning-self-driving-cars/
3. Google Neural Machine Translation https://en.wikipedia.org/wiki/Google_Neural_Machine_Translation
4. Artificial Intelligence Helps Find New Fast Radio Bursts https://www.seti.org/press-release/artificial-intelligence-helps-find-new-fast-radio-bursts
5. The rise of deep learning in drug discovery https://www.sciencedirect.com/science/article/pii/S1359644617303598
6. IBM Watson for Oncology https://www.ibm.com/us-en/marketplace/ibm-watson-for-oncology
7. Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks https://pubs.rsna.org/doi/10.1148/radiol.2017162326
8. Classification of CT brain images based on deep learning networks https://www.sciencedirect.com/science/article/pii/S0169260716305296
9. Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis https://www.sciencedirect.com/science/article/pii/S2213158216301899
10. DeepPap: Deep Convolutional Networks for Cervical Cell Classification https://ieeexplore.ieee.org/document/7932065
Hmmm. This was written, I take it, before the BBC Horizon programme?
Lest anyone reading this get too excited about machine learning, be aware of a couple of aspects. Currently, any “Artificial Intelligence”, be it an algorithm, deep learning or a neural network, has to be programmed, not least with an end point. Even the fab machine that won at Go against the top human was told what the end point was. This shouldn’t be a problem as long as the programmer is not of a nefarious mind. It should also be borne in mind that mathematical modelling gets less applicable the more aspects it has to model. Disease modelling is a good example: simple disease can be accurately modelled; more complex situations cannot. FMD was an example of a failed model, because incorrect assumptions were used and it was too complex for the maths. In some cases the mathematics has not yet been developed to cope with multiple complex problems – such as diagnosis. Counting pixels (e.g. for a scan) for pattern recognition isn’t really intelligence.
Thanks for your comment.
I agree that the term ‘artificial intelligence’ carries far too many connotations (thanks, perhaps, to sci-fi) that make it sound like there is genuine intelligence lurking inside your PC. Of course, there isn’t.
However, deep learning in particular is a different animal compared to traditional mathematical modelling, thanks to the volume of data and computing power available today. Systems of old (‘expert systems’) operated on hand-crafted rules, requiring immense amounts of time and expertise to create. Deep learning, in contrast, learns what a disease looks like (for example) from the features it finds when comparing, say, healthy and diseased radiographs. These features tend to be general and based not just on counting pixels, but on their shape and surroundings. Crucially, these systems are flexible, can learn quickly from mistakes, and can be built by people outside the particular domain in question. So yes, they are absolutely given an end point, but they’re not told *how* to reach that end point. That’s the ‘learning’ in deep learning. Take a look here for a recent paper on the human side … https://stanfordmlgroup.github.io/projects/chexnext/
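For readers who want to see what ‘learning features rather than rules’ looks like in practice, here is a minimal sketch using the keras package in R (assuming a TensorFlow backend is installed). The image size, layer sizes and data are placeholders invented for illustration, not details from the paper linked above.

```r
library(keras)  # R interface to Keras; requires a TensorFlow installation

# A tiny convolutional network for classifying radiographs as healthy/diseased.
# The convolutional layer learns its own image features (edges, textures,
# shapes) from the training data; no rules are hand-crafted.
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 16, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(128, 128, 1)) %>%    # 128x128 greyscale images
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")     # probability of disease

model %>% compile(optimizer = "adam",
                  loss = "binary_crossentropy",
                  metrics = "accuracy")

# With x_train (an array of image pixel values) and y_train (0/1 labels),
# training would look like:
# model %>% fit(x_train, y_train, epochs = 10, batch_size = 32,
#               validation_split = 0.2)
```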