Could deep learning transform the medical sector?

Posted by: Robert Stokes
01/12/2016


The concept of deep learning is decades old. Its roots go back to the birth of artificial intelligence in 1956, but it has only now arrived, thanks to the wider availability of faster, more powerful computers and to the near-limitless supply of data such as images, video and music. To understand where deep learning fits, picture three concentric circles: AI, machine learning and deep learning. AI is the overarching idea; machine learning is the ability to learn without being explicitly programmed, with the machine learning and growing as it is exposed to new data; and deep learning is a more complex subset of machine learning. Essentially, deep learning is composed of algorithms that allow software to train itself to perform a task, such as voice and image recognition, by exposing multilayered neural networks to vast amounts of data.
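To make that distinction concrete, here is a toy sketch in Python of a machine "learning without being explicitly programmed": rather than hand-coding a decision rule, we let a simple perceptron work one out from labelled examples. The data and the hidden rule are made up purely for illustration.

```python
# Learning from data instead of explicit rules: a perceptron adjusts its
# weights from labelled examples. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 examples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # the hidden rule to be discovered

w = np.zeros(2)
b = 0.0
for _ in range(20):                         # repeated exposure to the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi               # nudge weights toward correct answers
        b += (yi - pred)

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"learned accuracy: {accuracy:.2f}")
```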

The technology industry is very excited by this next step in AI because deep learning can go further than earlier machine learning techniques, finding patterns within a much wider range of data such as video, voice, music, images and sensor readings. Deep learning is what allows your smartphone to recognise and respond to you with a basic level of intelligence.

Deep learning, sometimes referred to as deep neural networks, attempts to mimic the activity of the layers of neurons in the neocortex of the human brain. Computer scientist Geoffrey Hinton developed a way of training these networks one layer at a time. The first layer learns primitive features of an image or sound, such as the edges in a picture or the smallest units of noise. Once that first layer recognises those features, the next layer is trained to recognise more complex combinations of them, and so on until multiple layers have built up an accurate recognition of the sound or object. Thanks to improvements in the underlying mathematics and to more powerful computers, we can now model many more layers of virtual neurons than ever before. These extra layers allow for more precise training, and with them have come huge advances in speech and image recognition.
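As a rough illustration of that layering (this is not Hinton's actual training method, just a toy forward pass with random weights), each layer below feeds on the output of the one before it, so later layers can respond to increasingly complex combinations of the simple features detected earlier:

```python
# A toy stack of layers: each one transforms the previous layer's output.
# Shapes and weights are arbitrary; this is a sketch, not a trained model.
import numpy as np

rng = np.random.default_rng(1)

def layer(x, n_out):
    """One layer of 'virtual neurons': a linear map plus a nonlinearity."""
    W = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ W)            # ReLU keeps only active features

image = rng.normal(size=(1, 64 * 64))        # a fake flattened 64x64 image
h1 = layer(image, 256)  # layer 1: primitive features (edge-like patterns)
h2 = layer(h1, 128)     # layer 2: combinations of layer-1 features
h3 = layer(h2, 64)      # layer 3: still more abstract features
print(h1.shape, h2.shape, h3.shape)          # (1, 256) (1, 128) (1, 64)
```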

This improved image recognition is stirring exciting developments within the medical sector. So much of what doctors do requires some form of image recognition, from radiology and dermatology to ophthalmology and many other “ologies.” The technology has paved the way for start-ups such as Enlitic, which is using deep learning to analyse radiographs as well as CT and MRI scans. Enlitic’s technology has outperformed four radiologists in detecting and classifying lung nodules as benign or malignant, although it is worth noting that this work has not yet been FDA approved or peer reviewed. Atomwise, a San Francisco start-up, is using deep learning to accelerate drug discovery: its neural networks examine 3D models of thousands of molecules that may serve as drug candidates and predict their suitability for blocking the mechanism of a pathogen. A third start-up, Freenome, aims to use deep learning to diagnose cancer from blood samples. Its system examines the DNA fragments shed into the bloodstream by cells as they die off, and deep learning is then used to find correlations between this cell-free DNA and certain cancers. Gabriel Otte, who created Freenome, has said “we’re seeing novel signatures that haven’t even been characterised by cancer biologists yet.” Otte’s technology correctly classified five blind samples sent by an investor, two normal and three cancerous. Needless to say, Otte got the investment.
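To give a flavour of what such a classifier might look like under the hood, here is a minimal convolutional network in PyTorch that outputs a benign/malignant probability for a scan. It is emphatically not Enlitic’s (proprietary) system; the architecture, sizes and the fake input are all invented for illustration.

```python
# A generic two-class image classifier on a fake single-channel "scan".
# Untrained and purely illustrative: a real system would be trained on
# large volumes of labelled medical images.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learn local image features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine into larger patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # two classes: benign, malignant
)

scan = torch.randn(1, 1, 128, 128)                # a fake 128x128 scan
logits = model(scan)
print(logits.softmax(dim=1))                      # class probabilities
```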

Quite simply, a computer can process millions of images in a short space of time, whereas a radiologist may see only thousands over a whole career, so it is not unreasonable to imagine that this kind of work could one day be done better by computers. Once standardised, this technology could change the face of healthcare and benefit millions of patients.

AI is finally getting intelligent; however, we are only scratching the surface. There are still many limitations to overcome, the main one currently being “unsupervised learning”, in which machines can find structure in unlabelled data and therefore teach themselves about the world independently of humans. At present, almost every deep learning product uses “supervised learning”, in which the neural net is trained by humans on labelled data: the labels essentially tell the machine what each image is in the first place, and it then identifies other, similar images by itself.
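The difference is easy to see in miniature. In the sketch below (synthetic data throughout), the supervised learner is handed human-made labels, while the unsupervised one has to discover the two groups in the data on its own:

```python
# Supervised vs unsupervised learning, in miniature.
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])

# --- supervised: labels supplied by humans ---
y = np.array([0] * 50 + [1] * 50)                 # the human-made labels
threshold = (x[y == 0].mean() + x[y == 1].mean()) / 2   # learn a boundary

# --- unsupervised: no labels; find structure alone (1-D k-means) ---
centers = np.array([x.min(), x.max()])
for _ in range(10):
    assign = np.abs(x[:, None] - centers).argmin(axis=1)  # nearest centre
    centers = np.array([x[assign == k].mean() for k in (0, 1)])

print(f"supervised boundary: {threshold:.2f}, "
      f"discovered centres: {centers.round(2)}")
```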

Machines are getting better at recognising objects and translating speech; however, developing beyond this will require many more conceptual and software breakthroughs, not to mention further advances in processing power. Moreover, deep learning is not able to reason like a human, so I don’t think doctors need to fear for their jobs just yet.


