Machine learning is changing the world; there is no doubt about that. The NHS may be famous for being backward in its tech, but it's only a matter of time before it adopts machine learning. The reception machine learning gets from clinicians is mixed, and there is a palpable fear amongst some of them. From time to time, news articles tell us how machine learning is advancing diagnostics. However, I find a lot of this fear seems to stem from a misunderstanding of how machine learning works, where it can be used, and what it can be used for. Here are four concepts every clinician should have a basic understanding of. Not only should you not see machine learning as a threat, you should see its uses and possibilities.
How it works
Misunderstanding leads to fear. I've spoken to many clinicians who hold strong opinions on machine learning without knowing anything about it. The same goes for tech in general. Multiple clinicians shake their heads, muttering that the computer scientists just don't understand medicine. I'd agree with them; I have met my fair share of tech heads who just don't understand the realities of practicing medicine. However, it goes both ways. Tech isn't easy either, and numerous clinicians have proposed some crazy ideas because they just don't understand tech. So how does machine learning work? The premise is actually fairly simple; it's the execution that's hard. You start off with an equation that has a number of variables, and each variable has a weight. The smaller the weight, the less effect that variable has on the outcome. Let's say we are measuring three variables. We guess the weights and come up with a prediction. Of course, it's going to be wrong. The machine learning algorithm then tests the prediction against data with known outcomes (the training data), alters the weights by a small step, and repeats the process. It stops when there is little to no improvement in the accuracy.
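The guess-test-nudge loop described above can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not any particular library's implementation: three variables, a weight for each, and a small step taken each round to reduce the error.

```python
import numpy as np

# Toy training data: three input variables per case, one known outcome each.
# (Illustrative numbers only -- not real clinical data.)
X = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 1.5],
              [2.0, 0.5, 1.0],
              [1.5, 1.5, 0.5]])
y = np.array([3.5, 3.0, 3.5, 3.5])  # the true outcomes

weights = np.zeros(3)  # start with a guess (here: all zeros)

learning_rate = 0.1    # the size of each "small step"
for step in range(3000):
    predictions = X @ weights            # predict with the current weights
    errors = predictions - y             # how wrong are we?
    gradient = X.T @ errors / len(y)     # direction that reduces the error
    weights -= learning_rate * gradient  # nudge the weights a small step

print(np.round(weights, 3))  # the learned effect of each variable
```

After enough steps the weights settle, and each one quantifies how strongly its variable drives the outcome, which is exactly the by-product discussed in the next paragraph.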
To understand how this works in depth, a fair amount of math is needed, and you're going to have to be comfortable with datasets that have more than three dimensions. Hundreds of variables can be processed at the same time, and techniques such as neural networks can get fairly complicated. The end result, however, is an equation that not only predicts the outcome from a load of inputs but also quantifies the effect each variable has on that outcome. This is why machine learning algorithms are pushing medical academia forward, finding correlations between steroid use and heart attacks, and quantifying the effect of mental health on heart attacks. I know my fair share of medical academics who simply look for correlations themselves and then speculate on them. For these traditional medical academics, the only thing protecting them is the fact that the NHS is so backward tech-wise. There are medical academics who are ahead of the curve. Here I interview a medical academic at Imperial College London who is using computational approaches to unearth the secrets of medicine. He is breaking new ground with ease because his approach is simply superior to traditional academic medicine [link]. It's going to get exciting for medics who embrace the computational side of medicine.
It doesn’t have to think like you
Another fallacy arises when clinicians compare their own way of thinking to machine learning. They point out that correlation doesn't equate to causation. The fallacy is the assumption that their way of thinking is the only way to solve a problem. This closed-minded thinking is similar to how people think about aliens: they often imagine them in human form. Yet if aliens do exist, what are the chances they will look like us? Fairly unlikely. They will most probably be something we haven't even conceived of. History gives us an example of this: the journey to human flight. Bird wings are more intricate and complex than the wings we make. The initial steps towards flight were pitiful; a bird could easily outmatch the attempts. What's more, we look back at the attempts to mimic birds' wings and flapping and ask, what were they thinking? The successful path to flight didn't mimic birds at all. Instead, we went for simple wings, compensated by strengthening them, and maxed out the propulsion power with jet engines. Now an airplane can outstrip a bird in length of flight, speed, capacity and more. Like human flight, the human way of thinking and solving a problem isn't the only way. A machine learning algorithm is simpler than a human brain, just as the plane wing is simpler than the bird wing. However, we can compensate by correlating thousands of variables in the same network at lightning-fast computational speeds, with flawless memory and relentless 24/7 number crunching and refining.
Used for more than just predicting
When people think of machine learning they instantly jump to data analytics, predicting market trends, and calculating the spread of disease. However, this isn't all it can be used for. Think back to the first section: machine learning is very good at finding correlations and quantifying them. It has also been used in robotics, stabilization for drones, self-driving cars and much more. For instance, my MSc project is coding a 3D mapping platform for surgical robotics. Below is sensor data from 25 flashing lights in a grid, with the sensor viewing them at an angle.
25 clusters appeared, because of light scatter and error in the sensor. However, for the 3D mapping I needed single points of interest, so this became a statistical problem. In my code I included a machine learning algorithm that identified the clusters and calculated the center of each one; these are depicted in red. My surgical robotics project would be a lot harder without machine learning. I hear some clinicians say that this machine learning thing is overhyped and that we are yet to see the benefits. They seem to be unaware that machine learning has already revolutionized whole areas. It's more a case of where its limit is.
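To give a flavour of that clustering step, here is a minimal sketch in Python with NumPy. The data here is synthetic stand-in data, not the actual sensor output from my project, and the loop is a bare-bones k-means: assign each reading to its nearest center, then move each center to the mean of its assigned readings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate noisy sensor readings of a 5 x 5 grid of flashing lights:
# each light produces a scatter of 30 points around its true position.
# (Hypothetical stand-in data for illustration.)
true_centers = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
points = np.vstack([c + rng.normal(0, 0.05, size=(30, 2)) for c in true_centers])

# A minimal k-means loop. Start from a rough guess near each light.
centers = true_centers + rng.normal(0, 0.1, size=true_centers.shape)
for _ in range(20):
    # Distance from every point to every center ...
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)  # ... assign each point to its nearest center
    # Move each center to the mean of the points assigned to it.
    centers = np.array([points[labels == k].mean(axis=0) for k in range(25)])

print(np.round(centers, 2))  # estimated center of each light cluster
```

Even with the scatter, the recovered centers land close to the true grid positions, which is the "points of interest" output the 3D mapping needed.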
It’s becoming easier to implement
One of the things I love about math, physics, engineering and computer science is that they strive to build and simplify. You can really sum up an academic department by the way it treats information: if a field excessively uses long words to sound impressive and doesn't spend time simplifying its advancements, it is going to get overtaken by another department. Tech, in general, is constantly automating itself, and the people in tech who understand this excel because of it. It used to be really hard to get your website onto a server, but techies worked around the clock to simplify it. Now, because of this, developers spend their time coding dynamic websites with a lot more functionality. The result: top websites today are way richer than the top websites of the 1990s. Basic machine learning algorithms have already been compiled into modules that can be imported and implemented with a few lines of code. The result: more people can now use machine learning to advance their solutions, and there are groups now using multiple different machine learning algorithms to solve real-world problems. These machine learning modules don't even cost anything; you can download them for free [link]. Tech has a great success story of simplifying so future techies can conquer even more complex problems.
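To show just how few lines "a few lines of code" really is, here is what training a classifier looks like with scikit-learn, one such free module (assuming it is installed, e.g. via pip; the two-variable data here is made up for the example):

```python
# scikit-learn packages the whole guess-test-nudge loop behind one call.
from sklearn.linear_model import LogisticRegression

# Toy data: two made-up variables per case, with a binary outcome.
X = [[0.2, 0.1], [0.4, 0.3], [2.1, 1.8], [2.3, 2.0]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)  # all the weight-fitting happens inside this call

print(model.predict([[0.3, 0.2]]))  # a case near the first group
print(model.predict([[2.2, 1.9]]))  # a case near the second group
```

Three meaningful lines (import, fit, predict) replace everything the first section described by hand, which is exactly why more people can now fold machine learning into their own solutions.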
I help clinicians get to grips with coding and tech, and I also code for a financial tech firm.