4 simple tricks to beat machine learning and AI

Unless you’ve been living under a rock, you will have heard from TV, film, books, or the news that machine learning is making major advances and will take over many jobs in the not-so-distant future. The conversations I have with people who understand machine learning vary; people who don’t, however, mainly fall into two camps. One camp smiles, accepts their fate, and concludes that it’s a scary future. The other camp strongly disagrees: they state that a computer cannot do what they do, usually while shaking their head and interrupting you before you can finish a sentence. In reality, like most things, it’s somewhere in the middle, but the paradox is that the ones who smile and accept it are usually the ones least likely to be replaced by machine learning. The thing is, machine learning doesn’t pose a risk because it’s so good; it poses a risk because some people are so bad at learning. Here are 4 simple yet demanding steps you must take if you don’t want to be outsmarted by AI:

Reduce noise

In the era of big data, noise is everywhere. When I’m training a machine learning algorithm, I do not give it every bit of data surrounding the problem. Right now (in my job anyway), selecting and cleaning the data is still a very human process. If you feed in unnecessary data, the algorithm will be less accurate, because it is trying to account for data that doesn’t influence the outcome in any way. The members of the second camp hear this and jump up and down proclaiming that this is why computers cannot replace them. However, some humans are even worse at this than computers. Politics is a great example. We now have access to big data and outcomes, and our understanding of economics is growing. We even have near-unanimous agreement in the field of economics on some points, such as rent control increasing housing shortages. With all this data, you’d think that people would be filtering out rubbish and focusing only on the outcome that benefits the most, right? Sadly no; many university grads I’ve met are obsessed with the data they think they’ve obtained from mind reading. They will make a half-baked assessment of your thoughts and motives even though they know very little about your personal life, conclude that you’re saying such things because of X reason, and ignore the rest of the data. Nine times out of ten they’ve been wildly wrong, which isn’t shocking because they cannot read minds, but you cannot disprove them. We can borrow Karl Popper’s drowning example to illustrate how useless this is in predicting or understanding someone’s behavior:
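To make this concrete, here is a minimal sketch using synthetic, made-up data: the same scikit-learn logistic regression scored once with a pile of pure-noise columns included and once with only the informative columns kept. On data like this, the noisy version typically scores a little lower.

```python
# Toy illustration with synthetic data: the same model scored with and
# without a pile of irrelevant "noise" features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 5 informative features plus 20 pure-noise features.
# shuffle=False keeps the informative columns first.
X, y = make_classification(n_samples=500, n_features=25, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

model = LogisticRegression(max_iter=1000)

noisy_score = cross_val_score(model, X, y, cv=5).mean()          # all 25 columns
clean_score = cross_val_score(model, X[:, :5], y, cv=5).mean()   # informative 5 only

print(f"accuracy, noise included:     {noisy_score:.3f}")
print(f"accuracy, noise filtered out: {clean_score:.3f}")
```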

A man sees a boy drowning and does nothing to save him; the boy dies.

A man sees a boy drowning and risks his own life by jumping in to save him; the boy lives.

Completely opposite actions with opposite outcomes. However, we can twist our mind-reading conclusions to fit either one. We could say that the man suffers from an inferiority complex which is why he couldn’t bring himself to save the boy. He just didn’t believe in himself. On the other hand, we could say that the man’s inferiority complex was so strong that it compelled him to foolishly risk his life to save the boy so he could prove to himself and others that he was worth something. In short, we can make up all kinds of bullshit as to why he did what he did.

Thoughts, feelings, and motives cannot be quantified. The other fact is that feelings and intentions have no effect on the outcome. Corbyn proposes a rent cap. Economists across the board, and even the charity Shelter, say that this is a bad idea, and it has been shown to make housing shortages worse all over the world. Is Corbyn a good man? Does he care? Is he misinformed, or is he simply trying to con people who don’t understand economics for cheap votes? Who knows. How will you ever find out? And if you do the impossible and manage to find out, it doesn’t affect the outcome. This doesn’t mean that you should ignore emotions all the time; I hear they are essential in personal relationships. However, with the explosion of data available via the internet, the guy who wastes his time obsessing over another person’s motives will be overtaken by machine learning and by people who don’t. With this abundance of data, power comes from knowing what data to ignore.


Don’t be one-dimensional

Machine learning, like us, applies weights to certain variables to predict outcomes. For instance, if you’re trying to balance your budget, shopping at the grocery store will have a different effect on your finances than eating out every night. You wouldn’t really trust the guy who says each act has the same weighting. Machine learning algorithms are successful because they take a range of variables and then adjust the weights to get the best predictions against a dataset. If you give a machine learning algorithm one variable, it will be terrible at predicting the outcome unless that variable is very highly correlated with it. Most humans can assess multiple weights. However, with the internet and echo chambers, a growing number of humans are rendering themselves inferior to machine learning because they have overweighted a single variable. Racism and sexism are great demonstrations of this. Some people obsess over one variable and force it into everything. For instance, there was a female medical student at Oxford University who was spared jail and a criminal record after stabbing a guy whilst high on cocaine. The judge concluded that she had a promising career that she didn’t deserve to lose. Some on my Facebook concluded that it was down to white supremacy and white privilege. Others concluded that it was down to her being female, as women get lesser sentences than men for the same crimes. And others postulated that it was because she was rich; if she had been poor, it would have been a different story. The rational observer will realize that all of these factors could affect the outcome to some degree, and that the finer details of the case and what was said in court would also have played a role.

Being one-dimensional results in you making terrible models where you have to be aggressive to make your model fit. For instance, I spoke to a black student at an elite university who was being one-dimensional. I made the point that there was violence on both sides at Charlottesville, and that although we should hold the Nazis there to account, we should not ignore the violence on the left. His response was to state how angry he was, and how dare I. He asked me to justify the Nazis’ actions three times in our conversation, even though I pointed out that I thought they were horrible people. You’re never going to have a productive conversation with someone who is literally typing that he is extremely angry, and there was no point falling out over an impossible battle, so after apologizing I removed myself from the conversation. I later messaged him to ask how he was feeling; he replied that he still feels angry when he thinks about it and that he can’t really say much more because it would just come off as ad hominem attacks. The last time someone treated me like this was a white nationalist who kept trying to get me to defend a cop killing by a Black Lives Matter supporter, and the fact that one of the movement’s founders, whom they still support, is a convicted cop killer. He also could barely contain his anger and constantly focused on my character, misreading my points. They’re different sides of the same coin: they’re simply being one-dimensional with their data collection. Months later, the left-wing group at Charlottesville was classified as a terror group by the Department of Homeland Security [link]. It was only a matter of time; they are the violent thugs of the left, just as the Nazis are the violent thugs of the right.

It’s very rare that one variable is solely responsible for an outcome, and when it is, there is a lot of evidence to back it up. A one-dimensional statistic about an outcome definitely isn’t proof. If you find yourself forcing a narrative and a single variable into every analysis, change, or get ready to be outperformed by a simple logistic regression algorithm.
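If you want to see that comparison in code, here is a rough sketch on synthetic data (the variables are invented): a logistic regression given a single variable versus the same model given all of them. Unless that single variable happens to be highly correlated with the outcome, the multi-variable model will usually win.

```python
# Toy comparison on synthetic data: one variable vs. several.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# An outcome driven by six variables, none of them dominant on its own.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=6,
                           n_redundant=0, random_state=1)

model = LogisticRegression(max_iter=1000)
one_dim = cross_val_score(model, X[:, :1], y, cv=5).mean()   # a single variable
multi = cross_val_score(model, X, y, cv=5).mean()            # all six variables

print(f"one variable only: {one_dim:.3f}")
print(f"all six variables: {multi:.3f}")
```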

Change your variable weights

Another reason why machine learning is so successful is that it assesses its error, slightly adjusts the weights, and then reassesses the error. Basically, learning. Now, you’d think that people do this. Everyone does to some extent, but when it comes to our analytical models of the world, a growing number of adults not only fail to do this, they encourage others to follow suit and defend it under the guise of fighting bigotry. We can illustrate this with the following:

Let’s say that I’m a boy at school and I hold deeply sexist beliefs. I believe that girls and boys are not individuals and that there is a clear difference in intelligence between them: all girls are smart and all boys are stupid. I hold these sexist beliefs right up to my final high school exams, because anyone challenging me is met with aggression and name-calling, so they quickly give up trying to converse with me. After the exams it turns out that I did fairly well; some of the boys passed and are going to university, as did some of the girls, and some boys and girls failed. Now I have strong evidence that my model is not correct and can be improved. I have two choices. I could take the information on board, refactor my model, and get ready to test it again when I get more data. This is known as growing up. Or I could choose to ignore the data and retain my bigoted model, simply by changing definitions: I could no longer identify as a man. Now I must stress, this doesn’t mean that all trans people are bigots. People are individuals. However, there is a growing number of people flippantly stating that they don’t identify as X, and nobody dares question it for fear of having their character attacked. We should be asking why. Some will have completely valid reasons for not identifying as X, whilst others exploit it in order to avoid changing their model, holding onto simplistic or bigoted beliefs.

Changing the weights is one of the key reasons why machine learning is gaining ground; sadly, a lot of humans are not doing the same. A good method I use is measuring predictive outcomes. Using your current knowledge of politics and economics, predict a stock price or some other outcome that you are not emotionally attached to. Write down your assumptions and how they will affect the outcome. Later on, see if your prediction was correct. If you got it wrong, go through your assumptions, read up on them, and see which ones were wrong. If you improve, you may even get confident enough to put your money where your mouth is. For me it was Bitcoin. I was sure that blockchain combined with currency would make a huge impact and be liberating to many, greatly improving on standard currency. I bought it at £250; at the time of writing it’s at £4,369, so every £1,000 I put in returned roughly £17,476 in a year and a half. You’ll find that many one-dimensional analysts don’t make predictions; they just stand on the sidelines criticising and acting like the world is falling apart because their simplistic models cannot predict or make sense of it.
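For the curious, the “assess the error, adjust the weights, reassess” loop described at the start of this section looks roughly like this in plain NumPy; the data and the three weights are invented purely for illustration.

```python
# A bare-bones "predict, assess the error, adjust the weights, repeat" loop.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three invented input variables
true_w = np.array([1.5, -2.0, 0.5])                # the weights we hope to recover
y = X @ true_w + rng.normal(scale=0.1, size=200)   # outcome plus a little noise

w = np.zeros(3)        # start off knowing nothing
lr = 0.1               # how big each adjustment is

for step in range(200):
    pred = X @ w                    # predict with the current weights
    error = pred - y                # assess the error
    grad = X.T @ error / len(y)     # direction that would reduce the error
    w -= lr * grad                  # slightly adjust the weights

print("learned weights:", np.round(w, 2))   # should land close to [1.5, -2.0, 0.5]
```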

Stop dehumanising

Bizarrely, this is one of the main criticisms of artificial intelligence: people confidently say that it dehumanizes people and problems. However, their analysis falls short, as people are just as bad, if not worse. Why worse? Because they selectively dehumanize. A great example is taxing the super-rich at a higher rate via the ISF tax in France. The idea feels good; however, the super-rich left the country as other countries happily accepted them, resulting in an estimated 200 billion loss with an annual shortfall of 7 billion, and GDP fell by 0.2% every year. Lower and middle-class families ended up paying more tax to make up for the deficit [link]. Yet politicians could abuse the one-dimensional statistic that wealth inequality had fallen: because the rich left, there was technically a smaller gap, even though the average family was worse off. Now France is bidding for London bankers after Brexit [link]; I guess France has concluded that something is better than nothing. You see, taxing the super-rich sounds like a good idea if you dehumanize the rich, assuming they will not change anything and will simply pay more. However, the rich are also human and will change their actions and plans when the environment changes.

Another great example is rent caps. When New York imposed them, the result was a huge housing shortage. Anyone who could sell to retail buyers did; honest landlords left, leaving a greater share of crooks as landlords, since the crooks made their profits by not following the law. Owners burnt down apartments to cash in on insurance because the profit in renting was reduced, and people who were already renting had no incentive to share or downsize because their rent was capped, despite the growing shortage of apartments. There was also no incentive to build new apartments because the profit was capped. The New York case is so well documented that you’ll find it in most introductory economics textbooks. Yet to this day, modern politicians like Corbyn propose rent caps. It seems like a good idea if you dehumanize landlords and assume that they are not going to change their actions and plans when their situation changes.

You also see this a lot in online debates. Sometimes one person goes from 0 to 60 with emotionally charged anger, and the other person, who wasn’t expecting that, apologizes and removes themselves from the conversation. The angry side sees this as a bad thing and gets even angrier, and sometimes the person who left is called names while the other continues to talk about their feelings. This makes sense if the person leaving is not human; however, they are, and they also have feelings and the right to leave a situation in which they feel attacked or uncomfortable. In conclusion, either dehumanize the whole system or don’t dehumanize it at all. Dehumanizing only part of the system not only creates fairly destructive outcomes, it also leads to wildly inaccurate and distorted predictions and perceptions. Really, you shouldn’t dehumanize at all; this is your biggest ace card against machine learning and artificial intelligence. Don’t throw it away.

 
