In January 1950, The Daily Telegraph published an article with the headline “ACE MAY BE FASTEST BRAIN”1. The article described work being done at the National Physical Laboratory in Teddington, where researchers were building an experimental prototype computer. Here’s a quote from the reporter’s piece:

“An electronic ‘brain’, which is expected to outshine all its rivals by its speed in working out mathematical problems, is being developed at the National Physical Laboratory. It is known as ‘Ace’ (automatic computing engine).”

Six and a half decades later, The Independent covered Google DeepMind’s upcoming match between their software, AlphaGo, and Go World Champion Lee Sedol2. DeepMind engineer David Silver was quoted describing AlphaGo’s process:

The search process itself is not based on brute force but on something akin to [human] imagination. In the game of Go we need this incredibly complex intuitive machinery that we only previously thought to be possible in the human brain.

What’s remarkable about these two pieces, a lifetime apart, is their desire to paint these machines as humanlike. The Telegraph article about ACE describes the “43 ‘brain cells’, 6ft high”, and how the machine will have a “memory” and be “able to ‘remember’ 256 10-digit numbers”, while The Independent quotes Silver talking about concepts that we barely understand in humans, like imagination, and attributing them to a supercomputer running a few clever pieces of code.

It probably won’t surprise you to learn that the ACE, which took four years to build, was both the size of a truck and less powerful than a modern-day calculator. It seems adorable now to describe this thing as an “electronic brain” - a device that required considerable human effort to operate and could really just store numbers and perform a few operations on them. Yet in 2016 we find ourselves speaking even more audaciously, describing a computer built to play Go as possessing intuition, imagination and creativity. What’s more, these terms like “brain” and “imagination” aren’t elaborate poetry from the journalist, but in both cases come from the engineers working on the machines themselves.

This article is about what AI is, but it’s also about why learning what AI is matters in the first place. It’s about how AI is marketed as a commodity today, and what impact that has on people whose work and social lives are touched and shaped by AI on a daily basis. And it’s about how the future of resistance against AI-backed exploitation may not just be technological in nature, but social and cultural.

Good, Better, Best

At its heart, artificial intelligence is not about intelligence at all, but about search and optimisation, two things which are often completely disconnected from what most people think of as intelligent. Let’s start with search: most AI problems are described in clear mathematical terms as a search for a solution. In chess, this solution might be a sequence of moves (or just a single move - your next one). In engineering a bridge this might be a set of numbers that describe the dimensions of various components. In medical classification this might be a flowchart of questions and the diagnoses they lead to. Different AI techniques will search for a solution differently, and we’ll look at a few examples of this in a minute, but the important bit is that in most cases we start building an AI by very clearly describing the shape of what we’re looking for, whether that’s chess moves, engineering formulae, or medical diagnoses.

The second part is optimisation. How does an AI know when it’s found a good solution, or how to compare one solution to another? Most AI systems figure this out by using what we call an objective function. An objective function is a mathematical description of the goal we’re using this AI system to search for - kind of like someone playing Hot and Cold. The AI is searching through billions of solutions, and every time it picks one up and looks at it, the objective function shouts out “You’re getting warmer!” or “Colder!”. In chess, our ultimate goal is checkmate. In our bridge example our goal might be to balance the physical equations that describe whether our bridge falls down or not. In a medical classification system, our goal might be to correctly diagnose a thousand patients, using historical data where we already know what illnesses they had.
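
To make the Hot and Cold picture concrete, here’s a toy sketch in Python. The objective function and the problem are invented purely for illustration - real systems search far more cleverly than pure guessing - but the division of labour is the same: one part of the code proposes solutions, and the objective function scores them.

```python
import random

# A made-up objective: the "goal" is a hidden target value, and the score
# says how close a candidate solution gets to it. Real objective functions
# are far more elaborate, but they play exactly this role: turn a candidate
# into a number that says "warmer" or "colder".
TARGET = 42.0

def objective(candidate):
    return -abs(candidate - TARGET)  # higher means "warmer"

def random_search(attempts=100000):
    best, best_score = None, float("-inf")
    for _ in range(attempts):
        candidate = random.uniform(-1000, 1000)  # propose a solution at random
        score = objective(candidate)             # warmer or colder?
        if score > best_score:                   # keep the warmest so far
            best, best_score = candidate, score
    return best

print(random_search())  # prints something close to 42.0
```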

Optimisation is the secret at the heart of artificial intelligence. Historically, every area in which AI has made serious strides has had a clear way of defining its goals, and often what progress towards those goals means as well. The goal state in chess is not up for debate: you are either in checkmate, or you aren’t, and no-one can disagree over that fact. Chess even has pretty good ways to measure who is winning, by assigning points to each piece still on the board. This makes it a perfect problem for AI, because every decision the system considers can be evaluated perfectly. Sure, different techniques might weigh the options differently - is it better to maximise the number of ways to win, or to minimise the number of ways to lose, for instance? But ultimately, chess is mathematically precise. It can be optimised.
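
Those piece values really are just numbers to be added up. Here’s a minimal sketch using the conventional textbook values - real chess engines use far more elaborate evaluation functions, but the principle is the same:

```python
# Conventional textbook piece values; real engines refine these heavily.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(white_pieces, black_pieces):
    """Positive means white is ahead on material, negative means black is."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# White is a knight down; black is a rook and a pawn down:
print(material_score("PPPPPPPPNBBRRQ", "PPPPPPPNNBBRQ"))  # 3
```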

We can see this effect everywhere: engineering problems like optimising the design of a bridge have precise formulae that describe how good or bad a bridge design is. Medical diagnoses are either correct or not; symptoms are either present or not; even pain is reduced to a ten-point scale that a machine can compare, organise, and sort. But this is where optimisation begins to get blurry. What happens when something can’t be measured? What happens when we can’t express mathematically what we’re looking for?

Probably Approximately Correct

When artificial intelligence began as a field in the 1950s, a lot of its applications fell into two categories: wild ambitions, like being able to freely converse with a computer; and tightly defined problems, like proving mathematical theorems. But as the decades passed, computers became something that filled not just laboratories but offices, schools and homes. Computers were no longer just remembering ten-digit numbers and crunching aeronautical engineering problems; they were being used to play games, to converse with faraway friends, to manage labour, to organise governments.

Within computers, nothing is vague. Everything is eventually written down as a zero or a one, even if those zeroes and ones might build up into something more complex. When we look at a picture of the Mona Lisa on an online image search we don’t see the image as it was painted - we see a sequence of pixels, each with a precise numerical description of its colour. We know that this is not how the Mona Lisa exists; it isn’t an accurate representation of the pigments and oils and canvas that make the painting up. But we also accept that it’s close enough - what we lose in realism is minor compared to what we gain by being able to represent it in a machine.
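
To the machine, that digital Mona Lisa is nothing but a grid of numbers. Here’s a tiny, made-up example of what an image looks like from the inside:

```python
# A made-up two-by-two "image": each pixel is three numbers from 0 to 255
# describing how much red, green and blue light it contains.
image = [
    [(227, 198, 142), (186, 154, 101)],  # top row of pixels
    [(94, 72, 48), (41, 30, 21)],        # bottom row of pixels
]

red, green, blue = image[0][0]
print(red, green, blue)  # 227 198 142 - this is all the machine ever sees
```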

As computers became a bigger part of society, so too did AI. No longer was it being applied only to mathematics and engineering: now AI could be applied to recommend products to people in online stores; it could be used to decide which social media posts someone should see; it could choose which workers were assigned which jobs. But like finding a way to represent the Mona Lisa, a tradeoff had to be made to apply AI to these real-world problems, and the tradeoff here was that every one of these problems needed clear goals to aim for. Shopping, socialising, work - all of these tasks would be treated like they were chess.

The most obvious problem is that these tasks aren’t chess. Your Netflix history can be represented as data, but that doesn’t mean who you are as a human being is entirely expressed in that data. People are complex creatures, influenced by a thousand different, subtle things during every day of their lives. When a friend recommends you a film to watch it’s not just based on what you’ve watched - it’s based on your personal politics, how you’ve felt recently, experiences you’ve shared together, or things you’re working on. Netflix only knows two things: what you watch, and what every other customer watches. For a recommender system this can be fairly innocuous - at worst, Amazon will simply shower you with recommendations for electric toothbrushes after you spend three seconds looking at one. But for other tasks, reducing a complex real-world task down into data can erase the most vital parts of that task - usually the most human parts. When Uber’s system notices a spike in customer demand and issues a temporary price rise, it only sees numerical demand. It can’t know that people are fleeing an emergency, or rallying to protest. It just sees human behaviour distilled into numbers, and acts accordingly.
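
To get a feel for how little a recommender has to work with, here’s a toy sketch of the “people who watched what you watched” logic. The viewing histories are invented, and commercial systems are statistically far fancier, but they are no wiser about who you actually are:

```python
# Invented viewing histories: this is everything the system knows about us.
histories = {
    "you":   {"Die Hard", "Mad Max", "Heat"},
    "user2": {"Die Hard", "Mad Max", "Speed"},
    "user3": {"Paddington", "Amélie"},
}

def recommend(person, histories):
    mine = histories[person]
    scores = {}
    for other, watched in histories.items():
        if other == person:
            continue
        overlap = len(mine & watched)    # how much viewing we share
        if overlap == 0:
            continue                     # nothing in common - ignore them
        for title in watched - mine:     # things they saw that I haven't
            scores[title] = scores.get(title, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", histories))  # ['Speed']
```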

Another problem is that even if we agree to pretend these things are chess, not everyone agrees what checkmate looks like. A hot topic at the moment in AI is how to apply it to detect ‘fake news’3. News articles aren’t as hard to represent digitally as human beings - we can analyse the text, images and hyperlinks in an article quite easily. But deciding what we’re looking for is much harder, not just as an engineering challenge, but philosophically. What counts as fake news? Where will a North American tech company draw the line defining it, compared to where a university in Africa might, or a governmental committee in East Asia? What does fake news mean to the executive board of Facebook, compared to the outsourced workers who are paid to moderate violent content, or the underpaid migrant cleaners who work in their offices? This would be a difficult problem to solve even if it were open to debate and input from many different groups, but the people who get to define the goals for AI are often AI researchers and engineers - a group largely populated by middle-class white men living in the Global North. Everyone agrees on what the goal is in a game of chess, but when it comes to defining misinformation, redistribution of wealth, social justice, or the rights of workers, we might be less happy with one particular demographic group defining what the goals are.

However, perhaps the biggest problem with this way of representing the world is that it exacerbates the existing structural problem of only valuing things which we can measure. If it proves too difficult to model something like the tiredness of your workers, it’s easier to leave it out of consideration. If workers get tired, get ill, or have commitments in their personal lives, it’ll become evident through things that can be measured, like the number of jobs completed or the time it took to complete them. Why bother modelling something that’s hard to measure, if we can wait and watch it be expressed through something measurable? The way that AI views the world reinforces the way capitalism views it, which probably goes some way to explaining the symbiotic relationship we’ve seen in recent years. After all, to a corporation, most things probably do seem like chess.

The Mechanical Worker

If most artificial intelligence software is so simple, and so bad at guessing how the world works, then why don’t we see through it more often? Why are we still talking about AI as if it’s about to take over the planet?

For one thing, sometimes AI works just fine. As simplistic as Amazon or Netflix’s recommendations are, they’re fairly effective at providing people with things that they’ll like to watch. While people are a lot more nuanced than just their viewing history, recommending action movies to people who like watching action movies isn’t all that different from having genre shelves in an old VHS rental store. Even for more unusual problems, like my own research into automatically designing videogames4, a very simple goal can sometimes produce interesting things, even if it’s not exactly what people wanted.

A broader reason why AI is dominating our lives right now, however, is an unusual combination of social ideas that have come together in a perfect storm of support for AI. The first is a particular brand of respect for ‘science’ in the abstract, and ‘scientists’ as people who are hard to understand but brilliant at what they do. You can see this slowly growing over the last decade in communities like Reddit, in social media groups like I Fucking Love Science, or popular culture like The Big Bang Theory and XKCD. Scientists are confusing geniuses who are ultimately proven right about their worldview, even if it runs contrary to what religious people or liberal arts majors might believe. This elevates both AI and the people who work on it to a level above the general public - it is something we won’t be able to understand, but we trust the people who do understand it to know what they’re doing and for rational ‘science’ to win out in the end.

The second factor is that many people are misrepresenting AI in order to make it appear more intelligent than it is. Sometimes this is unintentional - I imagine the NPL’s Dr. Bullard didn’t think much of describing the ACE as a “brain” back in 1950, and it’s quite possible DeepMind’s engineers don’t think twice about using words like “imagination” or “creativity” in 2018. But the increasing commercial involvement in AI means that there’s now a lot of financial incentive to exaggerate not just a specific system, but the state of the industry as a whole. AI systems like Siri and Alexa are given pre-recorded voices to feel more intelligent, and prepared responses to unusual questions to entertain their users. Reports are issued to warn us about the dangers of unchecked AI research, or about which jobs will be automated by which year. All of this helps the hype surrounding AI to continue to grow, which drives investment as well as increasing the interest of governments and businesses. Accenture estimates AI could generate $14tn of ‘gross added value’ by 2035. Whether this number means anything is irrelevant, of course - all that matters is that people think it does.

The third factor is the influence of science fiction on our expectations of AI. AI is regularly depicted in popular culture at an extremely advanced level - usually as something you can talk to in plain English, and as something more intelligent than a human being. It’s never explained how the AI works, or how technology made it to this advanced state, and in fact the tension in science fiction often comes from the fact that these machines are unknowable. Artificial intelligence, or sentience, is something that can’t be explained; it’s Arthur C. Clarke’s “sufficiently advanced technology” that is indistinguishable from magic. This is why many people’s reaction to finding out how an AI system actually works is often disappointment. In fact, their disappointment is so strong that once an AI technique has become accepted as a solved problem, it often fails to be seen as being AI anymore - a phenomenon we call the “AI Effect.”

The “AI Effect” is a powerful thing. In the 1980s, the idea that a computer could beat someone at chess was an AI dream, something we see references to in films like Blade Runner. Today, decades after Garry Kasparov lost to IBM’s supercomputer Deep Blue, many people regard chess as an uninteresting problem. Rather than some mysterious alien intelligence, Deep Blue is often described as just trying billions of possible moves until it found winning ones. We learned how it worked, and the magic died. Computer scientist Larry Tesler put it succinctly: “AI is whatever hasn’t been done yet.”

But the AI Effect has changed. The AI landscape is now dominated by machine learning systems and neural networks, techniques which are much harder to explain to people. And if the techniques are hard to explain, the actual systems we build with them are even harder. We can give people an idea about how a particular machine learning system works, but many neural networks are so complex, so specialised, and so abstract that it’s hard even for AI experts to explain their internal workings. In most situations, this would make neural networks less appealing. But this mystery, this opaqueness, is exactly how we think AI should be, how AI appears in films. It makes the systems more appealing, not less, and enhances the theatre of AI.

Any of these things alone would be bad, but together they reinforce one another, particularly for the general public. The public are encouraged to imagine AI as something beyond their understanding, something unknowable even to the geniuses who are developing it in their laboratories. On top of this, every new advance is presented in a confusing way that makes it impossible to separate actual technological progress from polish and speculation. Which parts of Siri are intelligent? Which parts are powered by ten thousand people labelling data in an office all day long? Which parts were pre-written by an engineer making a funny joke? As a result, we can be left feeling helpless - our lives are increasingly influenced by technology we cannot control, did not ask for, and don’t understand.

Virus Scanning

Whether or not AI is really useful, there’s no doubt that the belief that AI works is going to change our lives. For people who are already being exploited, AI will most likely be used to amplify this exploitation. For example, employers no longer need to snoop on the social media accounts of job applicants or employees; services like Fama use AI to automate this process, enabling mass surveillance of workers’ online posts and flagging things that might assist in making hiring, firing, and promotion decisions5. This forces workers to either moderate their entire online presence or make all interactions private. Even more importantly, though, services like this can be used to provide a veneer of impartial logic over prejudiced managerial decisions. By sending systems like Fama after employees a company already wishes to dismiss, or applicants it wants to avoid hiring, for illegal reasons (decisions based on race or gender, for example), it’s easy to find justifications that come from a computer - something perceived as impartial and rational.

This veneer of rationality works even when the humans using the system aren’t aware of it. Several systems have emerged recently that claim to be able to identify criminals or criminal behaviour, with the most extreme claiming to be able to identify a criminal simply from a photo (which, of course, they can’t)6. In 2016, ProPublica investigated a system that used criminal records to predict the likelihood someone would reoffend - a real system used by courts across the US - and found that it made mistakes twice as often with black people as it did with white people7. One of the biggest causes is that the system is powered by data from the American prison system, a system that is already racist and corrupt. An algorithm that produces clinical assessments of people sounds logical and rational, and someone using this software may feel that these judgements come from a deeper understanding of the facts, but in reality they are built on a foundation of rotten data. All the AI does is paint over the rot.

Even though we like to frame AI as making ‘decisions’ for companies, these systems lack many of the features that humans would have in the same position: algorithms can’t protest, have a change of heart, leak information, or make judgement calls. They simply do what they’re told, which makes their capacity to amplify power while simultaneously defending it from criticism extremely worrying. But that doesn’t mean that the use of AI cannot be resisted, both socially and through direct action.

What does technological resistance look like? It can come in many forms. One approach is to find low-cost and low-complexity ways to attack big AI systems. A good example is resistance against the increased use of facial recognition systems, where software is trained to watch videos or look at pictures and match up faces with a database of people. Despite how accurate these systems seem, very simple geometric patterns can confuse them8, and there are exciting experiments that suggest these patterns could be printed onto scarves or painted onto faces9. This kind of resistance is cheap, easily distributed, and can be surprisingly effective, too.

Obscuring or muddling your online activity is another crucial way to resist AI systems which rely on data to create accurate profiles of your behaviour. Ad blockers are the simplest way most people do this - many online ads also record information about who saw them and on which sites, which over time builds a picture of someone’s browsing habits. Blocking ads not only makes the internet more pleasant to view, but cuts back on the amount of information that’s recorded about you. The internet is rife with examples of this data being collected - apps like Twitter beg you to turn on location services so they can record where you use their app, and many Facebook users recently learned that the data Facebook holds about them is terrifyingly complete, including detailed call logs gathered from phones which have the Facebook app installed. While this is mostly seen as a data protection issue, it is also a way that artificial intelligence systems gain power over us. Companies like Facebook and Google know that data is currency, regardless of what the data actually is. The more they have, the more things they can discover, and the more valuable their systems become. In the future we may need software which intentionally seeds false data in our online habits - browsing Amazon at random while we’re away from the keyboard, or inserting invisible characters into our tweets to break up keywords algorithms might search for. These digital smokescreens may lead to an arms race in the long term, but could be crucial in stopping companies from gaining a full picture of who we are simply from how we use technology.
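
As a rough idea of what such a smokescreen might look like, here’s a toy sketch that scatters invisible zero-width characters through a message so that naive keyword matching fails. Real platforms may strip or normalise these characters, so treat this as an illustration of the idea rather than a working countermeasure:

```python
# Pepper a message with zero-width spaces: invisible to human readers,
# but enough to break up the keywords a simple filter searches for.
ZERO_WIDTH_SPACE = "\u200b"

def smudge(text):
    return ZERO_WIDTH_SPACE.join(text)

message = "meet at the picket line at noon"
smudged = smudge(message)

print("picket" in message)  # True  - a naive filter spots the keyword
print("picket" in smudged)  # False - the hidden characters break the match
print(smudged)              # renders just like the original to a human eye
```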

Another more social angle on digital resistance is to change the prevailing mindset in the artificial intelligence community. If we’re feeling generous, this might mean trying to change the minds of the engineers and researchers already working on AI, to encourage critical thinking about the things we are asked to build and research. The more productive route, I suspect, is to actually bring more people into the community, from outside of computing. Putting artificial intelligence into the hands of more people defuses some of the power associated with it, and increases the likelihood it’ll be used for good, as well as making it harder to mislead people with it. This is something that the technology community has to organise and work hard to achieve on its own.

But here’s another way we can resist some of the problems posed by the reckless use of AI: work to change the role that AI has in society. If we dismantle the power structures surrounding AI, built on superior knowledge, mysterious black boxes and descriptions of the world written by a minority of technologists, then we change the relationship the world has with AI. We can transform it from being something we fear into something we understand, something we can discuss with one another, something we can make demands about, something we can negotiate around. It is the start of a road that leads to us taking back control over a technology that is being positioned to control us.

What might that look like? One form it can take is demanding explanations of what software is doing. In 2018, new legislation will come into effect in the EU which says that people can demand an explanation for certain kinds of decisions made about them, even if those decisions were made by an AI10. It’s unclear right now how strictly this will be enforced, and it only covers certain kinds of data protection issues, but it’s a step towards forcing technology companies to demystify their work, and explain clearly why software does certain things. If workers demanded plain-English explanations for how algorithms grade worker performance or assign jobs to gig workers, it could help reveal hidden exploitative practices, as well as help people negotiate more precisely about their working conditions. Above all else, it could help lessen the stress and anxiety caused by being watched and managed by anonymous algorithms.

It can take a more everyday form too - encouraging people to remain critical of claims made by AI companies and to demand evidence and explanations of new advances in technology. We need to smash the idea that artificial intelligence is complicated and mysterious, and encourage journalists, academics and other groups to start treating AI like any other kind of human endeavour - one that should be made more accessible, not less. While it might ruin the mystery and the science-fiction dreams we have, AI won’t become any less magical by being more easily understood - but it will be harder to weaponise public confusion, harder to mislead and misdirect people with jargon and stunts, and harder to shift blame from human greed to machine error.

In 1996, Garry Kasparov sat down for a six-game match against IBM’s Deep Blue chess-playing computer11. Kasparov lost the opening game - the first time a reigning world champion had been beaten by a computer under tournament conditions - though he went on to win the match, and it was only in their 1997 rematch that Deep Blue prevailed, three and a half games to two and a half. The Independent covered the story:

Kasparov’s defeat does not imply that computers are intelligent. Even Deep Blue’s programmers would not claim any intelligence for their vast number-cruncher of a machine. Vast numbers of calculations may end up providing a better result than human intelligence, but the process is a long way from being intelligent itself. What it does show, however, is that in an area as complex as chess, huge calculating ability may be enough to overcome a basic lack of understanding.

Four and a half decades after the “electronic brain” was turned on in Teddington, and two decades before AlphaGo’s “intuition” and “imagination” beat Lee Sedol, this article’s less impressed tone sits as a reminder that artificial intelligence is only as terrifying as we imagine it to be. Indeed, one of the most important ways to fight back against AI, besides directly disrupting the technology itself, might be to attack and take apart the cult that surrounds it, that reinforces its importance to governments, to businesses, to the general public. In doing so, we not only save the world from the worst excesses of techno-utopianism, but we also perhaps save artificial intelligence from itself, and open up the possibility of setting this technology on a better path.


  1. The Daily Telegraph, quoted in Alan Turing’s Automatic Computing Engine by Jack Copeland - https://books.google.de/books?id=HI7MPjv-ffYC&pg=PA8&lpg=PA8sig=Z3nG-3Z4Ix6ZvsibMArsTJmxfGo&hl=en&sa=X&ved=0ahUKEwiQrbmG8sXZAhWBVxQKHa-sBEoQ6AEIKTAA#v=onepage&q&f=false 

  2. Google AlphaGo Computer Beats Professional At ‘World’s Most Complex Boardgame’ Go, The Independent - https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-alphago-computer-beats-professional-at-worlds-most-complex-board-game-go-a6837506.html 

  3. The Fake News Challenge - http://www.fakenewschallenge.org/ 

  4. Games By ANGELINA - http://gamesbyangelina.org/ 

  5. Fama.io - https://www.fama.io/#why-fama 

  6. Troubling study says artificial intelligence can predict who will be criminals base on facial features, The Intercept - https://theintercept.com/2016/11/18/troubling-study-says-artificial-intelligence-can-predict-who-will-be-criminals-based-on-facial-features/ 

  7. Machine Bias, ProPublica - https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing 

  8. New “camouflage” seeks to make you unrecognizable to facial-recognition technology, QZ - https://qz.com/878820/new-camouflage-promises-to-make-you-unrecognizable-to-facial-recognition-technology/ 

  9. Facial recognition jamming temporary face tattoos, @baphometdata - https://twitter.com/baphometadata/status/941751469010714624 

  10. Artificial Intelligence Is Setting Up The Internet For A Huge Clash With Europe, Wired - https://www.wired.com/2016/07/artificial-intelligence-setting-internet-huge-clash-europe/ 

  11. It calculates 1 billion chess moves every second, but it’s still not as bright as you, The Independent - https://www.independent.co.uk/news/uk/it-calculates-1-billion-chess-moves-every-second-but-its-still-not-as-bright-as-you-1318781.html 



Author

Mike Cook (@mtrc)

Mike Cook is an AI researcher studying creativity, generative software, and game design. He currently works at the University of Falmouth’s Metamakers Institute.

