
Your AI vocab cheat sheet

Artificial intelligence is going to affect our lives in unpredictable ways. So that you’re not caught completely off guard when the robots arrive, here are a few of the most important buzzwords in AI.

AI 

Short for artificial intelligence – as well you know – AI is a broad topic, and what might have been called artificial intelligence a generation ago is now just a regular part of day-to-day life. The most basic definition of artificial intelligence is the ability of machines to imitate intelligent, human-like behaviour.

But some artificial intelligences can solve remarkably complex problems without necessarily copying how the human brain works. In the case of some types of machine learning, we don’t know exactly how an AI works, only that it is producing a seemingly intelligent result. Artificial intelligence is therefore sometimes divided into two types: weak AI (aka narrow AI) and strong AI. 

Weak AI is a synthetic intelligence applied to a specific, narrow task. This is the type of AI we deal with on a daily basis. Object recognition AIs might surpass human minds for accuracy, but be utterly useless for weather modelling.

Strong AI (aka artificial general intelligence/AGI) is the type of artificial intelligence we see in movies. While weak AI uses computational power to perform specific tasks – build solar panels, drive a vehicle, deploy propaganda bots over social networks – strong AI tries to deliver the total package, a fully sentient being. Were Dr Frankenstein alive today, he would be researching strong AI.

Algorithm

The word ‘algorithm’ comes from the Latinised name of the 9th century Persian scholar and mathematician Muhammad ibn Musa al-Khwarizmi. Al-Khwarizmi’s book The Compendious Book on Calculation by Completion and Balancing introduced the medieval world to algebra and to its application in solving a host of practical problems.

For most of us, ‘algorithm’ means “a magical secret sauce that makes computers work”. But there’s nothing magical about an algorithm. An algorithm is a set of unambiguous calculations designed to solve a specific problem.

Given that an algorithm is a logical series of mathematical steps, it will always produce a predictable outcome. Computer coding is the language we use to explain to computer hardware the algorithms we would like it to enact. Complex software uses whole constellations of algorithms co-operating to produce predictable outcomes.

Algorithms are not mysterious. They are human-designed sequences of instructions. If software behaves in a certain way, it’s because carefully constructed algorithms are instructing it to act in exactly that way.
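
To make that concrete, here’s one of the oldest algorithms around – Euclid’s method for finding the greatest common divisor of two numbers – sketched in Python. The function name is our own, but the steps are more than two thousand years old, and they produce the same answer every single time.

# Euclid's algorithm: keep replacing the pair (a, b) with (b, remainder of a / b)
# until the remainder is zero. The last non-zero value is the answer.
def greatest_common_divisor(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(greatest_common_divisor(48, 36))  # always prints 12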

Black Box

A black box refers to a system where the input and output are known but the actual processes that produce the output are obscured, not examined, or not available. The black box is exemplified by machine learning, where the specific ways a system is improving or learning may not be entirely clear to the user, or to the developers.

Bot 

A bot is a simple AI. It’s software created to independently perform tasks that would be too time-consuming or impractical for a human to perform. We encounter and use bots every day. Right now, spam bots are launching endless streams of emails at your inbox. If you go to your ISP’s website to complain, you may be put on hold by a chatbot (or a single human overseeing multiple chatbots). And that infuriating post on your social media feed may actually just be the work of a propaganda bot, not a human.

In popular culture, we think of bots as being kind of dumb. That’s due in part to their diminutive name and humble beginnings. But the term bot is becoming synonymous with any kind of compact artificial intelligence. Even if a bot is just a hammer in your digital toolkit, it can still be a very, very powerful hammer. 

Deep Learning

Deep learning is a type of machine learning (see below) that uses algorithms inspired by the operations of a biological brain. Deep learning isn’t always about copying the human mind, although the processes may look very similar to those involved in human decision making and behaviour.

Deep learning usually employs complex neural networks (see below) and is superior to mere algorithmic AI in that it gets better and better the more data it is fed.

The principles behind deep learning have been around for a while, but it’s only recently that we have had the computing power, and the ability to manipulate large datasets, needed to put them into practice. 

In deep learning, a system can be trained to make decisions based on context. Real world deep learning applications have included live translation of text – not just ASCII, but text read from a sign or from the page of a randomly chosen magazine – and adding appropriately chosen sound effects to a silent film.

Deep learning is being used to identify and classify image content, and we’ve seen that function applied copiously in the latest MAM (media asset management) systems.
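
To give a flavour of what that looks like in practice, here’s a minimal sketch – not any particular product’s implementation – of a small deep network built with the popular Keras library, trained on made-up stand-in data. Real applications swap the random numbers for millions of labelled examples.

import numpy as np
from tensorflow import keras

# Stand-in data: 1,000 fake 'images', each flattened to 784 numbers, with random labels.
x_train = np.random.rand(1000, 784).astype("float32")
y_train = np.random.randint(0, 10, size=1000)

# A small 'deep' network: several layers of artificial neurons stacked on one another.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),    # first hidden layer
    keras.layers.Dense(64, activation="relu"),     # second hidden layer
    keras.layers.Dense(10, activation="softmax"),  # output: a score for each of 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training adjusts thousands of internal weights; with real data, more examples
# generally mean better results.
model.fit(x_train, y_train, epochs=5, batch_size=32)

Feed a network like this more (and better) data and its accuracy climbs – which is the whole point of deep learning.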

Machine Learning

Machine learning is a variety of artificial intelligence in which a computer – or, more specifically, software – is allowed to teach itself through repeated exposure to a variety of data. Currently, machine learning might involve the supervision of a human being, who can tell the AI how accurately it has met its goals. Deep learning, which utilises neural networks, is one type of machine learning.

In machine learning, computers are – by any meaningful definition of the word – genuinely learning, and just as we can’t see exactly what is happening in the brain of a child as it learns to ride a bike, we can’t always tell exactly how machine learning is taking place. We’ve given the computer a set of algorithmic tools and we can see that it is getting better and better – maybe more creative, too – at solving a task, but how it is actually doing so may be obscure – a black box.  
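
A toy example makes the idea clearer. The little Python sketch below – purely illustrative, not a real-world system – ‘learns’ how to convert Celsius to Fahrenheit simply by being shown a handful of example pairs over and over, nudging two numbers until its guesses stop being wrong.

# Training examples the program learns from: (celsius, fahrenheit) pairs.
data = [(0.0, 32.0), (10.0, 50.0), (20.0, 68.0), (30.0, 86.0), (40.0, 104.0)]

w, b = 0.0, 0.0        # the 'knowledge' being learned: fahrenheit = w * celsius + b
learning_rate = 0.001

for epoch in range(10000):                     # repeated exposure to the same data
    for celsius, fahrenheit in data:
        guess = w * celsius + b
        error = guess - fahrenheit
        w -= learning_rate * error * celsius   # nudge both numbers to shrink the error
        b -= learning_rate * error

print(round(w, 2), round(b, 2))                # converges towards 1.8 and 32.0

Nobody told the program the formula; it arrives at roughly F = 1.8C + 32 from the examples alone.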

Neural Network

A neural network is a system of computer hardware or software inspired by the operation of a biological brain.

A neural network consists of connected units, which act a bit like the neurons in a biological brain. Each artificial ‘neuron’ can transmit a signal to the next. An artificial neuron that receives a signal can process it and then signal the additional artificial neurons connected to it. Artificial neurons are usually arranged in layers, with different layers potentially performing different functions.
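
Stripped right down, a single artificial neuron is just a weighted sum passed through a squashing function. The Python sketch below shows one such unit (the numbers are arbitrary); a real network wires together thousands or millions of them, layer upon layer.

import math

def artificial_neuron(inputs, weights, bias):
    # Weight each incoming signal, add them up, then squash the total into the
    # 0-1 range with a sigmoid 'activation function'.
    total = sum(signal * weight for signal, weight in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Three signals arriving from the previous layer, each with its own learned weight.
output = artificial_neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1)
print(output)  # this value is then passed on to neurons in the next layer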

Turing Test 

Alan Turing (played by Benedict Cumberbatch in The Imitation Game) was one of the great pioneers of computing. After helping Britain break the codes of the famous German Enigma machine in the Second World War, he worked to develop early computers, including writing a chess-playing algorithm called Turochamp. The computers to run Turochamp didn’t yet exist – the user had to work through each move by hand using the algorithm’s pages of written instructions. In a 1950 paper, ‘Computing Machinery and Intelligence’, Turing proposed a threshold at which a machine could be said to ‘think’ – when a human interviewer would be unable to determine through conversation whether the subject was human or not – the Turing Test.

This article originally appeared in the June 2018 issue of FEED magazine.
