
This is the first part in a beginner-to-beginner blog series: notes from a developer new to artificial intelligence and machine learning, written for other beginners who want to learn more about the field.

When I first began thinking of ways I could apply my neuroscience degree to my newfound love of programming, I was immediately drawn to the study of artificial intelligence. It took only a few hours of cursory research to stumble upon the term machine learning. At first, AI and machine learning seemed indistinguishable from each other, an attitude often taken in the media and by companies claiming to use AI technology in their products. In actuality, the two are related but quite distinct in both meaning and function.

On the surface, machine learning is one of many branches within AI. The following diagram shows how machine learning sits inside AI: it is one part, but far from the entire story, of how we create machine intelligence.

[Diagram: machine learning as a subset of artificial intelligence]

Let’s begin with machine learning and use that understanding to draw a distinction from AI. A popular definition of machine learning comes from a pioneer in the field, Arthur Samuel, who described it as the “field of study that gives computers the ability to learn without being explicitly programmed.” The premise machine learning sets out to prove is that, given sets of data, machines can learn on their own, independent of humans.

Machine learning

Tom Mitchell wrote in his book “Machine Learning”:

“The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience.”

Three types of machine learning exist today:

  1. Supervised learning — In this type of machine learning, the algorithm learns from labeled data. In other words, the programmer tells the machine explicitly what each piece of data represents. One example of this type of learning is speech recognition. Given a large data set of audio samples, the machine is explicitly told the meaning and text of each sample. If the algorithm works correctly, the machine can then apply its knowledge to a new audio sample.

  2. Unsupervised learning — Like supervised learning, unsupervised learning requires giving the machine a large amount of data. However, the data is unlabeled, and the machine must try to discover on its own what patterns exist. One example could be providing a machine with a large data set of faces and letting it separate the faces and find patterns among them for itself.

    The following diagram uses an example of fruit sorting to show the difference between supervised and unsupervised learning. In the former, the machine is told that the input is a group of apples; when given a fruit later, it should be able to accurately predict that it is indeed an apple. In unsupervised learning, the machine is given a mixed assortment of data and, based on its algorithms, is expected to sort the fruit into distinct categories. Both approaches are also sketched in code after this list.

    [Diagram: difference between supervised and unsupervised learning. Source: “Background Augmentation Generative Adversarial Networks (BAGANs): Effective Data Generation Based on GAN-Augmented 3D Synthesizing,” Scientific Figure on ResearchGate (accessed 24 Apr 2019)]

  3. Reinforcement learning — The final type of machine learning consists of providing a set of rules and letting the machine discover how best to reach a defined goal. This is the kind of learning commonly used when teaching a machine gameplay such as chess or poker. A toy example of this trial-and-error loop also follows below.
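
To make the labeled-versus-unlabeled distinction concrete, here is a minimal sketch of both approaches using scikit-learn (my choice of library; any ML toolkit would do). The fruit measurements and names are made up purely for illustration: the classifier is told what each sample is, while the clusterer receives the same samples with no labels at all.

```python
# Minimal sketch: supervised vs. unsupervised learning with scikit-learn.
# Assumes scikit-learn is installed (pip install scikit-learn).
# The "fruit" data below is hypothetical, chosen only for illustration.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Each sample is [weight in grams, diameter in cm].
samples = [[150, 7.0], [160, 7.5], [170, 7.2],   # apples
           [110, 5.0], [115, 5.2], [120, 5.4]]   # mandarins
labels = ["apple", "apple", "apple",
          "mandarin", "mandarin", "mandarin"]

# Supervised: we tell the machine what each sample is ...
classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(samples, labels)
# ... and it predicts the label of a fruit it has never seen.
print(classifier.predict([[155, 7.1]]))  # -> ['apple']

# Unsupervised: the same data with no labels. The machine must
# discover the two groups on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(samples)
print(clusters)  # e.g. [1 1 1 0 0 0], group ids rather than fruit names
```

Notice that the clusterer can only say “these six samples form two groups”; it has no idea the groups are apples and mandarins. That is exactly the difference described above.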
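
Reinforcement learning is harder to compress into a few lines, but its core loop (act, observe a reward, update) can be sketched. Below is a toy tabular Q-learning agent in a hypothetical five-cell corridor world; the environment and every number in it are invented for illustration, not tuned for any real task.

```python
# Toy Q-learning sketch: a hypothetical 1-D corridor of 5 cells.
# The agent starts in cell 0 and earns reward 1 for reaching cell 4.
import random

random.seed(0)
n_states, n_actions = 5, 2                   # actions: 0 = left, 1 = right
q = [[0.0, 0.0] for _ in range(n_states)]    # the Q-table starts empty
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # illustrative parameters

def step(state, action):
    """The 'rules' of the world: move, and reward 1 only at the goal."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

for _ in range(200):                         # 200 episodes of trial and error
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:        # occasionally explore at random
            action = random.randrange(n_actions)
        else:                                # otherwise act greedily,
            best = max(q[state])             # breaking ties at random
            action = random.choice(
                [a for a in range(n_actions) if q[state][a] == best])
        nxt, reward = step(state, action)
        # Core update: nudge Q toward the reward plus discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

# The learned greedy policy should be "go right" in every non-goal cell.
print([max(range(n_actions), key=lambda a: q[s][a]) for s in range(n_states - 1)])
# -> [1, 1, 1, 1]
```

No one ever labels a state “good” or “bad” here; the agent infers that from the rules and the reward alone, which is what separates reinforcement learning from the other two types.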

Artificial intelligence

Artificial intelligence has a much less tidy definition than machine learning, and for good reason. It is messier not only because AI encompasses a broad set of algorithmic behaviors, including machine learning, but also because it is an ever-evolving field.

A common definition of AI found in textbooks is “the study and design of intelligent agents,” where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
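
That definition can be read almost literally as code. Here is a hypothetical, bare-bones rendering of the perceive-then-act loop; the thermostat scenario and every name in it are my own invention, used only to make the abstract definition tangible.

```python
# A hypothetical sketch of the textbook definition: an agent that
# perceives its environment and picks the action that best serves
# its goal. All names and numbers here are illustrative.

class ThermostatAgent:
    """Tries to keep a simulated room near a target temperature."""

    def __init__(self, target: float):
        self.target = target

    def perceive(self, room_temp: float) -> float:
        return room_temp  # the agent's entire view of its environment

    def choose_action(self, percept: float) -> str:
        # Score each action by how close it brings the room to the goal.
        outcomes = {"heat": percept + 1.0, "cool": percept - 1.0, "idle": percept}
        return min(outcomes, key=lambda a: abs(outcomes[a] - self.target))

# Simulate a cold room being warmed toward a 21-degree target.
temp, agent = 17.0, ThermostatAgent(target=21.0)
for _ in range(6):
    action = agent.choose_action(agent.perceive(temp))
    temp += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    print(action, temp)  # heats until 21 degrees, then idles
```

A thermostat is about the humblest “intelligent agent” imaginable, but it satisfies the definition: it senses, it evaluates its options against a goal, and it acts.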

AI aims to make computers learn and become intelligent in the ways we traditionally believe humans do. We’ve gone from the awe-inspiring moment in 1997, when Deep Blue, IBM’s chess supercomputer, beat reigning world champion Garry Kasparov, to the present, where facial recognition and self-driving cars have become or are near reality. Throughout that time, AI has been synonymous with a vague but possible future in which machines develop as close to human consciousness as we can imagine. The gap between us and the machine will only get smaller as what was once unimaginable becomes possible.

Why the distinction matters

It is vital that beginner programmers understand the difference between AI and machine learning for a few reasons. Understanding the specific boundaries of machine learning is what generates the best ideas for pushing past them. For example, I once took an improv class in which we played a game where we would each give the rest of the class basic instructions as to what they should perform. On my turn, I told the group to imagine a childhood memory and try to re-enact it with the rest of the group without speaking. What came out of this game were wild and intriguing moments of connection between the actors, but those moments only existed because the actors were given boundaries on what to do. In the world of improv, and in the future of programming, an understanding of the boundaries and limitations is what allows for expansive and innovative ideas.

The discoveries to be made in AI are bigger than we can yet conceptualize, and it is important that programmers understand the distinctions within AI to make informed and viable decisions with the technology they create.