At the end of the day, these labels are kind of arbitrary and will keep evolving. As of today, Machine Learning and Artificial Intelligence are effectively one and the same, since there are no viable alternatives to ML: to make the Machine intelligent, we need to teach the Machine.

Within ML itself, there are probably hundreds of individual methods and approaches. Some more practical, some more scientific. Some general purpose, some extremely specific. The labels “supervised” and “unsupervised” were added later, to establish some general order in that chaos. Not everything falls into either category, however.

Supervised algos learn to predict a known output based on inputs. Unsupervised algos learn potential outputs (structure, clusters) from the inputs alone. Both learn from (big) batches of data. Reinforcement Learning does not have a concept of a learned output at all, and it learns in (time) steps: it evaluates a “reward” after each individual step, to figure out which actions will increase the long-term total reward. There are other exceptions to the above, like dimensionality reduction algos.
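To make the reward idea concrete, here is a minimal tabular Q-learning sketch on a made-up “walk to the goal” environment. The states, actions and reward values are all invented for illustration; a real problem would use a proper environment.

```python
import random

# Toy environment: states 0..4, start at state 0, reward of +1 only at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left or step right
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-table: estimated long-term reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != GOAL:
        # Occasionally explore; otherwise take the action with the highest estimate.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # The reward from this single step nudges the estimate of long-term return.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy simply walks right towards the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```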

Without getting super semantic about the whole thing, any method built on a Neural Network is a subset of Neural Networks. Therefore Deep Learning, or Deep Neural Networks, is a type of Neural Network. Same goes for Recurrent Neural Networks. And Convolutional Neural Networks. And Generative Adversarial Networks. The latter two are usually only relevant in the context of deep networks, so you might say they are a subset of Deep Learning.
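As a small sketch of what “deep” and “convolutional” mean in code, here is a tiny network assuming PyTorch; the layer sizes, input shape and number of classes are arbitrary placeholders, not a recommended architecture.

```python
import torch
import torch.nn as nn

# A "deep" network is simply a neural network with several stacked layers.
# Putting convolutional layers up front makes it a (deep) convolutional network.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional feature extractors
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 28 * 28, 64),                  # fully connected layers on top
    nn.ReLU(),
    nn.Linear(64, 10),                            # e.g. 10 output classes
)

x = torch.randn(1, 1, 28, 28)   # one 28x28 single-channel image
print(model(x).shape)           # torch.Size([1, 10])
```

Swap the convolutions for recurrent layers and you get an RNN-flavoured model; pit two such networks against each other as generator and discriminator and you get a GAN. The building block is the same stack of layers.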

To make matters worse, or more complicated at least, you also have methods that combine these tools across domains. The prime example is Deep Q-Learning, or Deep Reinforcement Learning, famously used by DeepMind’s AlphaGo family of programs, which combines Deep Neural Networks with an underlying Reinforcement Learning process.
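Here is a minimal sketch of that combination, again assuming PyTorch and a single made-up transition: the Q-table from the earlier example is replaced by a neural network, and the reward-based target becomes a regression target for that network. This is only the core update step, not DeepMind’s actual training setup.

```python
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99   # arbitrary sizes, for illustration only

# The Deep Learning part: a network that maps a state to one Q-value per action.
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One made-up transition: state, action taken, reward received, next state.
state = torch.randn(1, state_dim)
action = torch.tensor([0])
reward = torch.tensor([1.0])
next_state = torch.randn(1, state_dim)

# The Reinforcement Learning part: one-step reward plus discounted future value.
with torch.no_grad():
    target = reward + gamma * q_net(next_state).max(dim=1).values

# Train the network to move its prediction for the taken action toward that target.
prediction = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(prediction, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```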

This increasing complexity is to be expected, however, if we use the brain as an example. The concept of Neural Networks is inspired by what we know of the infrastructure of the brain, but we have very few clues about what methods (algorithms) the brain actually uses. How does it learn? Is it one crazy complicated master algo, or a thousand small algos creating emergent generalized intelligence? Certainly the layers of the mammalian neocortex would suggest a hierarchy of learning. Does the process itself adapt and change over time?

To increase the intelligence of our artificial systems, we will need to keep increasing the complexity.

