An introduction

The ‘short and sweet’ on AI

Where did AI come from?

Society first toyed with the concept of artificial intelligence in the early 20th century, when science fiction introduced human-like robots into popular culture. It was not until the 1950s that academics began to seriously explore the notion of AI¹. The famous mathematician and computer scientist Alan Turing, widely known for laying the theoretical groundwork for the modern computer, was one such academic². His paper, "Computing Machinery and Intelligence," laid out the logic behind building intelligent machines and proposed a way to test their intelligence². The term "artificial intelligence" itself was coined by John McCarthy, a computer scientist who, alongside Turing, is considered one of the founding fathers of AI². In 1956 he organized the famous Dartmouth Summer Research Project on Artificial Intelligence, widely regarded as the conference that established artificial intelligence as a field².

Hype vs. Reality: Keeping It in Perspective

How do you differentiate between hype and reality? Looking at headlines in the news, it is not always apparent which claims make illogical leaps in their assumptions or which misrepresent the current state of the technology. To put it in historical context, this is not the first time there has been a great deal of hype surrounding AI; the field has gone through several hype cycles, each followed by an "AI winter": a period when AI fell out of favor with the general public because it failed to deliver on its promises and live up to expectations. AI research still continued during these winters, though it sometimes went by other names. There is, in other words, a precedent for the public overestimating AI's abilities. Keeping this in mind, we should approach the headlines with a heavy dose of curiosity and skepticism. After all, there is always the possibility that the current hype cycle could lead to another AI winter. Only time will tell.

Getting Familiar with the Concepts and Jargon

To gain a broad overview of what AI is, the best place to start is with the basic concepts and terminology. This is often the most challenging part, and it is okay if it takes a little while for the concepts to click; part of the reason is that definitions change over time and are not used consistently by everyone to refer to the same things. It is important to keep in mind that "artificial intelligence" and "machine learning (ML)" are separate terms with overlapping, but distinct, definitions. The term "artificial intelligence" is somewhat nebulous: although there are generally agreed-upon definitions of what the term means as a concept, determining whether a particular technology should be classified as artificial intelligence is another matter entirely.

Broadly speaking, AI refers to the idea of mimicking human intelligence in the way we act and think; in other words, it refers to completing tasks or functions that previously required human intelligence. To put this into perspective, when the calculator was first invented, it fell under this definition of AI, as it could complete a task previously performed only by humans. However, what it means to mimic human intelligence has evolved significantly since the 1950s, when the term "artificial intelligence" was first coined. As our definition of human intelligence evolves, so will the definition of artificial intelligence.

Machine learning, on the other hand, is easier to define and is best viewed as a technique within AI. It refers to the use of algorithms to analyze and identify patterns within datasets without being explicitly programmed to do so. In other words, it gives machines the ability to learn directly from data. Something important to note, however, is that machine learning exists on a spectrum; it is less about whether machine learning is present or not, and more about the degree to which it is present in an algorithm. On one end of the spectrum are algorithms such as logistic regression, while on the opposite end you would find deep neural networks. The less a model or algorithm needs to be explicitly programmed and the more it learns from examples, the further it sits toward the machine learning end of the spectrum. Deep learning, then, is a subset of ML, which we will describe in later parts of this series.
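To make the idea of "learning from data" concrete, here is a minimal sketch in Python using scikit-learn; the library and the toy data are chosen purely for illustration and are not prescribed by this article. Notice that no rules for telling the two classes apart are written by hand: the model derives its own decision boundary from the labelled examples it is given.

```python
# Minimal sketch of "learning from data" (illustrative only).
# We never write rules for separating the two classes; the model
# infers its own decision boundary from the labelled examples.
from sklearn.linear_model import LogisticRegression

# Toy dataset: hours of study vs. whether a (hypothetical) exam was passed.
hours_studied = [[1], [2], [3], [4], [5], [6], [7], [8]]
passed_exam   = [0,   0,   0,   0,   1,   1,   1,   1]

model = LogisticRegression()
model.fit(hours_studied, passed_exam)   # the "learning" step

# The learned coefficient and intercept come from the data, not from us.
print(model.coef_, model.intercept_)
print(model.predict([[4.5]]))           # prediction for an unseen example
```

A deep neural network would sit much further along the same spectrum: it has far more parameters to learn from examples and requires even less explicit programming of the patterns it detects.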

How does this relate to what you already know?

Fortunately, there are some machine learning tools you may already know. Predictive models and other statistical methods you may be familiar with also utilize machine learning. A multivariate regression analysis, for example, can be used to look at a large number of variables and determine which of them are significantly associated with the outcome of interest, as well as the degree of that association. The output of the algorithm is an equation representing the line that best fits the data. This is an example of machine learning because the final equation (i.e., the output) is not determined by the individual writing the algorithm; rather, it is determined by the actual data the algorithm sees. In fact, the equation may change if the algorithm is run with additional data, or run a second or third time on a different dataset. You can imagine how such a capability can be very helpful: if an algorithm can learn on its own, then the more data it has, the smarter it becomes. Additionally, there is less manual effort involved in development.
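As a small, hedged illustration of this point, the sketch below fits an ordinary least-squares regression on made-up data using Python with NumPy and scikit-learn (the tools and the data are assumptions for illustration, not part of the article). The fitted equation, meaning the coefficients and intercept, comes entirely from the data the algorithm sees, so refitting on a different sample yields a slightly different equation.

```python
# Hypothetical example: the "equation" a regression learns is determined by
# the data it sees, not hard-coded by the person writing the algorithm.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)

def fit_on_sample(n_observations):
    # Two made-up predictor variables and an outcome that depends on them
    # (true relationship: y = 3*x1 - 1.5*x2, plus random noise).
    X = rng.normal(size=(n_observations, 2))
    y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n_observations)
    model = LinearRegression().fit(X, y)
    return model.coef_, model.intercept_

# Two different samples produce slightly different fitted equations,
# and larger samples recover the underlying relationship more closely.
print(fit_on_sample(100))
print(fit_on_sample(1000))
```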

  1. http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

  2. https://medium.com/@exastax/the-difference-between-ai-and-machine-learning-32cfef316372