What Is 'Artificial Intelligence'? And Why Is It Suddenly Everywhere?

Written By Katrina Paddon

You may be surprised to learn that the term 'Artificial Intelligence', or 'AI', has been in use for 70 years, since it was coined at the 1956 Dartmouth Conference on Artificial Intelligence by emeritus Stanford Professor John McCarthy. His original definition of AI was "the science and engineering of making intelligent machines" (1). Since that time, while AI as a technology has been developing and maturing, AI as a mainstream topic of discussion has been largely off the radar... until now. In the past few years, AI seems to have infiltrated every aspect of our lives, and yet there's an undeniable sense that we're often talking at cross-purposes. So... what exactly is AI? And why is it suddenly EVERYWHERE?

At its most basic level, artificial intelligence is just that: a non-biological intelligence (thanks, Max Tegmark! (2)). The thing is, that's not really what we're trying to put our finger on when we construct a definition of AI. The definition we're looking for carries connotations: it's a computer-based technology; it's generally pegged against human intelligence (strictly speaking, 'artificial' could be anything non-biological); humans were involved in establishing it; and it subsequently acts with some autonomy to produce an output. While all of these characteristics fit within Tegmark's broad definition, that definition is so broad that we're likely to end up talking at cross-purposes with our customers, colleagues and advisors, with consequences for how we assign responsibility: who owns the risks, who carries the liabilities, and so on.


Roight (as the Aussies say), the context is set. So why doesn't someone just define it and put us all out of our misery?

This is clearly a facetious, oversimplified question, but it's a great in-road to exploring the complex factors at play in the bid to define AI. Figure 2 provides a handful of definitions from leading organizations and philosophers: if these actors can't align, what hope do the rest of us have? This leads to our first big 'ah-ha' moment: if it seems like everyone is using 'AI' with a different intent behind it, it's because they are. At the macro, conceptual level there is general alignment across actors, yet when push comes to shove, pen is put to paper and the rubber hits the road, aligning on a definition of AI that is sufficiently specific has thus far eluded us. Let's explore some of the reasons for that.

First, what actually is intelligence?

Definitionally, it's a "mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment." (3)

Makes sense to me. But I suspect my interpretation is vastly different to yours, which is vastly different to our colleagues'. And herein lies our biggest problem: what is the 'intelligence' referred to in 'artificial intelligence'? This has been on the minds of the industry since day dot, and on the minds of philosophers for millennia. 

In 1950, Alan Turing devised what became known as the Turing Test, which proposed that if an evaluator could not distinguish between the responses of a computer and those of a human, the computer was exhibiting intelligence. We now know that this test is too simple, but given it was devised in the 1950s, it was pretty apt for the technology of the time.

These days, the struggle continues. Pick up almost any book on AI and you'll observe that introductions to the field almost always begin by exploring the contributing disciplines, which run the gamut of the sciences and the arts. They don't start with the 'artificial' (i.e. the technology); they start with the 'intelligence' and the complexities of assigning a common understanding or boundary to the term.

So if we ever align on a common understanding of intelligence, Plato would love to know, please and thanks!

Our next big hurdle is that different actors have different priorities, and they adjust the definition according to their needs and circumstances.

As we can see in the table below, each definition of AI has a slightly different focus according to the lens of the actor. This is an incredibly important factor in our inability to align on a definition of AI, because any single definition will inevitably disadvantage some actors (through risk ownership, liability ownership, higher or undue levels of responsibility, etc.). This is BIG, because it could severely limit some actors and favour others, and the field is too young to know how it's all going to play out.

For example, Determann, a lawyer, has developed a definition that specifically assumes AI is a computer system and its outputs are unpredictable. Predictability is, of course, important to lawyers because their role is to help us identify and mitigate our risks. If an output is unpredictable, it's a lot harder to determine who is responsible, and that makes it much harder to protect us from negative outcomes.  

For these same reasons, lawmakers and regulatory bodies, such as the OECD and the EU, often apply a broad definition of AI, which is more encompassing: it reduces the risk of missed scope and allows laws and regulations to evolve with the technology. These broad laws and regulations can be uncomfortable for companies trying to comply, because they may not even know how to comply. And if they comply now, will they still be able to comply in future, if the technology evolves and/or the regulations change? We'll explore some of these double-edged swords, and the AI tightrope companies must walk, in a future article.

A marketing and sales function, too, often applies a liberal definition of AI, but it isn't thinking so much about responsibility or liability; rather, its focus is on customer acquisition and sales. AI is the hot new thing in the market, and customers associate it with many positive benefits: being perceived as a market leader, improved efficiency, cost reduction, improved decision-making, and so on. Applying an 'AI' label captures the attention of potential customers far more than boring old software and tools, or enigmatic technical concepts such as machine learning, deep learning and LLMs.

I'll leave you with this parting thought on the topic: the OECD, which has secured agreement amongst its members for a definition of 'AI system', was unable, in six years, to agree on a definition of AI. The struggle is real.

And it's all then compounded by confusion.

Intertwined with the definitional challenge is the subset of concepts that sit under the AI umbrella.  As we can see from Figure 3, there's a lot of terminology that relates to AI, and this is just the tip of the iceberg.  

Drawing on Philosophy 100 (yessssss! I knew Logical Reasoning would come in handy!!!), each of these concepts is a subset of the previous one. SO: Machine Learning is a subset of AI, Deep Learning is a subset of Machine Learning, and Generative AI is a subset of Deep Learning. It follows that they are all subsets of AI, but none of them is synonymous with AI.
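For readers who like their logic executable, the nesting above can be sketched with plain Python sets. The example members here (e.g. 'expert systems') are purely illustrative placeholders, not an exhaustive taxonomy of the field:

```python
# Each field is modelled as the set of techniques it contains.
# Membership terms are illustrative only.
ai = {"expert systems", "machine learning", "deep learning", "generative AI"}
machine_learning = {"machine learning", "deep learning", "generative AI"}
deep_learning = {"deep learning", "generative AI"}
generative_ai = {"generative AI"}

# The subset chain: Generative AI < Deep Learning < Machine Learning < AI.
# Python's '<' on sets means 'is a strict subset of'.
assert generative_ai < deep_learning < machine_learning < ai

# Subset does not mean synonymous: AI contains things that are not ML.
assert ai - machine_learning == {"expert systems"}
```

The second assertion is the point of the paragraph: because the containment is strict, calling a machine-learning product 'AI' is accurate, but calling everything labelled 'AI' machine learning is not.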

This gets pretty confusing when most people are still learning that all these technologies even exist.

So we've got a lack of alignment on the definition of AI, and we've got a whole field of ever-advancing technology wherein most people have only just started to recognize the lexicon, never mind the underlying meanings. As one can imagine, the compound effect of these two factors makes the AI ecosystem pretty confusing for non-technologists.


OK, so there are a lot of different definitions. So what?


We'll dig into this in the next article, but I'll leave you with this parting thought: if we can't even define AI, how on earth are we building it in a responsible, ethical, values-aligned manner?