One of my pet peeves is people using terms they don’t really understand, especially when they misuse them. Knowing terms and knowing how to use them are two fundamentally different things, yet some people simply can’t resist the temptation to hijack the latest buzzword or tech term in an attempt to impress.

Of late, one of the victims of reckless misuse is artificial intelligence (AI). Repeatedly confused with all things technologically “smart”, the term is now bounced around coffee shops, conference halls and boardroom tables with troubling frequency. Let’s set the record straight: the apps on your smart device, the GPS on your cell phone and the voice-activated gadgets that switch your living room lights on and off are not AI.

But if that’s the case, then what is? What is artificial intelligence?

Well, colloquially and broadly speaking, it is the development of computer systems able to perform the kinds of tasks usually reserved for humans. More specifically, when American computer scientist John McCarthy coined the term in 1955, he was referring to “thinking machines”. In other words, machines that are able to mimic cognitive functions, such as learning and problem solving, that we associate with human minds.

As you can see, the definition of AI isn’t the complicated part. In fact, it is the simplicity of it that enables people to freely use — and misuse — the term. We can all spell out the words and throw them into a sentence, but the complexity is in really knowing how to use them, and this is where many people fall down.

When you use AI, the chances are that what you really mean is machine learning. AI is the broader concept of machines being able to carry out tasks in a way that you would consider “smart”; machine learning sits at its core, much like the engine is a core component of a car.

Of course, a car is more than the engine, but without it, there is really no car. Put another way, machine learning is simply a way of achieving AI, based around the idea that we should really just be able to give machines access to data and let them learn for themselves.

Admittedly, AI and machine learning seem similar, but they are not quite the same thing, and the fact that they’re used synonymously — and therefore incorrectly — is one of my pet peeves. The trouble is, to get to grips with how to use the terms, you need to have an idea of how machine learning works, and that means mathematics.

Unfortunately for all of the math haters out there, it is based upon algorithms, each of which amounts to a step-by-step procedure for solving a problem or accomplishing some end. Machine learning is a way of “training” an algorithm so that it can learn. This “training” involves feeding huge amounts of data to the algorithm and then allowing it to adjust itself and improve.
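To make that loop concrete, here is a toy sketch in Python. It is my own illustration, not anything from a particular library: a one-parameter model is fed example data and nudges its single number until its predictions stop being wrong.

```python
# Toy "training" loop: the model y = w * x starts out knowing nothing (w = 0)
# and repeatedly adjusts w to shrink its error on the example data.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # (input, observed output) pairs

w = 0.0              # the model's single adjustable parameter
learning_rate = 0.01

for step in range(1000):                  # the "training" loop
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x    # nudge w in the direction that reduces the error

print(f"learned w = {w:.2f}")             # ends up close to 2, the pattern hidden in the data
```

Scale that idea up from one parameter and four data points to millions of both, and you have the essence of what practitioners mean by training.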

As the algorithm learns and finds patterns, it uses what it has learnt to recognise more of what it has not yet seen. Take Bayesian statistics as an example. It is at the heart of many, though of course not all, machine-learning models. In a Bayesian model, the algorithm is built around the following equation: P(A|B) = P(B|A)P(A)/P(B)

* Where P(A|B) is the probability of A given that B is true;

* P(B|A) is the probability of B given that A is true; and

* P(A) and P(B) are the probabilities of observing A and B independently of each other.

It’s a deceptively simple formula for calculating conditional probability, but when run over millions of rows of data, with numerous columns, features and characteristics for each record, it identifies patterns that our brains never could, and learns how to improve without human intervention.
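To show the formula doing its job, here is a small worked example in Python, shrunk from millions of rows to six. The column names are hypothetical, chosen purely for illustration: A is “the customer renewed” and B is “the customer opened the survey”.

```python
# Bayes' rule on a handful of records: P(A|B) = P(B|A) * P(A) / P(B)
records = [
    {"renewed": True,  "opened_survey": True},
    {"renewed": True,  "opened_survey": True},
    {"renewed": True,  "opened_survey": False},
    {"renewed": False, "opened_survey": True},
    {"renewed": False, "opened_survey": False},
    {"renewed": False, "opened_survey": False},
]

n = len(records)
p_a = sum(r["renewed"] for r in records) / n                # P(A): 3/6
p_b = sum(r["opened_survey"] for r in records) / n          # P(B): 3/6
p_b_given_a = (sum(r["renewed"] and r["opened_survey"] for r in records)
               / sum(r["renewed"] for r in records))        # P(B|A): 2/3

p_a_given_b = p_b_given_a * p_a / p_b                       # Bayes' rule
print(f"P(renewed | opened survey) = {p_a_given_b:.2f}")    # prints 0.67
```

With six records you could simply count the answer directly; the point of the formula is that the same arithmetic keeps working when the counting is far beyond anything a person could do by hand.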

In next week’s column, we’ll see what the algorithms are used for and what AI is capable of, but for now, let’s stick with the basics. Don’t be fooled by impressive vocabulary or try to impress others by using terms that you only know the dictionary definition of. Take the time to understand or leave them well alone.

Tommy Weir is a CEO coach and author of “Leadership Dubai Style”. Contact him at tsw@tommyweir.com.