How Bayes’ Theorem Coincides with Machine Learning

Acusio Bivona
2 min read · Mar 18, 2021


Just why is Bayes so naive?


In this blog post, we'll look at how machine learning puts Bayes' theorem and the foundational principles of Bayesian statistics to work. Classification problems are a natural application of Bayes' theorem: predicting a class label from other observed data is, at its core, a question of conditional probability. I'll help you understand how to make a classification using the probabilities provided by Naive Bayes.
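For reference, Bayes' theorem relates the probability of a class Y given observed features X to quantities we can actually estimate from data:

```latex
P(Y \mid X) = \frac{P(X \mid Y)\, P(Y)}{P(X)}
```

Here P(Y) is the prior probability of the class, P(X | Y) is the likelihood of seeing those features within that class, and P(X) is the overall probability of the features.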

By assuming that the features are independent of one another, Naive Bayes algorithms extend Bayes' formula to several variables: the conditional probabilities for each variable are multiplied together to produce an overall likelihood. For instance, a scientist may look at several patient measurements to see whether they help determine whether or not a person has a disease. Assuming these measurements are independent and uncorrelated, the conditional probability of each measurement can be estimated separately, and Bayes' theorem can be used to calculate the relative probability of having the disease or not. Multiplying all of these probabilities together yields an overall likelihood that the patient has the disease. Depending on which class's likelihood is greater, you can make a prediction about whether or not you think the person has the disease.
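Written out, that independence assumption is what lets the joint likelihood factor into a product of per-feature conditionals, so the posterior for a class Y given features X1 through Xn is proportional to:

```latex
P(Y \mid X_1, \dots, X_n) \propto P(Y)\, \prod_{i=1}^{n} P(X_i \mid Y)
```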

It’s important to remember that multiplying the conditional probabilities rests on the premise that these probabilities, and their corresponding features, are independent. That is almost never exactly true in practice, and it is because of this assumption that the Naive Bayes algorithm is called naive. In the right conditions, though, Naive Bayes can be very successful. Calculating the denominator, P(X1, X2, ..., Xn), is genuinely difficult or even impossible in practice, since this particular combination of features may never have been observed before. Fortunately, that isn't necessary: a classifier doesn't need the exact posterior probabilities to make a prediction. It only needs to determine which class is most likely. You do this by computing the numerator for P(Y0), the likelihood of not having the disease, and for P(Y1), the likelihood of having the disease. Because the denominator is the same for both P(Y0) and P(Y1), the numerators can be compared directly, since each is proportional to the full posterior probability, as the sketch below shows.
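Here is a minimal sketch in Python of that comparison for the disease example. The feature names and every probability below are made-up, illustrative values, not estimates from any real data:

```python
# A minimal sketch of Naive Bayes classification for the disease example.
# All names and probabilities below are hypothetical, for illustration only.

# Hypothetical prior probabilities P(Y)
prior = {"disease": 0.01, "no_disease": 0.99}

# Hypothetical per-feature conditionals P(X_i = positive | Y)
likelihood = {
    "disease":    {"test_a": 0.90, "test_b": 0.80},
    "no_disease": {"test_a": 0.05, "test_b": 0.10},
}

def numerator(label, positive_features):
    """Compute P(Y) times the product of P(X_i | Y): the Bayes numerator."""
    score = prior[label]
    for feature in positive_features:
        score *= likelihood[label][feature]
    return score

# A patient who tests positive on both measurements
observed = ["test_a", "test_b"]

# We never compute the denominator P(X1, X2); it is identical for
# both classes, so comparing the numerators is enough.
scores = {label: numerator(label, observed) for label in prior}
prediction = max(scores, key=scores.get)

print(scores)      # roughly {'disease': 0.0072, 'no_disease': 0.00495}
print(prediction)  # 'disease'
```

If actual posterior probabilities were needed, the two scores could be normalized by their sum; for making the prediction itself, the comparison alone is enough.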

And that’s how Bayes’ theorem is used to create classification algorithms, and why the Naive Bayes approach is considered naive.

Thank you for reading!

LinkedIn
