Naïve-Bayes Technique for Machine Learning


What is Occam’s Razor?

“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.”

In other words, when you have two competing theories that make exactly the same predictions, the simpler one is better.

One famous example of Occam’s Razor in action is found in conspiracy theories surrounding the NASA moon landings.

Many conspiracy theorists believe that the first Moon landing was staged and filmed in a studio as part of an elaborate hoax. Their justification relies on many twisted and convoluted theories, whereas the NASA account is fairly straightforward.

The conspiracy line of reasoning contains too many assumptions: “If this happened, then this may have happened.” Too many ‘ifs’ is a sign that a statement needs ‘Occamizing’ and clarifying.

Therefore, using Occam’s razor, the NASA argument should be considered correct. This is not the same as saying that it is proved, only that it is best to investigate the simplest theory first.

Naïve-Bayes

Consider a simple problem from the insurance sector: recommending a product to a potential customer. The recommendation is based on certain customer attributes, similar to predictive analytics in target marketing. However, the data we come across does not always satisfy all the underlying assumptions of the required statistical or decision model, so we have to find different ways of handling such cases.

This is where we come across a machine learning technique like Naïve-Bayes.

Let’s start with a simple problem that many business units come across: out of the many products a bank has to offer, such as savings accounts, insurance, credit cards, etc., which is the most suitable product for a given customer? Had this been a simple conversion problem, a logistic model would have been enough, provided the data satisfied all the assumptions. One could also consider a multinomial model, but the complexity it would introduce would be enormous.

The other approach is a simple calculation of probability conditioned on the set of covariates, known as independent variables. With the well-accepted principle of inverse probability, prediction can be done more precisely for problems where it is difficult to meet the assumptions of traditional statistical techniques.

What is Naïve Bayes?

In machine learning, Naïve Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes’ theorem with strong (naive) independence assumptions between the features.
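Concretely, the “naive” part is the assumption that the features x1, …, xn are conditionally independent given the class θ, so the likelihood factorizes as:

P(x1, x2, …, xn | θ) = P(x1 | θ) P(x2 | θ) … P(xn | θ)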

In some ways it is an application of the inverse law of probability, or Bayes’ theorem.

Let’s take a problem: given the traits of a lead, what are the chances of that lead buying a particular product out of many products?

Probability that a customer will buy product A, given that his age is 27 and his income is $40K = (probability that a lead has age 27 and income $40K, given that he holds product A) × (probability of buying product A irrespective of any constraints) / (probability of a lead falling in the group with age 27 and income $40K).

This can be written as a mathematical equation:

P(θ | x) = P(x | θ) P(θ) / P(x)

where θ is the event “the customer buys product A” and x is the observed set of attributes (age 27, income $40K).
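As a minimal worked sketch of this calculation in R, with made-up counts purely for illustration (none of these numbers come from real data):

## Hypothetical counts, not real data:

n_total   = 1000   ## total historical leads
n_A       = 200    ## leads who bought product A
n_x_and_A = 40     ## leads aged 27 with income $40K who bought A
n_x       = 100    ## leads aged 27 with income $40K overall

p_x_given_A = n_x_and_A / n_A    ## P(x | θ) = 0.2
p_A         = n_A / n_total      ## P(θ)     = 0.2
p_x         = n_x / n_total      ## P(x)     = 0.1

p_x_given_A * p_A / p_x          ## P(θ | x) = 0.4

So, under these made-up counts, the posterior probability that this lead buys product A is 0.4, double the base rate of 0.2.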

How do I know the prediction from the model is correct?

We can apply a model validation approach: instead of concordance or p-values, we can look at the match percentage between predicted and actual classes on the training and validation sets and, most importantly, the misclassification error (a sketch of this calculation follows the prediction step in the code below).

Let’s do a simple exercise to understand this:

I took three attributes, Age, Income and Distance from the Bank (in km), and created dummy data for 150 customers without imposing any statistical assumptions.

I then took three products: Loan, Insurance and Saving Account.
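For reference, here is a minimal sketch of how such dummy data might be simulated in R; the column names, ranges and distributions below are my own assumptions for illustration, not the exact data used in the code that follows.

## Simulate illustrative dummy data (assumed names and ranges):

set.seed(1)
n = 150
Data_Sim = data.frame(
  Age              = sample(21:60, n, replace = TRUE),
  Income           = round(runif(n, 20, 100)),    ## in $K
  DistanceFromHome = round(runif(n, 1, 25), 1),   ## in km
  Product          = sample(c("Loan", "Insurance", "Saving Account"), n, replace = TRUE)
)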

Like any classification prediction and validation problem, the approach here is the same; below is a glimpse of the code and the process for doing it in R.

##Load The data:

Data_Test = read.csv("DataFile.csv", header = TRUE)

##Create training and validation datasets (random half-half split):

x = sample(nrow(Data_Test), nrow(Data_Test) / 2)

dataTrain = Data_Test[x, ]

dataValid = Data_Test[-x, ]

##Call the library:

library("e1071")

names(Data_Test)

##Run the Naive Bayes model:

model = naiveBayes(Credit.Card ~ Gender + Age + DistanceFromHome + Income, data = dataTrain)
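Printing the fitted object is a quick sanity check: for numeric predictors, e1071’s naiveBayes fits a Gaussian per class, so the output shows the a-priori class probabilities along with the class-conditional means and standard deviations.

print(model)   ## a-priori probabilities and per-class feature distributions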

##Test the model on the validation data:

Status = predict(model, dataValid)

## Add the predicted class to the validation dataset:

dataValid$PredictedCreditStatus = Status
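This is also the point where we can compute the match percentage and misclassification error mentioned earlier. A minimal sketch, using the actual classes stored in the Credit.Card column:

## Confusion matrix of actual vs. predicted classes:

confusion = table(Actual = dataValid$Credit.Card, Predicted = dataValid$PredictedCreditStatus)

## Misclassification error = share of off-diagonal predictions:

1 - sum(diag(confusion)) / sum(confusion)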

## Generate the posterior probability for each class:

StatusP = predict(model, dataValid, type = "raw")

bdata = cbind(dataValid, StatusP)

## Write the results as a CSV file in your working directory:

write.csv(bdata, "PredictedNaiveBayesResult.csv")

Choices?

Now consider a simple target marketing problem for a bank: whom should I approach, and with what product type?

At first glance it looks like a simple logistic problem, but what if the data is incomplete, the model does not serve our business purpose, or the model itself turns out to be poor simply because a few of the many statistical assumptions are not met?

In such cases, do we take the liberty of leaving the problem half solved, or do we go with techniques like Naïve Bayes, which will give some insight into what information the data contains?

We will discuss this particular problem in my next blog.

This blog is written by Kunal Jha from BRIDGEi2i
