Naive Bayes Algorithm in Machine Learning
Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values. It is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle. All Naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature given the class variable.
Bayes' Theorem:
Naive Bayes uses Bayes' theorem from probability theory to classify data. The key insight of Bayes' theorem is that the probability of an event can be updated as new evidence is introduced. The most popular application of this idea is spam filtering: a spam filter looks for certain keywords in email messages and moves a message to the spam folder when those keywords make spam the more probable class.
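The keyword idea can be sketched as a toy filter. This is a minimal sketch, not a real filter: the per-word likelihood tables and the fallback probabilities for unseen words are made-up numbers for illustration.

```python
def spam_probability(words, p_spam, p_word_spam, p_word_ham):
    """Posterior P(spam | words) via Bayes' theorem, treating words
    as conditionally independent evidence (the naive assumption)."""
    like_spam = p_spam        # start from the prior P(spam)
    like_ham = 1 - p_spam     # and the prior P(ham)
    for w in words:
        # words missing from the tables fall back to assumed defaults
        like_spam *= p_word_spam.get(w, 0.4)
        like_ham *= p_word_ham.get(w, 0.6)
    # normalise: P(spam|words) = P(words|spam)P(spam) / P(words)
    return like_spam / (like_spam + like_ham)

# hypothetical per-word likelihoods P(word|spam) and P(word|ham)
p_word_spam = {"winner": 0.8, "invoice": 0.2}
p_word_ham = {"winner": 0.1, "invoice": 0.5}

print(spam_probability(["winner"], 0.5, p_word_spam, p_word_ham))   # well above 0.5
print(spam_probability(["invoice"], 0.5, p_word_spam, p_word_ham))  # well below 0.5
```

With a 0.5 prior, a message containing "winner" scores as likely spam, while one containing "invoice" scores as likely ham; the prior shifts both results.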
Naive Bayes Algorithm:
Naive Bayes algorithm provides a way of calculating the posterior probability P(c|x) from P(c), P(x), and P(x|c). Bayes' theorem states:

P(c|x) = [P(x|c) × P(c)] / P(x)

Under the naive independence assumption, the likelihood of a feature vector x = (x1, …, xn) factorises into a product of per-feature likelihoods, so the posterior is proportional to:

P(c|x) ∝ P(x1|c) × P(x2|c) × … × P(xn|c) × P(c)
In the above equation:
1. P(c|x) is the posterior probability of the class (c, target) given the predictor (x, attributes).
2. P(c) is the prior probability of the class.
3. P(x|c) is the likelihood: the probability of the predictor given the class.
4. P(x) is the prior probability of the predictor (the evidence), which is the same for every class and so acts only as a normalising constant.
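All four quantities above can be estimated directly from counts in the training data. Below is a minimal sketch for categorical features, using add-one (Laplace) smoothing so an unseen feature value does not zero out the whole product; the tiny weather dataset and the function names are invented for illustration.

```python
from collections import Counter, defaultdict

def train(samples):
    """Estimate P(c) and the counts needed for P(x_i|c) from (features, label) pairs."""
    class_counts = Counter(label for _, label in samples)
    feature_counts = defaultdict(Counter)   # (class, position) -> value counts
    vocab = defaultdict(set)                # position -> distinct values seen
    for features, label in samples:
        for i, value in enumerate(features):
            feature_counts[(label, i)][value] += 1
            vocab[i].add(value)
    total = sum(class_counts.values())
    priors = {c: n / total for c, n in class_counts.items()}   # P(c)
    return priors, feature_counts, class_counts, vocab

def posterior_scores(features, priors, feature_counts, class_counts, vocab):
    """P(c) * prod_i P(x_i|c) for each class -- unnormalised posteriors,
    since dividing by the constant P(x) does not change the ranking."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for i, value in enumerate(features):
            count = feature_counts[(c, i)][value]
            # add-one smoothing over the values observed at position i
            score *= (count + 1) / (class_counts[c] + len(vocab[i]))
        scores[c] = score
    return scores

# invented toy data: (outlook, temperature) -> play tennis?
samples = [
    (("sunny", "hot"), "no"),
    (("overcast", "hot"), "yes"),
    (("rain", "mild"), "yes"),
    (("sunny", "mild"), "no"),
]
model = train(samples)
scores = posterior_scores(("overcast", "mild"), *model)
print(max(scores, key=scores.get))  # → "yes"
```

The predicted class is simply the one with the highest score; in practice implementations sum log-probabilities instead of multiplying, to avoid numerical underflow when there are many features.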