Cross-entropy is commonly used as a loss function in machine learning. It is a measure from the field of information theory, building upon entropy and generally calculating the di...
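The calculation the excerpt alludes to can be sketched directly. This is a minimal illustration, not code from the post: cross-entropy H(P, Q) for two discrete distributions, computed in nats, with made-up example values.

```python
from math import log

def cross_entropy(p, q):
    """Cross-entropy H(P, Q) in nats between two discrete distributions."""
    return -sum(pi * log(qi) for pi, qi in zip(p, q))

p = [0.5, 0.5]  # illustrative "true" distribution
q = [0.8, 0.2]  # illustrative predicted distribution

# When Q equals P, cross-entropy reduces to the entropy of P (ln 2 here).
print(cross_entropy(p, p))
# When Q differs from P, cross-entropy is strictly larger.
print(cross_entropy(p, q))
```

The gap between the two printed values is exactly the KL divergence from P to Q, which is why cross-entropy works as a loss: minimizing it pushes the predicted distribution toward the true one.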

Last Updated on October 18, 2019. It is often desirable to quantify the difference between probability distributions for a given random variable. This occurs frequently in machine learning, w...
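The standard tool for quantifying the difference between two distributions is the KL divergence; the sketch below (illustrative values, not the post's code) shows the calculation and its asymmetry.

```python
from math import log

def kl_divergence(p, q):
    """KL divergence D_KL(P || Q) in nats for discrete distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.10, 0.40, 0.50]  # illustrative distribution P
q = [0.80, 0.15, 0.05]  # illustrative distribution Q

# KL divergence is zero only when the distributions match,
# and it is not symmetric: D_KL(P||Q) != D_KL(Q||P) in general.
print(kl_divergence(p, p))
print(kl_divergence(p, q))
print(kl_divergence(q, p))
```

Because of the asymmetry, KL divergence is a "divergence" rather than a distance metric; symmetric variants such as the Jensen-Shannon divergence exist when a true metric is needed.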

Last Updated on October 18, 2019. In this tutorial, you will learn about the Naive Bayes algorithm, including how it works and how to implement it from scratch in Python (without librar...
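A from-scratch Naive Bayes can be quite small. The sketch below is one possible minimal version, not the tutorial's own implementation: a Gaussian Naive Bayes over a single feature, with a tiny made-up dataset.

```python
from math import sqrt, pi, exp

# Tiny illustrative dataset: (feature value, class label).
data = [(1.0, 0), (1.2, 0), (0.9, 0), (3.0, 1), (3.2, 1), (2.9, 1)]

def gaussian_pdf(x, mean, std):
    """Likelihood of x under a Gaussian with the given mean and std."""
    return exp(-((x - mean) ** 2) / (2 * std ** 2)) / (sqrt(2 * pi) * std)

def fit(data):
    """Training step: per-class mean, std, and prior probability."""
    stats = {}
    for label in set(y for _, y in data):
        xs = [x for x, y in data if y == label]
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        stats[label] = (mean, sqrt(var), len(xs) / len(data))
    return stats

def predict(stats, x):
    """Pick the class maximizing prior * likelihood (the 'naive' score)."""
    return max(stats, key=lambda c: stats[c][2] * gaussian_pdf(x, stats[c][0], stats[c][1]))

model = fit(data)
print(predict(model, 1.1))  # falls among the class-0 values
print(predict(model, 3.1))  # falls among the class-1 values
```

With more than one feature, the "naive" conditional-independence assumption lets you multiply one such likelihood per feature instead of modeling the features jointly.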

Information gain calculates the reduction in entropy, or surprise, from transforming a dataset in some way. It is commonly used in the construction of decision trees from a training dataset, b...
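The "reduction in entropy" can be shown in a few lines. This is an illustrative sketch, not the post's code: entropy of a set of class labels, and the gain from splitting it into groups, as a decision tree would when choosing a split.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy of the parent minus the weighted entropy of the split groups."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

parent = [0, 0, 1, 1]  # illustrative labels, maximally mixed

# A perfectly separating split removes all uncertainty (gain of 1 bit);
# a split that leaves both groups mixed removes none (gain of 0 bits).
print(information_gain(parent, [[0, 0], [1, 1]]))
print(information_gain(parent, [[0, 1], [0, 1]]))
```

Decision-tree learners such as ID3 evaluate exactly this quantity for each candidate split and greedily pick the split with the largest gain.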

Information theory is a subfield of mathematics concerned with transmitting data across a noisy channel. A cornerstone of information theory is the idea of quantifying how much information t...
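The cornerstone idea the excerpt mentions, quantifying the information in an event, reduces to a one-line formula; the sketch below (illustrative, not from the post) computes it in bits.

```python
from math import log2

def information(p):
    """Shannon information (surprise) of an event with probability p, in bits."""
    return -log2(p)

# Rarer events carry more information than common ones.
print(information(0.5))    # a fair coin flip: 1 bit
print(information(1 / 6))  # one face of a fair die: ~2.585 bits
```

Averaging this quantity over all outcomes of a random variable gives its entropy, which is the bridge from single events to the cross-entropy and information-gain measures discussed in the other posts.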

Probabilistic models can define relationships between variables and be used to calculate probabilities. For example, fully conditional models may require an enormous amount of data to cover...
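The claim that fully conditional models need enormous amounts of data can be made concrete by counting the probabilities each model must estimate; the numbers below are illustrative, assuming binary variables.

```python
def full_joint_size(n):
    """Probabilities needed for a full joint distribution over n binary variables."""
    return 2 ** n - 1  # one per joint outcome, minus the sum-to-one constraint

def independent_size(n):
    """Probabilities needed if all n binary variables are treated as independent."""
    return n  # one per variable

# The full joint grows exponentially; the independent model grows linearly.
for n in (10, 20, 30):
    print(n, full_joint_size(n), independent_size(n))
```

Structured models such as Bayesian networks sit between these extremes: they encode only the conditional independencies that actually hold, keeping the parameter count tractable.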

Last Updated on October 9, 2019. A gentle introduction to Bayesian Optimization. Global optimization is the challenging problem of finding an input that results in the minimum or maxim...

Last Updated on October 7, 2019. Classification is a predictive modeling problem that involves assigning a label to a given input data sample. The problem of classification predictive modelin...

Last Updated on October 4, 2019. Bayes Theorem provides a principled way for calculating a conditional probability. It is a deceptively simple calculation, although it can be used to easily c...
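The "deceptively simple calculation" fits in a few lines. The sketch below is illustrative, not the post's code; the diagnostic-test numbers (1% base rate, 90% sensitivity, 5% false-positive rate) are assumed for the example.

```python
def bayes_theorem(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes Theorem, with P(B) expanded by total probability."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Assumed example: base rate P(disease) = 1%, P(positive|disease) = 90%,
# P(positive|no disease) = 5%.
posterior = bayes_theorem(0.01, 0.90, 0.05)
print(posterior)
```

Despite the sensitive test, the posterior here is only about 15%, a standard illustration of why the base rate dominates when a condition is rare.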

Probability for Machine Learning Crash Course. Get on top of the probability used in machine learning in 7 days. Probability is a field of mathematics that is universally agreed to be the bed...