Types of Generative Models in Machine Learning
Generative Models in Machine Learning:
Generative models are one of the most promising approaches to learning the underlying distribution of a dataset. To train a generative model, we first collect a large amount of data in some domain and then train a model to generate data like it. In machine learning, generative models are used either to model data directly or as an intermediate step toward forming a conditional probability density function.
Types of Generative Models:
There are mainly 7 types of Generative Models in Machine Learning:
1. Auto-regressive Model:
An autoregressive model is one in which a value from a time series is regressed on previous values from that same series. The order of an autoregression is the number of immediately preceding values used to calculate the value at the present time. In simple words, autoregressive models predict future values based on past values. These models are flexible in handling a wide range of time-series patterns.
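A minimal sketch of this idea: fitting an AR(2) model to a synthetic series by ordinary least squares. The coefficients 0.6 and -0.2, the series length, and the noise scale are illustrative assumptions, not part of any standard dataset.

```python
import numpy as np

# Synthetic AR(2) series: x[t] = 0.6*x[t-1] - 0.2*x[t-2] + noise
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.normal(scale=0.1)

# Estimate the coefficients by least squares: each row of the
# design matrix holds the two immediately preceding values.
X = np.column_stack([x[1:-1], x[:-2]])   # lags 1 and 2
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to the true coefficients [0.6, -0.2]

# One-step-ahead forecast from the last two observations
forecast = coef[0] * x[-1] + coef[1] * x[-2]
```

The order of the model (here 2) is exactly the number of lagged columns in the design matrix.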
2. Bayes Network:
A Bayes network is a generative probabilistic graphical model that allows efficient and effective representation of the joint probability distribution over a set of random variables. It consists of two main parts: structure and parameters. The structure is a directed acyclic graph (DAG), and the parameters are the conditional probability distributions associated with each node. Bayes networks are used for applications such as time series prediction, anomaly detection, and reasoning under uncertainty.
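The DAG-plus-parameters idea can be sketched in plain Python. The classic rain/sprinkler/wet-grass network below, and all its probability tables, are illustrative assumptions; the joint distribution factorizes along the graph as P(R, S, W) = P(R) P(S) P(W | R, S).

```python
# Parameters: one (conditional) probability table per node of the DAG
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.3, False: 0.7}
# P(WetGrass=True | Rain, Sprinkler)
p_wet_given = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    # Joint probability factorized along the DAG structure
    pw = p_wet_given[(r, s)] if w else 1 - p_wet_given[(r, s)]
    return p_rain[r] * p_sprinkler[s] * pw

# Inference by enumeration: P(Rain=True | WetGrass=True)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
posterior = num / den
print(posterior)
```

Because the joint factorizes over the DAG, each table stays small even as the number of variables grows.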
3. Mixture Model:
A mixture model is a probabilistic model for representing the presence of sub-populations within an overall population, without requiring that an observed data set identify the sub-population to which an individual observation belongs. Formally, a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. Mixture models are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identifying information.
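The "hidden sub-population" idea can be sketched by sampling from a two-component Gaussian mixture. The weights, means, and standard deviations below are illustrative assumptions.

```python
import numpy as np

# Two sub-populations: with probability 0.3 draw from N(-2, 0.5^2),
# otherwise from N(3, 1^2).
rng = np.random.default_rng(1)
n = 10_000
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 3.0])
stds = np.array([0.5, 1.0])

# First draw each observation's (hidden) sub-population label...
labels = rng.choice(2, size=n, p=weights)
# ...then draw the observation from that component's Gaussian.
samples = rng.normal(means[labels], stds[labels])

# Pooled sample mean approaches the mixture mean: 0.3*(-2) + 0.7*3 = 1.5
print(samples.mean())
```

Note that `labels` is discarded in practice: the statistical task is to recover properties of the sub-populations from `samples` alone.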
4. Latent Variable Model:
A latent variable model is a statistical model that relates a set of observable variables to a set of latent variables. An observable variable, also called a manifest variable, is one that can be observed and directly measured; a latent variable cannot. Latent variable models are used in psychology, demography, economics, engineering, medicine, machine learning, artificial intelligence, bioinformatics, natural language processing, etc.
5. Gaussian Model:
A Gaussian mixture model is a generative probabilistic model. It assumes that all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. It is commonly used as a parametric model of the probability distribution of features in a biometric system.
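A minimal 1-D expectation-maximization (EM) fit is sketched below, assuming synthetic data drawn from two components; libraries such as scikit-learn provide a production-ready `GaussianMixture` for real use.

```python
import numpy as np

# Synthetic data from two Gaussian components (illustrative parameters)
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 700)])

# Initial guesses for weights, means, and variances
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = w * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) \
             / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    nk = resp.sum(axis=0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk

print(mu)  # component means near -2 and 3
```

The responsibilities play the role of the unknown sub-population labels, which makes the GMM a concrete example of a latent variable model.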
6. Hidden Markov Model:
A Hidden Markov Model (HMM) is a statistical model widely used in many fields, especially in sequential data analysis and time series modeling. An HMM is designed to model systems that are assumed to be Markovian: the future state of the system depends only on the current state and is independent of earlier states. It consists of two main components: hidden states and observable states. The hidden states represent the underlying, unobservable states of the system, while the observable states are the states we can directly observe.
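A standard computation on an HMM is the likelihood of an observation sequence via the forward algorithm. The two-state weather model below, and all its probabilities, are illustrative assumptions.

```python
import numpy as np

# Toy HMM: hidden states {Rainy, Sunny}, observations {walk, clean, shop}
start = np.array([0.6, 0.4])                 # initial state distribution
trans = np.array([[0.7, 0.3],                # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5],            # P(observation | state)
                 [0.6, 0.3, 0.1]])

obs = [0, 2, 1]  # observed sequence: walk, clean, shop

# Forward recursion: alpha[i] = P(observations so far, current state = i)
alpha = start * emit[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ trans) * emit[:, o]

likelihood = alpha.sum()
print(likelihood)  # probability of the full observation sequence
```

Each recursion step applies the Markov assumption: the new `alpha` depends only on the previous `alpha`, not on the whole history.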
7. Flow-Based Model:
Flow-based models are generative models that directly model the data distribution through a series of invertible transformations, that is, mathematical operations that can be both applied and reversed without any loss of information. The core idea of flow-based models lies in the concept of normalizing flows. Flow-based models can capture complex data distributions by chaining multiple invertible transformations together.
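A minimal normalizing-flow building block is an invertible affine map z → x = z·exp(s) + t, whose inverse and log-determinant are exact. The parameters s and t below are illustrative assumptions; real flows learn many such layers.

```python
import numpy as np

s, t = 0.5, 1.0  # illustrative flow parameters

def forward(z):
    # Invertible transformation applied to base samples
    return z * np.exp(s) + t

def inverse(x):
    # Exact inverse: no information is lost
    return (x - t) * np.exp(-s)

def log_det_jacobian(z):
    # d forward / d z = exp(s), so log|det J| = s per dimension
    return s * np.ones_like(z)

z = np.array([-1.0, 0.0, 2.0])
x = forward(z)
print(inverse(x))  # recovers z: applied and reversed without loss

# Change of variables: log p_x(x) = log p_z(z) - log|det J|
log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard normal base density
log_px = log_pz - log_det_jacobian(z)
```

Chaining several such layers simply composes the forward maps and sums the log-determinants, which is how flow-based models reach complex distributions.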