Implements the Universal Background Model (UBM) training
This algorithm implements the Universal Background Model (UBM) training for Gaussian Mixture Models (GMMs) described in [Reynolds2000].
First, it estimates the means, diagonal covariance matrices, and weights of the Gaussian components using k-means clustering. Then, only the means are re-estimated using the Maximum Likelihood (ML) estimator.
This algorithm relies on the Bob library.
The input, features, is a training set of floating-point vectors given as a two-dimensional array of 64-bit floats: each row is one training sample and each column is one dimension of the feature space. The output, ubm, is the GMM trained with the ML estimator.
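The actual implementation uses the Bob library; as an illustrative, library-independent sketch of the two-stage recipe described above, the following NumPy code initializes a diagonal-covariance GMM with k-means (scikit-learn is used only for that step) and then runs EM iterations that re-estimate the means only. The function name `train_ubm` and all parameter defaults are hypothetical, not part of Bob's API.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_ubm(features, n_gaussians=2, n_iter=25, seed=0):
    """Two-stage UBM sketch: k-means initializes means, diagonal
    variances, and weights; EM (ML estimation) then updates the
    means only, as in the algorithm description above."""
    n, d = features.shape

    # Stage 1: k-means gives the initial means; per-cluster statistics
    # give the initial diagonal variances and weights.
    km = KMeans(n_clusters=n_gaussians, n_init=10, random_state=seed).fit(features)
    means = km.cluster_centers_.copy()
    variances = np.empty((n_gaussians, d))
    weights = np.empty(n_gaussians)
    for k in range(n_gaussians):
        members = features[km.labels_ == k]
        variances[k] = members.var(axis=0) + 1e-6  # variance floor for stability
        weights[k] = len(members) / n

    # Stage 2: EM iterations that re-estimate only the means.
    for _ in range(n_iter):
        # E-step: log-responsibilities under diagonal Gaussians.
        log_p = -0.5 * (
            ((features[:, None, :] - means[None]) ** 2 / variances[None]).sum(-1)
            + np.log(2 * np.pi * variances).sum(-1)[None]
        ) + np.log(weights)[None]
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        resp = np.exp(log_p - log_norm)
        # M-step: update the means; weights and variances stay fixed.
        means = resp.T @ features / resp.sum(0)[:, None]

    return means, variances, weights
```

With a `features` array of shape `(n_samples, n_dimensions)` as described above, `train_ubm(features, n_gaussians)` returns the trained means together with the k-means-derived variances and weights. Note that a full ML-trained GMM (as in Bob) would normally also update variances and weights; this sketch follows the means-only refinement stated in this description.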
[Reynolds2000] Reynolds, Douglas A., Thomas F. Quatieri, and Robert B. Dunn. "Speaker verification using adapted Gaussian mixture models." Digital Signal Processing 10.1 (2000): 19-41.