Implements multi-layer perceptron (MLP) scoring
Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.
Parameters allow users to change the configuration of an algorithm when scheduling an experiment.
The code for this algorithm is written in Python.
This algorithm implements a scoring procedure for a multi-layer perceptron (MLP) [Bishop] [Duda], a feed-forward neural network architecture.
This implementation relies on the Bob library.
The inputs are:
The output, ``scores``, is the corresponding set of score values.
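To illustrate how such a scoring procedure maps input features to score values, here is a minimal NumPy sketch of an MLP feed-forward pass. This is only an illustration of the technique: the function and parameter names (``mlp_score``, ``weights``, ``biases``) are assumptions for this example, and the actual algorithm relies on the Bob library rather than this code.

```python
import numpy as np

def mlp_score(features, weights, biases):
    """Feed-forward pass of an MLP: each layer computes tanh(x @ W + b).

    Illustrative sketch only; the real algorithm uses the Bob library.
    """
    x = np.asarray(features, dtype=float)
    for W, b in zip(weights, biases):
        # One fully-connected layer followed by a tanh activation
        x = np.tanh(x @ W + b)
    return x

# Tiny example: 2 input features -> 3 hidden units -> 1 score per sample
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 3)), rng.standard_normal((3, 1))]
biases = [np.zeros(3), np.zeros(1)]
scores = mlp_score([[0.5, -1.0], [1.5, 2.0]], weights, biases)
print(scores.shape)  # one score per input sample
```

Because the final activation is a tanh, every score in this sketch lies in the open interval (-1, 1); a real deployment would pick the output activation to match the expected score range.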
.. [Bishop] Pattern Recognition and Machine Learning, C.M. Bishop, chapter 5
.. [Duda] Pattern Classification, Duda, Hart and Stork, chapter 6