Reference
minimize(loss, train[, valid, params, ...])
    Minimize a loss function with respect to some symbolic parameters.
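
For example, a minimal sketch of fitting a linear least-squares model with minimize; the inputs, algo, and batch_size keyword arguments are elided in the summary above, so their names are assumptions here:

    import numpy as np
    import theano
    import theano.tensor as TT
    import downhill

    floatX = theano.config.floatX

    # Symbolic inputs and a shared parameter vector to optimize.
    x = TT.matrix('x')
    y = TT.vector('y')
    w = theano.shared(np.zeros(10, dtype=floatX), name='w')

    # Mean squared error of a linear model.
    loss = TT.sqr(TT.dot(x, w) - y).mean()

    # Training data: one array per symbolic input, in the same order.
    train = [np.random.randn(100, 10).astype(floatX),
             np.random.randn(100).astype(floatX)]

    # The inputs, algo, and batch_size keyword names are assumptions;
    # consult the full signature for the supported options.
    downhill.minimize(loss, train, inputs=[x, y], params=[w],
                      algo='rmsprop', batch_size=16)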
Base
This module defines a base class for optimization techniques.
build(algo, loss[, params, inputs, updates, ...])
    Construct an optimizer by name.
Optimizer(loss[, params, inputs, updates, ...])
    An optimizer computes gradient updates to iteratively optimize a loss.
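
A sketch of the two-step interface under a toy least-squares setup: build constructs an Optimizer by name, while the iterate method and its per-step monitor dictionaries are assumptions used here to illustrate a custom stopping rule:

    import numpy as np
    import theano
    import theano.tensor as TT
    import downhill

    floatX = theano.config.floatX

    # Toy least-squares setup.
    x = TT.matrix('x')
    y = TT.vector('y')
    w = theano.shared(np.zeros(10, dtype=floatX), name='w')
    loss = TT.sqr(TT.dot(x, w) - y).mean()
    train = [np.random.randn(100, 10).astype(floatX),
             np.random.randn(100).astype(floatX)]

    # build() looks the optimizer up by name and returns an Optimizer.
    opt = downhill.build('sgd', loss=loss, params=[w], inputs=[x, y])

    # Assumed iteration interface: each step yields dictionaries of
    # monitored values, enabling a custom stopping condition.
    for train_monitors, valid_monitors in opt.iterate(train):
        if train_monitors['loss'] < 1e-3:  # the 'loss' key is an assumption
            break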
First-Order Optimizers
This module defines first-order gradient descent optimizers.
SGD(loss[, params, inputs, updates, ...])
    Basic optimization using stochastic gradient descent.
NAG(loss[, params, inputs, updates, ...])
    Stochastic gradient optimization with Nesterov momentum.
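
As a sketch, both optimizers might be selected through minimize by a lowercase algorithm name; the 'sgd' and 'nag' names and the learning_rate and momentum keyword arguments are assumptions:

    import numpy as np
    import theano
    import theano.tensor as TT
    import downhill

    floatX = theano.config.floatX

    # Toy objective: pull the parameter vector toward the data mean.
    x = TT.matrix('x')
    w = theano.shared(np.zeros(3, dtype=floatX), name='w')
    loss = TT.sqr(x - w).mean()
    data = [np.random.randn(50, 3).astype(floatX)]

    # Plain SGD versus Nesterov momentum, selected by (assumed) lowercase
    # name; the learning_rate and momentum keyword names are assumptions.
    downhill.minimize(loss, data, inputs=[x], params=[w],
                      algo='sgd', learning_rate=0.1)
    downhill.minimize(loss, data, inputs=[x], params=[w],
                      algo='nag', learning_rate=0.1, momentum=0.9)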
Adaptive Optimizers
This module defines gradient descent optimizers with adaptive learning rates.
ADADELTA(loss[, params, inputs, updates, ...])
    ADADELTA optimizer.
ADAGRAD(loss[, params, inputs, updates, ...])
    ADAGRAD optimizer.
Adam(loss[, params, inputs, updates, ...])
    Adam optimizer using unbiased gradient moment estimates.
ESGD(*args, **kwargs)
    Equilibrated SGD computes a diagonal Hessian preconditioner.
RMSProp(loss[, params, inputs, updates, ...])
    RMSProp optimizer.
RProp(loss[, params, inputs, updates, ...])
    Resilient backpropagation optimizer.
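
The adaptive optimizers are constructed the same way as the first-order ones; in this sketch the 'adam' registry name, the learning_rate keyword, and the minimize method on the returned optimizer are assumptions:

    import numpy as np
    import theano
    import theano.tensor as TT
    import downhill

    floatX = theano.config.floatX

    # Same toy objective as above.
    x = TT.matrix('x')
    w = theano.shared(np.zeros(3, dtype=floatX), name='w')
    loss = TT.sqr(x - w).mean()
    data = [np.random.randn(50, 3).astype(floatX)]

    # Build an adaptive optimizer by its (assumed) lowercase name, then
    # run it to convergence; learning_rate and minimize() are assumptions.
    opt = downhill.build('adam', loss=loss, params=[w], inputs=[x])
    opt.minimize(data, learning_rate=1e-3)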
Datasets
This module contains a class for handling batched datasets.
In many optimization tasks, parameters are updated iteratively with respect to estimates of a loss function, and for most problems the loss is estimated from a set of measured data.
Dataset(inputs[, name, batch_size, ...])
    This class handles batching and shuffling a dataset.
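
A sketch of wrapping raw arrays for minibatch training; the name and batch_size arguments follow the signature above, while iterating directly over the Dataset is an assumed behaviour:

    import numpy as np
    import downhill

    # Wrap raw arrays in a Dataset to get shuffled minibatches.
    features = np.random.randn(200, 10).astype('float32')
    targets = np.random.randn(200).astype('float32')
    train = downhill.Dataset([features, targets], name='train', batch_size=20)

    # Assumed behaviour: each iteration step yields one batch of arrays,
    # in the same order as the wrapped inputs.
    for xs, ys in train:
        print(xs.shape, ys.shape)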