Reference
Base
This module defines a base class for optimization techniques.
build(algo, loss, params, inputs[, updates, ...])
    Construct an optimizer by name.
Optimizer(loss, params, inputs[, updates, ...])
    An optimizer computes gradient updates to iteratively optimize a loss.
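To make the relationship between build and Optimizer concrete, here is a minimal sketch of the name-based construction pattern that build provides. The registry, the placeholder SGD subclass, and the constructor body below are illustrative assumptions, not this module's actual internals:

    # Illustrative sketch of name-based optimizer construction.
    class Optimizer:
        """Base class: holds the loss, parameters, and inputs to optimize."""

        def __init__(self, loss, params, inputs, updates=()):
            self.loss = loss
            self.params = params
            self.inputs = inputs
            self.updates = list(updates)

    class SGD(Optimizer):
        """Placeholder subclass standing in for a real optimizer."""

    _ALGORITHMS = {'sgd': SGD}

    def build(algo, loss, params, inputs, updates=()):
        """Look up an Optimizer subclass by name and instantiate it."""
        try:
            cls = _ALGORITHMS[algo.lower()]
        except KeyError:
            raise KeyError('unknown optimization algorithm: %r' % algo)
        return cls(loss, params, inputs, updates=updates)

    # opt = build('sgd', loss, params, inputs)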
First-Order Optimizers
This module defines first-order gradient descent optimizers.
SGD(loss, params, inputs[, updates, ...])
    Optimize using stochastic gradient descent with momentum.
NAG(loss, params, inputs[, updates, ...])
    Optimize using Nesterov’s Accelerated Gradient (NAG).
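The update rules behind these two optimizers differ only in where the gradient is evaluated. Below is a small NumPy sketch of classical momentum (as in SGD) and Nesterov's look-ahead variant; the function names, hyperparameter defaults, and toy quadratic loss are assumptions for illustration, not this module's API:

    import numpy as np

    def sgd_momentum_step(param, velocity, grad_fn, lr=0.01, momentum=0.9):
        """Classical momentum: v <- mu * v - lr * g(p); p <- p + v."""
        velocity = momentum * velocity - lr * grad_fn(param)
        return param + velocity, velocity

    def nag_step(param, velocity, grad_fn, lr=0.01, momentum=0.9):
        """Nesterov: evaluate the gradient at the look-ahead point p + mu * v."""
        lookahead = param + momentum * velocity
        velocity = momentum * velocity - lr * grad_fn(lookahead)
        return param + velocity, velocity

    # Toy example: minimize f(p) = 0.5 * ||p||^2, whose gradient is p itself.
    p, v = np.array([5.0, -3.0]), np.zeros(2)
    for _ in range(100):
        p, v = nag_step(p, v, lambda q: q)
    print(p)  # approaches the minimizer [0, 0]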
Adaptive Optimizers
This module defines gradient descent optimizers with adaptive learning rates.
ADADELTA(loss, params, inputs[, updates, ...])
    ADADELTA optimizes scalar losses using scaled stochastic gradient steps.
Adam(loss, params, inputs[, updates, ...])
    Adam optimizes using per-parameter adaptive learning rates.
ESGD(*args, **kwargs)
    Equilibrated SGD computes a diagonal preconditioner for gradient descent.
RMSProp(loss, params, inputs[, updates, ...])
    RMSProp optimizes scalar losses using scaled gradient steps.
RProp(loss, params, inputs[, updates, ...])
    Optimize using resilient backpropagation.
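For reference, here is a NumPy sketch of two of the adaptive rules summarized above. The bookkeeping (a moving average of squared gradients for RMSProp, bias-corrected moment estimates for Adam) is standard; the function names and hyperparameter defaults are common textbook choices and are not guaranteed to match this module's defaults:

    import numpy as np

    def rmsprop_step(param, g2, grad, lr=0.001, decay=0.9, eps=1e-8):
        """RMSProp: scale each step by a moving average of squared gradients."""
        g2 = decay * g2 + (1 - decay) * grad ** 2
        return param - lr * grad / (np.sqrt(g2) + eps), g2

    def adam_step(param, m, v, grad, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        """Adam: per-parameter steps from bias-corrected moment estimates."""
        m = b1 * m + (1 - b1) * grad          # first moment (mean) estimate
        v = b2 * v + (1 - b2) * grad ** 2     # second moment estimate
        m_hat = m / (1 - b1 ** t)             # correct init bias; t starts at 1
        v_hat = v / (1 - b2 ** t)
        return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

ADADELTA follows the same pattern as RMSProp but additionally tracks a moving average of squared parameter updates, which eliminates the global learning rate.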
Datasets
This module contains a class for handling batched datasets.
In many optimization tasks, parameters are updated iteratively with respect to estimates of a loss function, and for many problems these estimates are computed from a set of measured data. Iterating over that data in shuffled mini-batches produces the stochastic loss estimates that the optimizers above consume.
Dataset(inputs[, name, batch_size, ...])
    This class handles batching and shuffling a dataset.
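To illustrate the behavior described above, here is a minimal NumPy sketch of a batching-and-shuffling dataset; the class name and body are illustrative assumptions rather than this module's implementation:

    import numpy as np

    class MiniDataset:
        """Illustrative stand-in: yields shuffled mini-batches of an array."""

        def __init__(self, inputs, batch_size=32, rng=None):
            self.inputs = np.asarray(inputs)
            self.batch_size = batch_size
            self.rng = rng or np.random.default_rng()

        def __iter__(self):
            order = self.rng.permutation(len(self.inputs))
            for start in range(0, len(order), self.batch_size):
                yield self.inputs[order[start:start + self.batch_size]]

    # Each pass visits every example once, in a fresh random order.
    for batch in MiniDataset(np.arange(10).reshape(10, 1), batch_size=4):
        print(batch.ravel())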