- Documentation updates.
- Added developmental branch for Sonnet 2.
- `setup.py` is now simpler. The `dm-sonnet-gpu` package has been retired; all of its functionality has been available in `dm-sonnet` for some time now.
- Use `tf_inspect` instead of `inspect` for inspecting arguments.
- Documentation updates.
- Tiny Shakespeare data is now compressed.
- `setup.py` now uses environment variables for GPU builds.
- Removed unused `wrap_rnn_cell_class` decorator from `rnn_core`.
- Add deprecation warning for layer norm in `ConvLSTM`.
- Modules now support named outputs.
- Switch to using Python 3 as default.
- Added `custom_getter` parameter to `snt.ConvNet2DTranspose`. A custom getter can now be supplied when transposing an existing `ConvNet2D` or `ConvNet2DTranspose` (see the sketch below).
- Fixed a regression in `bayes_by_backprop_getter` caused by a change in the `tfp.Distribution` API.
- Changes in test sizes.
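For illustration, a minimal sketch of supplying a custom getter when transposing a `ConvNet2D`; the pass-through getter and the exact `transpose` keyword are illustrative assumptions, so check them against the release:

```python
import sonnet as snt
import tensorflow as tf

def passthrough_getter(getter, *args, **kwargs):
  # Hypothetical custom getter: intercepts every variable request.
  return getter(*args, **kwargs)

net = snt.nets.ConvNet2D(output_channels=[16, 32], kernel_shapes=[3],
                         strides=[1], paddings=[snt.SAME])
images = tf.placeholder(tf.float32, [None, 28, 28, 1])
features = net(images)

# The transposed network's variables are now created via the custom getter.
decoder = net.transpose(custom_getter=passthrough_getter)
reconstruction = decoder(features)
```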
- Added a class to linearly transform the concatenation of a list of Tensors.
- Extended support for lower precision inputs in Layer Norm.
- Improved Python 3 support.
- Fixed dependency issues in `setup.py`.
- Module connection stacks are now thread-local.
- Nested modules now register reused variables in `_all_variables`.
- `snt.ACTCore` now allows the user to specify a maximum number of pondering steps.
- Added an end-to-end example of how to train an MLP with Sonnet.
- `reuse_vars` is no longer an experimental decorator; change `snt.experimental.reuse_vars` to `@snt.reuse_variables` (see the sketch below).
- Changed how documentation is handled. We no longer use Sphinx to render the page, and use MkDocs instead. The markdown files used to generate it are now in the `docs` directory and available to read on GitHub.
- Change documentation template.
- Fix incorrect package name in `setup.py.tmpl`.
- kwargs are forwarded to submodules in `DeepRNN`.
- Add `supports_kwargs` function to reflect on whether a module can support a certain kwarg.
- Filter kwargs passed to the normalizer in `ConvNet2D`/`ConvNet2DTranspose`. Any kwargs definitely not supported by the lower-level function will be removed. If the normalizer has a generic argspec (e.g. if it has `**kwargs` in the signature) then nothing will be removed.
- Support non-2D recurrent state in pondering RNN.
- Added a logging utility function.
- Various doc & typo fixes.
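As a reminder of the promoted decorator, a minimal sketch (class and method names here are illustrative):

```python
import sonnet as snt
import tensorflow as tf

class AddBiasReusable(snt.AbstractModule):
  """Adds a shared learned offset to any tensor passed in."""

  def __init__(self, name="add_bias_reusable"):
    super(AddBiasReusable, self).__init__(name=name)

  @snt.reuse_variables
  def add_x(self, tensor):
    # `x` is created on the first call and reused on every later call.
    x = tf.get_variable("x", shape=(), initializer=tf.zeros_initializer())
    return tensor + x

  def _build(self, inputs):
    return self.add_x(inputs)

mod = AddBiasReusable()
a = mod.add_x(tf.constant(1.0))
b = mod.add_x(tf.constant(2.0))  # Reuses the same `x` variable.
```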
- Backwards-incompatible change: `Embed` now uses a standard deviation of 1 to initialize embedding variables.
- Allow normalization scheme to be customised in `ConvNet2D`.
- Print type of variables (legacy/resource) in `log_variables()`.
- Support more padding options in `_ConvND`.
- Allow `LayerNorm` to use >2-dimensional input.
- Make `SkipConnectionCore` and `ResidualCore` explicitly call the initial-state and zero-state methods of the wrapped cores.
- Fix key scaling in `RelationalMemory`.
- Ensure that dependency version checks are made before libraries are imported.
- Improve Eager support.
- Allow Sonnet modules to `defun`-wrap their `reuse_variables` methods.
- Depend on `tensorflow_probability` rather than `tf.contrib.distributions`.
- VQ-VAE: allow unknown batch dimension.
- Change axis to -1 in concat in `DeepRNN` when using `skip_connections`.
- Add `rate` argument to the `SeparableConv[1,2]D` classes.
- Add argument-overriding custom getter that only updates defaults.
- Add dropout & `clone()` to `MLP`.
- Add Learn to Execute example for Relational Memory Core to examples.
- Add `snt.count_variables_by_type()` (see the sketch below).
- Documentation fixes.
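A quick sketch of the new helper, assuming it summarises the graph's variables grouped by dtype:

```python
import sonnet as snt
import tensorflow as tf

mlp = snt.nets.MLP(output_sizes=[64, 10])
mlp(tf.placeholder(tf.float32, [None, 32]))

# Summarises variables grouped by dtype; handy for checking a model's
# parameter budget at a glance.
print(snt.count_variables_by_type())
```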
- Better integration with TensorFlow's Eager mode.
- `snt.LSTMBlockCell` is now a class.
- Added semantic versioning to check compatibility with TensorFlow.
- `RNNCellWrapper` and `wrap_rnn_cell_class` are now in the public namespace.
- A TensorFlow RNNCell can now be wrapped as an RNNCore.
- `ConvLSTM` corrected to apply bias only once.
- `Conv1DLSTM` and `Conv2DLSTM` now support layer norm.
- Added `override_args` custom getter (see the sketch below).
- `snt.reuse_variables` now keeps the signature of the original method.
- `VectorQuantizerEMA` now supports returning `encoding_indices`.
- Added a bidirectional recurrent core.
- Added support for `tf.bfloat16`.
- Added a demo of the Relational Memory Core on the "nth farthest" task from "Relational recurrent neural networks" (Santoro et al., 2018; https://arxiv.org/abs/1806.01822).
- Added option to densify gradients in `snt.Embed`.
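A hedged sketch of the new getter; the exact keyword arguments accepted by `snt.custom_getters.override_args` should be checked against its docstring:

```python
import sonnet as snt
import tensorflow as tf

# Force every variable created inside the module to be non-trainable,
# overriding whatever the module itself requests.
getter = snt.custom_getters.override_args(trainable=False)
linear = snt.Linear(output_size=128, custom_getter=getter)
outputs = linear(tf.placeholder(tf.float32, [None, 64]))
```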
- Add `snt.RelationalMemory`, an implementation of "Relational Recurrent Neural Networks" (Santoro et al., 2018).
- Fix error message in `snt.DeepRNN`.
- Fix Python 3 compatibility issues in VQVAE notebook.
- Make Bayesian RNN tests more hermetic.
- Make the convolutional module use private member variables if possible.
- Add VQ-VAE plus EMA variant (see the sketch after this list).
- Add Jupyter notebook demonstrating VQ-VAE training.
- Add `snt.SeparableConv1D`.
- `ConvNet2D` now supports custom getters.
- Add `snt.summarize_variables`.
- Merge `SeparableConv2D` into `_ConvND`.
- Fix `brnn_ptb` for Python 3.
- Refactoring of convolutional network modules: `snt.InPlaneConv2D` now uses `_Conv2D`, and `snt.DepthwiseConv2D` uses `_ConvND`.
- Remove `skip_connection` from `snt.ConvLSTM`.
- Updated tests to conform to new versions of NumPy and TensorFlow.
- Documentation fixes.
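For context across the VQ-VAE items above, a minimal training-time sketch; shapes, constructor arguments, and output keys are illustrative assumptions to verify against the module docs:

```python
import sonnet as snt
import tensorflow as tf

# Hypothetical encoder output; the VQ layer quantizes the last dimension.
encoder_output = tf.placeholder(tf.float32, [None, 8, 8, 64])

vq = snt.nets.VectorQuantizerEMA(
    embedding_dim=64,
    num_embeddings=512,
    commitment_cost=0.25,
    decay=0.99)
vq_output = vq(encoder_output, is_training=True)

# The returned dict includes the straight-through quantized tensor and an
# auxiliary loss to add to the reconstruction objective.
quantized = vq_output["quantize"]
vq_loss = vq_output["loss"]
```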
- Changed the way `reuse_variables` handles name scopes.
- `.get_variable_scope()` now supports a root scope (empty string).
- `snt.ConvNet2D` can now take an optional argument specifying dilation rates.
- `.get_all_variables()` now returns variables sorted by name.
- `.get_all_variables()` method added to `AbstractModule`. This returns all variables which affect the computation of a module, whether they were declared internally or passed into the module's constructor (see the sketch below).
- Various bug fixes and documentation improvements.
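A minimal sketch of the new method (module choice is illustrative):

```python
import sonnet as snt
import tensorflow as tf

lin = snt.Linear(output_size=64)
lin(tf.placeholder(tf.float32, [None, 32]))

# Returns every variable affecting the module's computation, whether
# created internally or passed in from outside.
print([v.name for v in lin.get_all_variables()])
```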
This version requires TensorFlow version 1.5.0.
- Custom getters added for Bayes-by-Backprop, along with an example which reproduces a paper experiment.
- Refactored and improved tests for convolutional modules.
- Enable `custom_getter` for `snt.TrainableVariable`.
- Support for `tf.ResourceVariable` in `snt.Conv2D`.
- `snt.LSTM` now returns a namedtuple for state.
- Added support for `tf.float16` inputs in convolution and BatchNorm modules.
- `snt.Conv3D` now initializes biases to zero by default.
- Added LSTM with recurrent dropout & zoneout.
- Changed behavior of `snt.Sequential` to act as identity when it contains no layers.
- Implemented `snt.BatchNormV2`, with a different interface and more sensible default behavior than `snt.BatchNorm`.
- Added a Recurrent Highway Network cell (`snt.HighwayCore`).
- Refactored convolutional modules (`snt.Conv{1,2,3}D`, `snt.Conv{1,2,3}DTranspose` and `snt.CausalConv1D`) to use a common parent, and improved test coverage.
- Disabled brittle unit tests which relied on fixed RNG seeds.
- `sonnet.nest` `*iterable` functions now point to their equivalents from TF.
- Small documentation changes.
- Add `custom_getter` option to `sonnet.Embed` (see the sketch below).
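A sketch of the new option on `sonnet.Embed`; the stop-gradient getter is an illustrative example, not part of the release:

```python
import sonnet as snt
import tensorflow as tf

def stop_gradient_getter(getter, *args, **kwargs):
  # Illustrative getter: block gradients through the embedding table.
  return tf.stop_gradient(getter(*args, **kwargs))

embed = snt.Embed(vocab_size=10000, embed_dim=128,
                  custom_getter=stop_gradient_getter)
ids = tf.placeholder(tf.int32, [None, 16])
embeddings = embed(ids)
```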
This version requires TensorFlow version 1.4.0.
- Switch parameterized tests to use Abseil.
- `BatchApply` passes through scalar non-Tensor inputs unmodified.
- More flexible mask argument to `Conv2D`.
- Added Sonnet `ModuleInfo` to the "sonnet" graph collection. This makes it possible to keep track of which modules generated which connected subgraphs. This information is serialized and available when loading a meta-graph-def, and can be used, for instance, to visualize the TensorFlow graph from a Sonnet perspective.
- `scale_gradient` now handles all float dtypes.
- Fixed a bug in `clip_gradient` that caused clip values to be shared.
- `ConvNet` can now use the NCHW data format.
- Cleaned up and improved example text for `snt.custom_getters.Context`.
- Separated `BatchNormLSTM` and `LSTM` into two separate modules.
- Clarify example in README.
- Added a `custom_getters` subpackage. This allows modules to be made non-trainable, or to completely block gradients (see the sketch after this list). See the documentation for `tf.get_variable` for more details.
- `Sequential.get_variables()` generates a warning to indicate that no variables will ever be returned.
- `ConvLSTM` now supports dilated convolutions.
- `utils.format_variables` allows logging Variables with non-static shape.
- `snt.trainable_initial_state` is now publicly exposed.
- Stop using a private property of `tf.Graph` in `util.py`.
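A hedged sketch of the subpackage in use, assuming `non_trainable` and `stop_gradient` getters as described in its documentation:

```python
import sonnet as snt

# Freeze a module: its variables are created with trainable=False.
frozen = snt.Linear(output_size=64,
                    custom_getter=snt.custom_getters.non_trainable)

# Alternatively, keep the variables trainable elsewhere but block
# gradients flowing through this module's use of them.
blocked = snt.Linear(output_size=64,
                     custom_getter=snt.custom_getters.stop_gradient)
```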
This version requires TensorFlow 1.3.0.
- Backwards-incompatible change: Resampler ops removed; they are now available in `tf.contrib.resampler`.
- Custom getters supported in RNNs and AlexNet.
- Replace last references to `contrib.RNNCell` with `snt.RNNCore`.
- Removed TensorFlow dependencies in Bazel config files, which makes it unnecessary to have TensorFlow as a submodule of Sonnet.
- First steps of AlexNet cleanup:
  - Add option to disable batch normalization on fully-connected layers.
  - Remove HALF mode.
  - Add AlexNetMini and AlexNetFull.
- Fixed bias compatibility between NHWC and NCHW data formats in `Conv2D`; `tf.nn.bias_add` is now used for bias addition in all convolutional layers.
- `snt.BatchApply` now also accepts scalar-valued inputs such as Boolean flags (see the sketch after this list).
- Clean up and clarify documentation on nest's dict ordering behavior.
- Change installation instructions to use pip.
- Add optional bias for the multiplier in `AddBias`.
- Push first version of wheel files to PyPI.
- Fix install script for Python 3.
- Better error message in AbstractModule.
- Fix out-of-date docs about `RNNCore`.
- Use `tf.layers.utils` instead of `tf.contrib.layers.utils`, allowing the use of contrib to be removed in the future, which will save on import time.
- Fixes to docstrings.
- Support "None" entries in BatchApply's inputs.
- Add `custom_getter` option to convolution modules and MLP.
- Better error messages for `BatchReshape`.
- `install.sh` now supports relative paths as well as absolute.
- Accept string values as variable scope in `snt.get_variables_in_scope` and `snt.get_normalized_variable_map`.
- Add IPython notebook that explains how Sonnet's `BatchNorm` module can be configured.
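A minimal sketch of `snt.BatchApply` with both tensor and scalar inputs (module choices are illustrative):

```python
import sonnet as snt
import tensorflow as tf

mlp = snt.nets.MLP(output_sizes=[64, 10])
# [time, batch, features]: BatchApply merges the leading dims, applies the
# module, then splits the result back out.
inputs = tf.placeholder(tf.float32, [20, 32, 8])
outputs = snt.BatchApply(mlp)(inputs)  # -> [20, 32, 10]

# Scalar non-Tensor inputs (e.g. Boolean flags) pass through unmodified.
bn = snt.BatchApply(snt.BatchNorm())
normalized = bn(inputs, is_training=True)
```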
- Added all constructor arguments to `ConvNet2D.transpose` and `ConvNet2DTranspose.transpose`.
- Backwards-incompatible change: `is_training` flags of `_build` functions no longer default to `True`. They must be specified explicitly at every connection point.
- Added causal 1D convolution.
- Fixes to scope name utilities.
- Added `flatten_dict_items` to `snt.nest` (see the sketch below).
- `Conv1DTranspose` modules can accept input with undefined batch sizes.
- Apply verification to `output_shape` in `ConvTranspose` modules.
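A minimal sketch of the new `snt.nest` helper, mirroring the behavior of TensorFlow's `flatten_dict_items`:

```python
import sonnet as snt

# Maps a dictionary with matching nested-tuple keys and values to a flat
# dictionary of leaf-key -> leaf-value pairs.
nested = {(4, 5, (6, 8)): ("a", "b", ("c", "d"))}
flat = snt.nest.flatten_dict_items(nested)
print(flat)  # {4: "a", 5: "b", 6: "c", 8: "d"}
```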
This version is only compatible with TensorFlow 1.2.0, not the current GitHub HEAD.
- Resampler op now tries to import from `tf.contrib` first and falls back to the Sonnet op. This is in preparation for the C++ ops to be moved into `tf.contrib`.
- `snt.RNNCore` no longer inherits from `tf.RNNCell`. All recurrent modules will continue to be supported by `tf.dynamic_rnn`, `tf.static_rnn`, etc.
- The ability to add a `custom_getter` to a module is now supported by `snt.AbstractModule`. This is currently only available in `snt.Linear`, with more to follow. See the documentation for `tf.get_variable` for how to use custom getters.
- Documentation restructured.
- Some functions and tests reorganised.
- Cell & Hidden state clipping added to
LSTM
. - Added Makefile for generating documentation with Sphinx.
- Batch Norm options for
LSTM
now deprecated to a separate classBatchNormLSTM
. A future version ofLSTM
will no longer contain the batch norm flags. @snt.experimental.reuse_vars
decorator promoted to@snt.reuse_variables
.BatchReshape
now takes apreserve_dims
parameter.DeepRNN
prints a warning if the heuristic is used to infer output size.- Deprecated properties removed from
AbstractModule
. - Pass inferred data type to bias and weight initializers.
AlexNet
now checks that dropout is disabled or set to 1.0 when testing..get_saver()
now groups partitioned variables by default.- Docstring, variable name and comment fixes.
- Breaking change: Calling `AbstractModule.__init__` with positional arguments is no longer supported. All calls to `__init__` should be changed to use kwargs. This change will allow future features to be added more easily.
- Sonnet modules now throw an error if pickled. Instead of serializing module instances, you should serialize the constructor you want to call plus the arguments you would pass it, and recreate the module instances in each run of the program (see the sketch after this list).
- Sonnet no longer allows the possibility that `self._graph` does not exist. This would only be the case when reloading pickled module instances, which is not supported.
- Fix tolerance on `initializers_test`.
- If no name is passed to the `AbstractModule` constructor, a snake_case version of the class name will be used.
- `_build()` now checks that `__init__` has been called first and throws an error otherwise.
- Residual and skip-connection RNN wrapper cores have been added.
- `get_normalized_variable_map()` now has a `group_sliced_variables` option that groups partitioned variables in its return value, in line with what `tf.Saver` expects to receive. This ensures that partitioned variables are treated as a unit when saving checkpoints / model snapshots with `tf.Saver`. The option is set to `False` by default, for backwards-compatibility reasons.
- `snt.BatchApply` now supports kwargs, nested dictionaries, and allows `None` to be returned.
- `snt.Linear.transpose` creates a new module which now uses the same partitioners as the parent module.
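A minimal sketch of the recommended serialization pattern from the pickling note above; `module_spec` and `build_module` are illustrative names:

```python
import sonnet as snt

# Pickling a module instance raises an error. Instead, serialize a
# description of how to build it, then rebuild it in each run.
module_spec = {"constructor": snt.Linear, "kwargs": {"output_size": 128}}

def build_module(spec):
  return spec["constructor"](**spec["kwargs"])

linear = build_module(module_spec)
```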