torch-adf: Assumed Density Filtering (ADF) Probabilistic Neural Networks

Release v22.1.0
torch-adf provides implementations of probabilistic PyTorch neural network layers based on assumed density filtering. Assumed density filtering (ADF) is a general concept from Bayesian inference; for the feed-forward neural networks considered here, it is a way to approximately propagate a random distribution through the network.
The layers in this package have the same names and arguments as their corresponding PyTorch versions. We use Gaussian distributions for our ADF approximations, which are described by their means and (co-)variances. So unlike the standard PyTorch layers, each torch-adf layer takes two inputs and produces two outputs (one for the means and one for the (co-)variances).
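To make the two-input, two-output idea concrete, here is a plain-Python sketch (an illustration of the underlying math, not torch-adf's actual implementation) of propagating a diagonal Gaussian through a fully-connected layer: for y = Wx + b with independent inputs, the means transform linearly and the variances transform through the element-wise squared weights.

```python
# Illustrative sketch of ADF propagation through a linear (fully-connected)
# layer, using plain Python lists. NOT torch-adf's actual code.
# For y = W x + b with independent Gaussian inputs x_j ~ N(mu_j, var_j):
#   E[y_i]   = sum_j W[i][j] * mu_j + b_i
#   Var[y_i] = sum_j W[i][j]**2 * var_j

def adf_linear(means, variances, weights, bias):
    """Propagate a diagonal Gaussian (means, variances) through y = Wx + b."""
    out_means = [
        sum(w * m for w, m in zip(row, means)) + b
        for row, b in zip(weights, bias)
    ]
    out_vars = [
        sum(w * w * v for w, v in zip(row, variances))
        for row in weights
    ]
    return out_means, out_vars

# Example: two inputs, one output.
mu, var = adf_linear(
    means=[1.0, 2.0], variances=[0.1, 0.2],
    weights=[[0.5, -1.0]], bias=[0.25],
)
# mu  = [0.5*1.0 - 1.0*2.0 + 0.25]  = [-1.25]
# var = [0.25*0.1 + 1.0*0.2]        = [0.225]
```

Nonlinear layers (e.g. ReLU) require moment-matching formulas for the transformed Gaussian instead, which is where the "assumed density" approximation comes in; the package handles that for each supported layer type.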
Getting Started
torch-adf is a Python package hosted on PyPI. It is intended to be used as part of the PyTorch framework. The recommended installation method is pip-installing into a virtual environment.
$ pip install torch-adf
The next three steps should bring you up and running in no time:

The Overview section will show you a simple example of torch-adf in action and introduce you to its core ideas.
The Examples section will give you a comprehensive tour of torch-adf's features. After reading, you will know about advanced features and how to use them.
The API Reference is a quick way to look up details of all features and their options.
If at any point you get confused by some terminology, please check out our Glossary.
Project Information
torch-adf is released under the MIT license, its documentation lives at Read the Docs, the code on GitHub, and the latest release can be found on PyPI.
It’s tested on Python 3.4+.
If you'd like to contribute to torch-adf, you're most welcome. We have written a short guide to help get you started!
Further Reading
Additional information on the algorithmic aspects of torch-adf can be found in the following works:
Jochen Gast, Stefan Roth, “Lightweight Probabilistic Deep Networks”, 2018
Jan Macdonald, Stephan Wäldchen, Sascha Hauch, Gitta Kutyniok, “A Rate-Distortion Framework for Explaining Neural Network Decisions”, 2019