- Add a `double_backward` function to simplify the training loop (#661)
- Fix setting of `param_group` for the `DPOptimizer` wrapper (issue 649) (#660)
- Fix the DDP optimizer for Fast Gradient Clipping: the step function incorrectly called `original_optimizer.original_optimizer` (#662)
- Replace `opt_einsum.contract` with `torch.einsum` (#663)
- Make the import of opt_einsum.contract (linear.py) explicit (#658)
Fast Gradient Clipping and Ghost Clipping (#656)
- Fix gradient shape error for DPMultiheadAttention (issue 650) (#651)
- Pass kwargs from make_private to _prepare_optimizer (#648)
- Fix BatchMemoryManager length (#641)
- Fix GPU-CPU device mismatch error in util filter_dilated_rows (#633)
- Fix Opacus's runtime error with an empty batch (issue 612) (#631)
- Fix DP MultiheadAttention (#598)
- Fix: make prv accountant robust to larger epsilons (#606)
- Fix the corner case when the optimizer has no trainable parameters (#619)
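Fast Gradient Clipping and Ghost Clipping both implement per-sample gradient clipping: each sample's gradient is rescaled to an L2 norm of at most C before aggregation. A minimal pure-Python sketch of that computation (a naive version that materializes per-sample gradients, which the fast/ghost variants specifically avoid; the function name and values are illustrative, not Opacus API):

```python
import math

def clip_and_aggregate(per_sample_grads, max_norm):
    """Rescale each sample's gradient so its L2 norm is at most
    max_norm, then sum the clipped gradients. Fast/Ghost Clipping
    compute the same result without materializing per-sample
    gradients; this naive sketch materializes them for clarity."""
    total = [0.0] * len(per_sample_grads[0])
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, max_norm / (norm + 1e-12))  # clip factor
        for i, x in enumerate(g):
            total[i] += x * scale
    return total

# Per-sample gradients with norms 5.0 and 0.5; only the first is clipped.
grads = [[3.0, 4.0], [0.3, 0.4]]
print(clip_and_aggregate(grads, 1.0))  # ≈ [0.9, 1.2]
```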
Highlight: Upgraded to PyTorch 1.13+ as required dependency
- Added clipping schedulers (#556)
- Util to check per sample gradients (#532)
- Align DataLoader interface with vanilla PyTorch (#543)
- Fix GDP accountant epsilon retrieval changing internal state (#541)
- Add option to specify number of steps in UniformSampler (#550)
- Fix privacy computation script (#565)
- Implement the `PRVAccountant` based on the paper "Numerical Composition of Differential Privacy" (#493)
- Support `nn.EmbeddingBag` (#519)
- Fix benchmarks (#503, #507, #508)
- Align `make_private_with_epsilon` with `make_private` (#509, #526)
- Test fixes (#513, #515, #527, #533)
- Summed discriminator losses to perform one backprop step (#474)
- Fixed issue with missing argument in MNIST example (#520)
- Functorch gradients: investigation and fix (#510)
- Support empty batches (#530)
We're glad to present Opacus v1.2, which contains some major updates to per sample gradient computation mechanisms and includes all the good stuff from the recent PyTorch releases.
- Functorch - per sample gradients for all
- ExpandedWeights - yet another way to compute per sample gradients
- See Release notes and GradSampleModule README for detailed feature explanation
- Fix `utils.unfold2d` with non-symmetric pad/dilation/kernel_size/stride (#443)
- Add support for "same" and "valid" padding for hooks-based grad sampler for convolution layers
- Improve model validation to support frozen layers and catch copied parameters (#489)
- Remove annoying logging from `set_to_none` (#471)
- Improved documentation (#480, #478, #482, #485, #486, #487, #488)
- Integration test improvements (#407, #479, #481, #473)
- Support layers with a mix of frozen and learnable parameters (#437)
- Throw an error when the optimizer's params are not the same as the module's in make_private (#439)
- Fix unfold2d and add test (#443)
- Fix typos in DDP tutorial (#438)
- Replace torch einsum with opt_einsum (#440)
- Support tied parameters (#417)
- Fix call-site sensitivity of `zero_grad()` (#422, #423)
- Improve microbenchmark argument parsing and tests (#425)
- Fix opacus nn.functional import (#426)
- Add microbenchmarks (#412, #416)
- Add more badges to readme (#424)
- Fix accountant when using number of steps instead of epochs
- Add params check when converting BatchNorm to GroupNorm (#390)
- Fix typo in gdp accountant mechanism name (#386)
- Fix linter errors (#392)
- Add friendly and detailed message for unsupported layers (#401)
- Run linter on nightly workflow (#399)
- Add warning for Gaussian DP accounting (#400)
- Clone replacement modules on the same device as original (#356)
- Implement 3D dilation (#408)
- fix(batch_memory_manager): Ensures split_idxs use native python types (#410)
- Migrate nightly CircleCI flows to scheduled pipelines (#402)
- Migrate from ubuntu 16.04 to 20.04 on CircleCI (#403)
- Add support for GDP accounting in get_noise_multiplier (#303)
- Conservative search for target epsilon in get_noise_multiplier (#348)
- Warn and ignore "drop_last" when set in DPDataLoader (#357)
- Fix per-layer clipping in distributed (#347)
- Update code of conduct and file headers
- Add "Support Ukraine" banner to opacus website homepage
- Lint fixes
- DPOptimizer
  - Passes through `.defaults` field to match pytorch Optimizer (#329)
  - Better exception message in `.step()` when `p.grad_sample=None` (#331)
  - Correct `closure` call after applying DP noise (#330)
- Proper gradient scaling in DDP mode
- Corrections of typos and errors in tutorials
- Opacus can be installed with conda: added recipe in conda-forge (#326)
- Formatting change in accordance with black-22.1.0
- Hidden states of RNN are passed to device (#314)
- Validate and fix trainable modules only (#316)
- Minor corrections and typo fixes in links, documentation, and tutorials.
- This release packs in a lot of new features and bug fixes and, most importantly, brings forth new APIs that are simpler, more modular, and easily extensible.
- We have bumped up the major version number from 0 to 1 and have introduced breaking changes. However, the major version bump also indicates a step-function upgrade in the capabilities.
- See [Release notes](https://github.com/pytorch/opacus/releases/tag/v1.0.0) and Migration Guide for more details about the changes.
- PR #273 contains the pointers to all the commits and PRs that went into this release.
- DDP support for faster distributed training (#196)
- Support of GRU and RNN; refactored LSTM implementation (#222)
- PyTorch Lightning Demo (#244)
- Improve nn.Linear grad sampler memory consumption (#192)
- Update Opacus to stop using deprecated torch.set_deterministic (#197)
- Fix optimizer.step after engine.detach()
- Test fixes
- Better validation error reporting (#199)
- grad sampler type checking (#241)
- Major refactoring - per-sample gradient computation is separated into its own module - GradSampleModule (#175)
- Improved RDP to (eps, delta)-DP conversion (#162)
- Multi-GPU support (#166)
- Handle empty batches in Poisson sampling (#164)
- Fixed memory leak from no_grad execution (#180)
- PackedSequence support for DPLSTM (#150) (thanks @touqir14 !)
- Pytest moved to dev installation (#144)
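The "Improved RDP to (eps, delta)-DP conversion" entry refers to converting Rényi DP guarantees into standard (ε, δ)-DP. As background, a sketch of the classic conversion (Mironov 2017), which takes ε = min over orders α of ε_RDP(α) + log(1/δ)/(α−1); Opacus's improved conversion tightens this bound further, and the RDP values below are hypothetical:

```python
import math

def rdp_to_dp(rdp_eps_by_order, delta):
    """Classic RDP -> (eps, delta)-DP conversion: for each Renyi order
    alpha with RDP guarantee eps(alpha), the mechanism satisfies
    (eps(alpha) + log(1/delta) / (alpha - 1), delta)-DP; take the
    minimum over the tracked orders."""
    return min(
        eps + math.log(1.0 / delta) / (alpha - 1.0)
        for alpha, eps in rdp_eps_by_order.items()
    )

# Hypothetical RDP guarantees tracked at a few orders
rdp = {2.0: 0.5, 8.0: 1.1, 32.0: 2.0}
print(rdp_to_dp(rdp, delta=1e-5))  # minimum is attained at alpha = 32
```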
This version introduces a mildly breaking change: the privacy engine now supports sampling with variable batch size, just like in the Abadi et al. paper. To accommodate this feature, we have made `batch_size` a kwarg (no longer positional). We are also enforcing that all kwargs must be specified by name, not positionally. If you had code that passed kwargs positionally, you will get an error (which is very simple to fix).
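The keyword-only enforcement described above is the standard Python pattern of a bare `*` in the signature. A hedged sketch (the function name and parameters are illustrative, not the actual Privacy Engine signature):

```python
def attach(optimizer, *, batch_size, sample_size, noise_multiplier):
    """Everything after the bare * is keyword-only: callers must
    pass these arguments by name, mirroring the enforcement the
    Privacy Engine adopted."""
    return {"batch_size": batch_size,
            "sample_size": sample_size,
            "noise_multiplier": noise_multiplier}

attach("opt", batch_size=64, sample_size=50_000, noise_multiplier=1.1)  # ok
try:
    attach("opt", 64, 50_000, 1.1)  # positional kwargs now raise
except TypeError as e:
    print("rejected:", e)
```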
- Enforce kwargs to Privacy Engine (#136).
- Fix batch construction and privacy engine (#128). (thanks @ConstanceBeguier!)
- Compute required sigma to reach (epsilon, delta) budget (#126)
- Friendly user message for unused parameters (#118).
- Print helpful message when models are not in train mode (#113)
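The "Compute required sigma to reach (epsilon, delta) budget" entry inverts the accountant: given a target ε, find the noise multiplier σ that achieves it. A bisection sketch under the assumption that ε decreases monotonically as σ grows (`get_epsilon` here is a toy stand-in, not a real privacy accountant):

```python
def find_sigma(get_epsilon, target_eps, lo=0.1, hi=100.0, tol=1e-4):
    """Bisection over the noise multiplier: since epsilon shrinks as
    sigma grows, search for the smallest sigma whose epsilon is at
    most target_eps. get_epsilon stands in for a real accountant."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if get_epsilon(mid) > target_eps:
            lo = mid  # too little noise: epsilon still too large
        else:
            hi = mid  # enough noise: tighten from above
    return hi

# Toy accountant where epsilon is inversely proportional to sigma
sigma = find_sigma(lambda s: 10.0 / s, target_eps=2.0)
print(round(sigma, 3))  # ~5.0
```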
- Now the Opacus package has a `__version__` attribute
- Fix immer security issue, fix website errors
- Updated setup.py version requirements to support 3.6.8 for Windows (#108) (thanks @madhavajay!)
- Rewrote the grad_sample tests to use Hypothesis (#125). (thanks @touqir14!)
- Extend DPLSTM to support multilayer, dropout (#101)
- Modifications to Char LSTM name classification example
- Introduce issue templates for GitHub (#102)
- Added support for Conv3D layers
- Linter fixes for Conv3D (#105)
- Make TorchCSPRNG an optional dependency (#106)
- Removed unnecessary calls to zero_grad from examples and tutorials (#96)
- Fix PyPI deployment (#91).
- Refactor grad sample tests (#90).
- Avoid storing activations in certain scenarios (#87)
- Reimplemented the Embedding layer, making it 9x faster with lower memory footprint (#73).
- Reimplemented the DPLSTM layer, making it 2x faster with lower memory footprint.
- Extended our Conv support to grouped convolutions (#78).
- Small fixes to clipping logic (#45).
- Changed docstring style from numpy -> Google.
- Throw an error if sample rate > 1 in privacy engine.
- Migrated our IMDB example from TorchText -> HuggingFace (#85).
- Added PRNG shuffling to our examples.
- Compatibility with Python 3.6 (Minimum required version changed from 3.7 to 3.6.9).
- Allow DP-LSTM to have null init.
- Initial commit.