- Bayesian `CustomRegressor`
- Conformalized `CustomRegressor` (`splitconformal` and `localconformal` for now) - See this example, this example, and this notebook
- `self.n_classes_ = len(np.unique(y))` # for compatibility with sklearn
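The snippet above follows the scikit-learn convention of recording class metadata during `fit`. A self-contained illustration of that convention (`TinyClassifier` is a made-up example class, not part of nnetsauce):

```python
import numpy as np

class TinyClassifier:
    """Minimal sklearn-style estimator that records class metadata in fit."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)           # sklearn's `classes_` convention
        self.n_classes_ = len(self.classes_)   # as in the snippet above
        return self                            # fit returns self, per sklearn

clf = TinyClassifier().fit(np.zeros((4, 2)), np.array([0, 1, 1, 2]))
```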
- preprocessing for all `LazyDeep*`
- Attribute `estimators` (a list of `Estimator`'s as strings) for `LazyClassifier`, `LazyRegressor`, `LazyDeepClassifier`, `LazyDeepRegressor`, `LazyMTS`, and `LazyDeepMTS`
- New documentation for the package, using `pdoc` (not `pdoc3`)
- Remove external regressors `xreg` at inference time for `MTS` and `DeepMTS`
- New class `Downloader`: querying the R universe API for datasets (see https://thierrymoudiki.github.io/blog/2023/12/25/python/r/misc/mlsauce/runiverse-api2 for a similar example in `mlsauce`)
- Add custom metric to `Lazy*`
- Rename Deep regressors and classifiers to `Deep*` in `Lazy*`
- Add attribute `sort_by` to `Lazy*` -- sort the data frame output by a given metric
- Add attribute `classes_` to classifiers (ensure consistency with sklearn)
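To illustrate what a `sort_by`-style metric sort does to a leaderboard data frame, here is a small pandas sketch; the model names and metric columns are invented for the example and do not reproduce nnetsauce's exact output:

```python
import pandas as pd

# hypothetical leaderboard, similar in spirit to a Lazy* results data frame
results = pd.DataFrame({
    "Model": ["RidgeCV", "RandomForestRegressor", "LinearRegression"],
    "RMSE": [0.31, 0.28, 0.35],
    "R-Squared": [0.90, 0.92, 0.88],
})

# sort_by="RMSE": ascending order, since lower error is better
leaderboard = results.sort_values(by="RMSE").reset_index(drop=True)
```

For score metrics such as R-Squared the sort direction would be descending instead.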
- Subsample response by using the number of rows, not only a percentage (see https://thierrymoudiki.github.io/blog/2024/01/22/python/nnetsauce-subsampling)
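A sketch of the fraction-or-row-count behavior described above, assuming a single `row_sample` argument is interpreted as a fraction when at most 1 and as an absolute number of rows otherwise (`subsample_indices` is a hypothetical helper, not nnetsauce's code):

```python
import numpy as np

def subsample_indices(n, row_sample, seed=123):
    """Return indices for a random subsample of n rows.

    row_sample <= 1: treated as a fraction of the rows
    row_sample > 1:  treated as an absolute number of rows
    """
    rng = np.random.default_rng(seed)
    n_rows = int(row_sample) if row_sample > 1 else int(np.ceil(n * row_sample))
    return rng.choice(n, size=n_rows, replace=False)

idx_frac = subsample_indices(100, 0.2)   # 20% of the rows -> 20 indices
idx_rows = subsample_indices(100, 25)    # exactly 25 rows
```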
- Improve consistency with sklearn's v1.2 for `OneHotEncoder`
- add robust scaler
- relatively faster scaling in preprocessing
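Robust scaling usually means centering by the median and scaling by the interquartile range, so outliers barely affect the scale. A generic numpy sketch of that idea (not nnetsauce's implementation):

```python
import numpy as np

def robust_scale(X):
    """Center columns by their median and scale by their interquartile range."""
    median = np.median(X, axis=0)
    iqr = np.percentile(X, 75, axis=0) - np.percentile(X, 25, axis=0)
    iqr = np.where(iqr == 0, 1.0, iqr)   # guard against zero spread
    return (X - median) / iqr

# the outlier (100.0) inflates the IQR far less than it would a std. dev.
X = np.array([[1.0], [2.0], [3.0], [100.0]])
X_scaled = robust_scale(X)
```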
- Regression-based classifiers (see https://www.researchgate.net/publication/377227280_Regression-based_machine_learning_classifiers)
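The gist of classification through regression, as discussed in the linked reference, is to one-hot encode the labels, fit one regression per class, and classify by argmax over the fitted scores. A least-squares sketch of that scheme in plain numpy (illustrative only, not nnetsauce's classifiers):

```python
import numpy as np

def fit_regression_classifier(X, y):
    """One-hot encode labels and fit one least-squares regression per class."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
    Xb = np.hstack([np.ones((len(X), 1)), X])            # add an intercept
    coefs, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return classes, coefs

def predict_regression_classifier(X, classes, coefs):
    """Classify by the class whose regression gives the largest score."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return classes[np.argmax(Xb @ coefs, axis=1)]

X = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])
classes, coefs = fit_regression_classifier(X, y)
preds = predict_regression_classifier(X, classes, coefs)
```

Any regressor can be plugged into this scheme in place of least squares, which is what makes the construction useful for turning custom regressors into classifiers.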
- `DeepMTS` (multivariate time series forecasting with deep quasi-random layers): see https://thierrymoudiki.github.io/blog/2024/01/15/python/quasirandomizednn/forecasting/DeepMTS
- AutoML for `MTS` (multivariate time series forecasting): see https://thierrymoudiki.github.io/blog/2023/10/29/python/quasirandomizednn/MTS-LazyPredict
- AutoML for `DeepMTS` (multivariate time series forecasting): see https://github.com/Techtonique/nnetsauce/blob/master/nnetsauce/demo/thierrymoudiki_20240106_LazyDeepMTS.ipynb
- Spaghetti plots for `MTS` and `DeepMTS` (multivariate time series forecasting): see https://thierrymoudiki.github.io/blog/2024/01/15/python/quasirandomizednn/forecasting/DeepMTS
- Subsample continuous and discrete responses
- actually implement deep `Estimator`s in `/deep` (in addition to `/lazypredict`)
- include new multi-output regression-based classifiers (see https://thierrymoudiki.github.io/blog/2021/09/26/python/quasirandomizednn/classification-using-regression for more details)
- use proper names for `Estimator`s in `/lazypredict` and `/deep`
- expose `SubSampler` (stratified subsampling) to the external API
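Stratified subsampling draws within each class separately so the subsample keeps roughly the original class proportions. A numpy sketch of the idea only; `stratified_subsample` is a hypothetical helper, not `SubSampler` itself:

```python
import numpy as np

def stratified_subsample(y, row_sample=0.5, seed=314):
    """Sample a fraction `row_sample` of rows within each class of y."""
    rng = np.random.default_rng(seed)
    keep = []
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        n_keep = max(1, int(np.ceil(len(idx) * row_sample)))
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    # sorted, concatenated indices from all classes
    return np.sort(np.concatenate(keep))

y = np.array([0] * 80 + [1] * 20)          # imbalanced 80/20 labels
idx = stratified_subsample(y, row_sample=0.5)
```

Plain random subsampling could easily starve the minority class here; the per-class draw keeps the 80/20 ratio intact.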
- lazy predict for classification and regression (see https://thierrymoudiki.github.io/blog/2023/10/22/python/quasirandomizednn/nnetsauce-lazy-predict-preview)
- lazy predict for multivariate time series (see https://thierrymoudiki.github.io/blog/2023/10/29/python/quasirandomizednn/MTS-LazyPredict)
- lazy predict for deep classifiers and regressors (see this example for classification and this example for regression)
- update and align as much as possible with R version
- colored graphics for class MTS
- Fix error in nodes' simulation (base.py)
- Use residuals and KDE for predictive simulations
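One common way to combine residuals and a kernel density estimate (KDE) for predictive simulation is to add KDE draws from the in-sample residuals to the point forecast. A numpy-only sketch of that idea, using Silverman's bandwidth rule and the standard trick that a Gaussian-KDE draw is a resampled residual plus Gaussian jitter (illustrative, not nnetsauce's code):

```python
import numpy as np

def simulate_predictions(point_forecast, residuals, n_sims=1000, seed=0):
    """Simulate from point_forecast + Gaussian KDE fitted on residuals."""
    rng = np.random.default_rng(seed)
    n = len(residuals)
    h = 1.06 * np.std(residuals) * n ** (-1 / 5)   # Silverman's rule of thumb
    # a Gaussian-KDE sample = resampled residual + N(0, h^2) jitter
    draws = rng.choice(residuals, size=n_sims) + rng.normal(0.0, h, size=n_sims)
    return point_forecast + draws

resid = np.random.default_rng(1).normal(0.0, 0.5, size=200)  # mock residuals
sims = simulate_predictions(10.0, resid, n_sims=5000)
```

Quantiles of `sims` then yield simulation-based prediction intervals around the point forecast.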
- `plot` method for `MTS` objects
- Begin residuals simulation
- Avoid division by zero in scaling
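The standard guard for the division-by-zero issue above is to replace (near-)zero standard deviations by 1 before scaling, so constant columns pass through unchanged. A minimal sketch, assuming plain column-wise standardization (`safe_standardize` is a made-up name):

```python
import numpy as np

def safe_standardize(X, eps=1e-10):
    """Standardize columns; constant columns get scale 1 instead of 0."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std = np.where(std < eps, 1.0, std)   # avoid dividing by ~0
    return (X - mean) / std

X = np.array([[1.0, 5.0], [2.0, 5.0], [3.0, 5.0]])  # second column constant
X_scaled = safe_standardize(X)
```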
- fewer dependencies in setup
- Implement RandomBagRegressor
- Use of a DataFrame in MTS
- rename attributes with underscore
- add more examples to documentation
- Fix numbers' simulations
- Remove memoize from Simulator
- loosen the range of Python packages versions
- Add Poisson and Laplace regressions to GLMRegressor
- Remove smoothing weights from MTS
- Use C++ for simulation
- Fix R Engine problem
- RandomBag classifier cythonized
- Documentation with MkDocs
- Cython-ready
- contains refactored code for the `Base` class and for many other utilities
- makes use of `randtoolbox` for a faster, more scalable generation of quasi-random numbers
- contains a (work in progress) implementation of most algorithms on GPUs, using JAX. Most of nnetsauce's GPU-related changes currently target potentially time-consuming operations such as matrix multiplications and matrix inversions
- (Work in progress) documentation in `/docs`
- `MultitaskClassifier`
- Rename `Mtask` to `Multitask`
- Rename `Ridge2ClassifierMtask` to `Ridge2MultitaskClassifier`
- Use `return_std` only in `predict` for `MTS` objects
- Fix for potential error "Sample weights must be 1D array or scalar"
- One-hot encoding not cached (caused errors on multitask ridge2 classifier)
- Rename ridge to ridge2 (2 shrinkage params compared to ridge)
- Implement ridge2 (regressor and classifier)
- Upper bound on Adaboost error
- Test Time series split
- Add AdaBoost classifier
- Add RandomBag classifier (bagging)
- Add multinomial logit Ridge classifier
- Remove dependency on package `sobol_seq` (not used)
- Initial version