Mirror of https://github.com/AntoineHX/smart_augmentation.git
The objective of 'Smart_aug' is to make data augmentation differentiable, so that its parameters can be learned through 'higher' objects such as the 'Data_aug' classes. The meta-learning of the data augmentation parameters is performed jointly with the training of the model, which minimizes the overhead compared to other data augmentation learning techniques.
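The snippet below is a minimal sketch of this joint scheme, not the repository's actual 'Data_aug' API: a toy augmentation module with a learnable magnitude is updated from a validation loss back-propagated through a few unrolled training steps via the 'higher' library. The class name 'AugModule', the noise-based transform, the network, and all hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import higher

class AugModule(nn.Module):
    """Toy differentiable augmentation: additive noise with a learnable scale (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.log_scale = nn.Parameter(torch.tensor(-2.0))  # learnable augmentation magnitude

    def forward(self, x):
        return x + torch.exp(self.log_scale) * torch.randn_like(x)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
aug = AugModule()

inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)  # trains the model
meta_opt = torch.optim.Adam(aug.parameters(), lr=1e-2)   # trains the augmentation parameters

# Random placeholder batches standing in for training / validation data.
x_train, y_train = torch.randn(64, 10), torch.randint(0, 2, (64,))
x_val, y_val = torch.randn(64, 10), torch.randint(0, 2, (64,))

for step in range(20):
    meta_opt.zero_grad()
    # 'higher' provides a functional copy of the model whose updates stay differentiable.
    with higher.innerloop_ctx(model, inner_opt, copy_initial_weights=True) as (fmodel, diffopt):
        for _ in range(3):  # a few unrolled training steps on augmented data
            train_loss = F.cross_entropy(fmodel(aug(x_train)), y_train)
            diffopt.step(train_loss)
        # The validation loss, back-propagated through the unrolled steps,
        # provides the gradient for the augmentation parameter.
        val_loss = F.cross_entropy(fmodel(x_val), y_val)
        val_loss.backward()
        # Keep the adapted weights so model and augmentation are trained jointly.
        with torch.no_grad():
            for p, fp in zip(model.parameters(), fmodel.parameters()):
                p.copy_(fp)
    meta_opt.step()
```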
Repository contents:
- FAR-HO
- Gradient-Descent-The-Ultimate-Optimizer
- higher
- PBA
- salvador
- UDA
- .gitignore
- README.md
Requirements and Installation
- Python version >= 3.5
- PyTorch version >= 1.3
- Higher version >=
Example Usage
See 'test_dataug.py' for a working example.
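As a complement, the sketch below illustrates the general wrapping pattern: an augmentation module is composed with a task model so that the augmented forward pass stays differentiable end to end and the augmentation parameters receive gradients. The 'AugmentedModel' wrapper and its names are hypothetical and only illustrate the idea; refer to 'test_dataug.py' and the 'Data_aug' classes for the actual interface.

```python
import torch
import torch.nn as nn

class AugmentedModel(nn.Module):
    """Hypothetical wrapper applying a differentiable augmentation before the model.

    Mimics the general idea of the 'Data_aug' classes, not their real API.
    """
    def __init__(self, aug: nn.Module, model: nn.Module):
        super().__init__()
        self.aug = aug      # augmentation with learnable parameters
        self.model = model  # task model

    def forward(self, x, augment=True):
        # During training the augmentation is part of the graph, so its
        # parameters receive gradients; at evaluation it can be bypassed.
        return self.model(self.aug(x) if augment else x)

# Placeholder augmentation; in practice use a module with learnable parameters,
# such as the AugModule from the sketch above.
net = AugmentedModel(nn.Identity(), nn.Linear(10, 2))
x = torch.randn(4, 10)
print(net(x, augment=False).shape)  # torch.Size([4, 2])
```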