Mirror of https://github.com/AntoineHX/smart_augmentation.git, synced 2025-05-04 04:00:46 +02:00
Gradient Descent: The Ultimate Optimizer
⚠️ WARNING: THIS IS NOT MY WORK ⚠️
This repository contains the paper Gradient Descent: The Ultimate Optimizer together with its code.
I couldn't find the code (which appears in the appendix at the end of the paper) anywhere on the web, so what I present here is that code along with instructions for setting it up.
Getting the code into a runnable state required some fixes on my part, so it may differ slightly from the version presented in the paper.
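For context, the paper's central idea is to tune the optimizer's own hyperparameters, such as the learning rate, by gradient descent as well: since each weight update depends on the learning rate, the chain rule through the previous update gives a "hypergradient" of the loss with respect to that learning rate. Below is a minimal sketch of that mechanism on a toy quadratic; this is my own illustration, not code from the paper or this repository, and the function and constants are made up for the example:

```python
# Hypergradient descent sketch: the learning rate `alpha` is itself
# updated by gradient descent. For the update w_t = w_{t-1} - alpha * g_{t-1},
# the chain rule gives dL(w_t)/d(alpha) = g_t * (-g_{t-1}).
# Toy objective (hypothetical example): L(w) = (w - 3)^2.

def grad(w):
    return 2.0 * (w - 3.0)  # gradient of L(w) = (w - 3)^2

w = 0.0          # weight
alpha = 0.01     # learning rate, itself to be learned
kappa = 0.001    # "hyper" learning rate for alpha

g_prev = grad(w)
w = w - alpha * g_prev          # first ordinary gradient step
for _ in range(100):
    g = grad(w)
    # hypergradient step: adjust alpha using dL/d(alpha) = g * (-g_prev)
    alpha = alpha - kappa * (g * -g_prev)
    w = w - alpha * g           # weight step with the updated alpha
    g_prev = g
```

On this quadratic the learning rate grows while consecutive gradients agree in sign and shrinks when they disagree, so `w` converges to the minimum at 3 even though the initial `alpha` was far too small.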
Set up
git clone https://github.com/Rainymood/Gradient-Descent-The-Ultimate-Optimizer
cd Gradient-Descent-The-Ultimate-Optimizer
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
python main.py
When you are done, you can exit the virtualenv with
deactivate