A PyTorch and Lightning based framework for research and ML pipeline automation.
Hyperparameter space → Genetic algorithms (single-objective/multi-objective) → Best hyperparameters in `config.yaml` → Training session
```python
from typing import Dict

from lightorch.htuning.optuna import htuning
from ... import NormalModule
from ... import FourierVAE

def objective(trial) -> Dict[str, float]:
    ... # define hyperparameters
    return hyperparameters

if __name__ == '__main__':
    htuning(
        model_class=FourierVAE,
        hparam_objective=objective,
        datamodule=NormalModule,
        valid_metrics=[f"Training/{name}" for name in [
            "Pixel",
            "Perceptual",
            "Style",
            "Total variance",
            "KL Divergence"]],
        directions=['minimize', 'minimize', 'minimize', 'minimize', 'minimize'],
        precision='medium',
        n_trials=150,
    )
```
Execution:
```bash
python3 -m htuning
```
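As an illustration, a concrete `objective` could sample the search space with Optuna's `trial.suggest_float`; the hyperparameter names below are hypothetical and should match your model's constructor:

```python
def objective(trial) -> dict:
    # sample each hyperparameter from its search range;
    # the names here are illustrative, not part of LighTorch's API
    return {
        "encoder_lr": trial.suggest_float("encoder_lr", 1e-4, 1e-1, log=True),
        "decoder_lr": trial.suggest_float("decoder_lr", 1e-4, 1e-1, log=True),
        "beta": trial.suggest_float("beta", 1e-6, 1e-3, log=True),
    }
```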
```yaml
trainer: # trainer arguments
  logger: true
  enable_checkpointing: true
  max_epochs: 250
  accelerator: cuda
  devices: 1
  precision: 32

model:
  class_path: utils.FourierVAE # model relative path
  dict_kwargs: # **hparams
    encoder_lr: 2e-2
    encoder_wd: 0
    decoder_lr: 1e-2
    decoder_wd: 0
    alpha:
      - 0.02
      - 0.003
      - 0.003
      - 0.01
    beta: 0.00001
    optimizer: adam

data: # Dataset arguments
  class_path: data.DataModule
  init_args:
    type_dataset: mnist
    batch_size: 12
    pin_memory: true
    num_workers: 8
```
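The `class_path` entries follow the Lightning CLI convention of a dotted import path. Roughly, the CLI resolves them as in this sketch (`instantiate` is a simplified stand-in, not a LighTorch function):

```python
import importlib

def instantiate(class_path: str, **kwargs):
    # split "package.module.ClassName" into module and class name,
    # import the module, and build the object with the YAML kwargs
    module_name, cls_name = class_path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(**kwargs)

# e.g. `class_path: data.DataModule` with its `init_args` becomes roughly
# instantiate("data.DataModule", type_dataset="mnist", batch_size=12, ...)
```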
```python
from lightorch.training.cli import trainer

if __name__ == '__main__':
    trainer()
```
Execution:
```bash
python3 -m training -c config.yaml
```
- Built-in `Module` class for:
    - Adversarial training.
    - Supervised and self-supervised training.
- Multi-objective and single-objective optimization and hyperparameter tuning with Optuna.
- Fourier convolution.
- Fourier deconvolution.
- Partial convolution (optimized implementation).
- Grouped-Query Attention, Multi-Query Attention, and Multi-Head Attention (interpretative usage, with a flash-attention option).
- Self-attention and cross-attention.
- Normalization methods.
- Positional encoding methods.
- Embedding methods.
- Useful criteria (loss functions).
- Useful utilities.
- Built-in default feed-forward networks.
- Adaptations for $\mathbb{C}$ (complex-valued) modules.
- Interpretative deep neural networks.
- Monte Carlo forward methods.
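The Monte Carlo forward methods above refer to aggregating repeated stochastic passes (for example, dropout kept active at inference) into a predictive mean and an uncertainty estimate. A framework-agnostic sketch, with `noisy_model` as a hypothetical stand-in for a network:

```python
import random
import statistics

def mc_forward(forward, n_samples: int = 100):
    # run the stochastic forward pass n_samples times and report the
    # sample mean and standard deviation as a simple uncertainty estimate
    outs = [forward() for _ in range(n_samples)]
    return statistics.mean(outs), statistics.stdev(outs)

def noisy_model() -> float:
    # stand-in for a network with dropout left on at inference time:
    # one unit dropped with p = 0.5, inverted-dropout scaling by 1/0.5
    keep = random.random() < 0.5
    return (2.0 * 3.0) * (keep / 0.5)
```

The spread of the sampled outputs (the returned standard deviation) grows with the model's stochasticity, which is what makes repeated forward passes usable as an uncertainty signal.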
```bibtex
@misc{lightorch,
    author = {Jorge Enciso},
    title = {LighTorch: Automated Deep Learning framework for researchers},
    howpublished = {\url{https://github.com/Jorgedavyd/LighTorch}},
    year = {2024}
}
```