torchoutil

Collection of functions and modules to help development in PyTorch.


Keywords
pytorch, deep-learning, utilities
License
Other
Install
pip install torchoutil==0.2.0

Documentation

torchoutil


Collection of functions and modules to help development in PyTorch.

Installation

pip install torchoutil

The main requirement is PyTorch.

To check if the package is installed and show the package version, you can use the following command in your terminal:

torchoutil-info

This library works on all Python versions >= 3.8, all PyTorch versions >= 1.10, and on Linux, Mac and Windows systems.
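
In addition to the torchoutil-info command above, you can check versions from Python. The snippet below is a minimal sketch: it assumes torchoutil exposes a standard __version__ attribute, which is not documented on this page.

import torch
import torchoutil

# __version__ is assumed to exist here (most packages expose it)
print(f"torchoutil version: {torchoutil.__version__}")
print(f"torch version: {torch.__version__}")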

Examples

torchoutil functions and modules can be used like torch ones. The default acronym for torchoutil is to.
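
For instance, the torch functions and dtypes used throughout this README (as_tensor, rand, randperm, float, ...) are reachable directly from the to namespace. The short example below only reuses calls that appear in the sections that follow:

import torchoutil as to

# torch-style calls through the torchoutil namespace
x = to.rand(2, 3)
y = to.as_tensor([[1, 2, 3], [4, 5, 6]], dtype=to.float)
print(x.shape)   # torch.Size([2, 3])
print(y.dtype)   # torch.float32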

Label conversions

Supports multiclass label conversions between probabilities, class indices, class names and one-hot encoding.

import torchoutil as to

probs = to.as_tensor([[0.9, 0.1], [0.4, 0.6]])
names = to.probs_to_name(probs, idx_to_name={0: "Cat", 1: "Dog"})
# ["Cat", "Dog"]

This package also supports multilabel conversions between probabilities, class multi-indices, class multi-names and multihot encoding.

import torchoutil as to

multihot = to.as_tensor([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
indices = to.multihot_to_indices(multihot)
# [[0], [1, 2], []]
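
The reverse conversion works the same way. The sketch below assumes an indices_to_multihot counterpart that takes the number of classes; the exact name and signature are assumptions here, so check the API reference before relying on them.

import torchoutil as to

indices = [[0], [1, 2], []]
# Assumed helper and signature: indices_to_multihot(indices, num_classes=...)
multihot = to.indices_to_multihot(indices, num_classes=3)
# expected to recover the multihot tensor from the example above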

Typing

import torchoutil as to

x1 = to.as_tensor([1, 2])
print(isinstance(x1, to.Tensor2D))  # False
x2 = to.as_tensor([[1, 2], [3, 4]])
print(isinstance(x2, to.Tensor2D))  # True

import torchoutil as to

x1 = to.as_tensor([1, 2], dtype=to.int)
print(isinstance(x1, to.SignedIntegerTensor))  # True

x2 = to.as_tensor([1, 2], dtype=to.long)
print(isinstance(x2, to.SignedIntegerTensor))  # True

x3 = to.as_tensor([1, 2], dtype=to.float)
print(isinstance(x3, to.SignedIntegerTensor))  # False
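
These typing classes are plain Python classes, so they can also drive simple runtime checks in your own code. The helper below is only an illustration built from the two classes shown above, not part of the library:

import torchoutil as to

def describe(x) -> str:
    """Illustrative dispatch based on torchoutil typing classes."""
    if isinstance(x, to.Tensor2D):
        return "2D tensor"
    if isinstance(x, to.SignedIntegerTensor):
        return "signed integer tensor"
    return "something else"

print(describe(to.as_tensor([[1, 2], [3, 4]])))       # "2D tensor"
print(describe(to.as_tensor([1, 2], dtype=to.long)))  # "signed integer tensor"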

Padding

import torchoutil as to

x1 = to.rand(10, 3, 1)
x2 = to.pad_dim(x1, target_length=5, dim=1, pad_value=-1)
# x2 has shape (10, 5, 1)

import torchoutil as to

tensors = [to.rand(10, 2), to.rand(5, 3), to.rand(0, 5)]
padded = to.pad_and_stack_rec(tensors, pad_value=0)
# padded has shape (3, 10, 5)

Masking

import torchoutil as to

x = to.as_tensor([3, 1, 2])
mask = to.lengths_to_non_pad_mask(x, max_len=4)
# Row i contains x[i] True values marking the non-padded positions
# tensor([[True, True, True, False],
#         [True, False, False, False],
#         [True, True, False, False]])

import torchoutil as to

x = to.as_tensor([1, 2, 3, 4])
mask = to.as_tensor([True, True, False, False])
result = to.masked_mean(x, mask)
# result contains the mean of the values marked as True: 1.5
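
The two masking helpers combine naturally when averaging over a padded batch. The sketch below assumes masked_mean accepts a 2D tensor and mask the same way as the 1D example above:

import torchoutil as to

# Padded batch of 3 sequences with lengths 3, 1 and 2
batch = to.as_tensor([[1., 2., 3., 0.], [4., 0., 0., 0.], [5., 6., 0., 0.]])
lengths = to.as_tensor([3, 1, 2])

mask = to.lengths_to_non_pad_mask(lengths, max_len=4)
mean_over_valid = to.masked_mean(batch, mask)
# mean over the 6 non-padded values: (1+2+3+4+5+6) / 6 = 3.5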

Other tensor manipulations!

import torchoutil as to

x = to.as_tensor([1, 2, 3, 4])
result = to.insert_at_indices(x, indices=[0, 2], values=5)
# result contains tensor with inserted values: tensor([5, 1, 2, 5, 3, 4])

import torchoutil as to

perm = to.randperm(10)
inv_perm = to.get_inverse_perm(perm)

x1 = to.rand(10)
x2 = x1[perm]
x3 = x2[inv_perm]
# inv_perm reverses perm: applying it to x2 recovers the original order, so x1 == x3 here
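
A common use case is restoring the original order after sorting, e.g. when sequences are sorted by length before batching. The sketch below combines plain torch.sort with get_inverse_perm:

import torch
import torchoutil as to

lengths = torch.as_tensor([2, 5, 3])
sorted_lengths, perm = torch.sort(lengths, descending=True)
# ... process items in sorted order here ...
inv_perm = to.get_inverse_perm(perm)
restored = sorted_lengths[inv_perm]
# restored == lengths, i.e. back to the original order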

Pre-compute datasets to HDF files

Here is an example of pre-computing spectrograms for the torchaudio SPEECHCOMMANDS dataset, using the pack_to_hdf function:

from torchaudio.datasets import SPEECHCOMMANDS
from torchaudio.transforms import Spectrogram
from torchoutil import nn
from torchoutil.extras.hdf import pack_to_hdf

speech_commands_root = "path/to/speech_commands"
packed_root = "path/to/packed_dataset.hdf"

dataset = SPEECHCOMMANDS(speech_commands_root, download=True, subset="validation")
# dataset[0] is a tuple containing the waveform and other metadata

class MyTransform(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.spectrogram_extractor = Spectrogram()

    def forward(self, item):
        waveform = item[0]
        spectrogram = self.spectrogram_extractor(waveform)
        return (spectrogram,) + item[1:]

pack_to_hdf(dataset, packed_root, MyTransform())

Then you can load the pre-computed dataset using HDFDataset:

from torchoutil.extras.hdf import HDFDataset

packed_root = "path/to/packed_dataset.hdf"
packed_dataset = HDFDataset(packed_root)
packed_dataset[0]  # == first transformed item, i.e. transform(dataset[0])
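
Since HDFDataset is indexed like a regular map-style dataset, it can be wrapped in a standard DataLoader. The snippet below is a minimal sketch and assumes the transformed items have shapes compatible with the default collate function:

from torch.utils.data import DataLoader
from torchoutil.extras.hdf import HDFDataset

packed_dataset = HDFDataset("path/to/packed_dataset.hdf")
loader = DataLoader(packed_dataset, batch_size=8, shuffle=True)

for batch in loader:
    ...  # train or evaluate on the pre-computed spectrograms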

Extras requirements

torchoutil also provides additional modules when certain optional packages are installed in your environment. All extras can be installed with pip install torchoutil[extras]

  • If tensorboard is installed, the function load_event_file can be used. It is useful for manually loading all the data contained in a tensorboard event file.
  • If numpy is installed, the classes NumpyToTensor and ToNumpy and their related functions can be used. They are meant to compose dynamic transforms into a Sequential module.
  • If h5py is installed, the function pack_to_hdf and the class HDFDataset can be used. They pack/read datasets to/from HDF files and support variable-length sequences of data.
  • If pyyaml is installed, the functions to_yaml and load_yaml can be used (see the sketch after this list).
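
For instance, with pyyaml installed, saving and reloading a plain dictionary could look like the sketch below. The import location and the (object, path) signature of to_yaml are assumptions, not taken from this page:

from torchoutil import to_yaml, load_yaml  # assumed import location, requires pyyaml

data = {"model": "cnn", "lr": 1e-3, "epochs": 10}
to_yaml(data, "config.yaml")        # assumed signature: (object, path)
reloaded = load_yaml("config.yaml")
assert reloaded == data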

Contact

Maintainer: