ClipCap

Using pretrained encoder and language models to generate captions from multimedia inputs.


Keywords
machine-learning, image-captioning, audio-captioning, CLIP, GPT, encoder-decoder, language-model, vision-transformer, vqa
License
MIT
Install
pip install ClipCap==1.0.0

Documentation

ClipCap

ClipCap uses pretrained encoder and language models to generate captions from multimedia inputs, enabling high-fidelity text generation that draws on the rich textual knowledge already learned by pretrained LMs. It supports tasks such as image captioning, VQA, audio captioning, and more.
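
As a rough illustration of the underlying idea (following the original ClipCap paper, not necessarily this repository's exact architecture), a small mapping network projects a frozen encoder's embedding into a sequence of prefix embeddings that the language model then conditions on. The dimensions and MLP shape below are illustrative assumptions:

import torch
import torch.nn as nn

class PrefixMapper(nn.Module):
    """Projects a frozen encoder embedding (e.g. from CLIP or CLAP) into
    `prefix_length` embeddings in the language model's input space."""

    def __init__(self, encoder_dim: int, lm_dim: int, prefix_length: int):
        super().__init__()
        self.prefix_length = prefix_length
        self.lm_dim = lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(encoder_dim, lm_dim * prefix_length),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_length, lm_dim * prefix_length),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # (batch, encoder_dim) -> (batch, prefix_length, lm_dim)
        return self.mlp(embedding).view(-1, self.prefix_length, self.lm_dim)

# e.g. 512-d CLIP ViT-B/32 embeddings mapped to 10 GPT-2 prefix tokens (768-d)
mapper = PrefixMapper(encoder_dim=512, lm_dim=768, prefix_length=10)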

More details and results to come soon.

Installation

By default, the encoders are not installed, keeping the base package lightweight. See the data preprocessing documentation for instructions on installing them.

pip install git+https://github.com/TheoCoombes/ClipCap.git
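
For example, the CLIP encoder is commonly installed straight from OpenAI's repository; this exact command is an assumption here, and the data preprocessing documentation is authoritative:

pip install git+https://github.com/openai/CLIP.git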

Supported Encoders

  • CLIP for tasks such as Image Captioning, VQA, etc. (see the example after this list)
  • CLAP for tasks such as Audio Captioning, Audio Question Answering, etc.
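
As an example of what an encoder provides, the sketch below computes a single CLIP image embedding using OpenAI's clip package; the preprocessing script performs this step at scale over a whole dataset:

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode one image into a 512-dimensional embedding.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    embedding = model.encode_image(image)  # shape: (1, 512)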

Data Preprocessing

You can run the data preprocessing script using the command below. (More info)

python3 -m clipcap.preprocess --help

Training

You can run the training script using preprocessed data with the command below. (More info)

python3 -m clipcap.train --help
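
For intuition, the sketch below shows the prefix language-modeling objective described in the original ClipCap paper: the mapped prefix is concatenated with the caption's token embeddings, and the LM learns to predict each caption token. This illustrates the objective only and is not this repository's actual training loop (run the command above with --help for the real options):

import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel

lm = GPT2LMHeadModel.from_pretrained("gpt2")

def caption_loss(prefix: torch.Tensor, caption_ids: torch.Tensor) -> torch.Tensor:
    # prefix: (batch, prefix_length, lm_dim) from the mapping network
    # caption_ids: (batch, seq_len) tokenized caption
    token_embeds = lm.transformer.wte(caption_ids)  # (batch, seq_len, lm_dim)
    inputs = torch.cat([prefix, token_embeds], dim=1)
    logits = lm(inputs_embeds=inputs).logits
    # Each caption token is predicted from the position directly before it;
    # the prefix positions themselves carry no labels.
    logits = logits[:, prefix.shape[1] - 1 : -1]
    return F.cross_entropy(logits.reshape(-1, logits.shape[-1]),
                           caption_ids.reshape(-1))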

Acknowledgments

This repository is heavily based on @rmokady's original implementation of ClipCap and also contains modified versions of @rom1504's clip-inference and embedding-reader libraries. Many thanks to both for their amazing work :)

TODO

Improved documentation, plus evaluation and inference scripts, are coming soon.