featurex

Multimodal feature extraction in Python


Keywords
audio, feature-extraction, python, video
License
MIT
Install
pip install featurex==0.0.1

pliers: a python package for automated feature extraction

Pliers is a Python package for automated extraction of features from multimodal stimuli. It provides a unified, standardized interface to dozens of different feature extraction tools and services--including many state-of-the-art deep learning-based models and content analysis APIs. It's designed to let you rapidly and flexibly extract all kinds of useful information from videos, images, audio, and text.

You might benefit from pliers if you need to accomplish any of the following tasks (and many others!):

  • Identify objects or faces in a series of images
  • Transcribe the speech in an audio or video file
  • Apply sentiment analysis to text
  • Extract musical features from an audio clip
  • Apply a part-of-speech tagger to a block of text

Each of the above tasks can typically be accomplished in 2 - 3 lines of code with pliers. Combining them all--and returning a single, standardized DataFrame--might take a bit more work; say, 5 or 6 lines.
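For instance, extracting spectral power features from an audio clip might look like the minimal sketch below. The `STFTAudioExtractor` class and its parameters follow the pliers documentation as we understand it, and `speech.wav` is a placeholder path; details may differ across versions.

```python
from pliers.extractors import STFTAudioExtractor

# Short-time Fourier transform power in three frequency bands;
# 'speech.wav' is a placeholder for your own audio file.
ext = STFTAudioExtractor(frame_size=1.,
                         freq_bins=[(100, 300), (300, 3000), (3000, 20000)])
result = ext.transform('speech.wav')

# Every extractor returns a result that converts to a standardized DataFrame.
df = result.to_df()
print(df.head())
```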

In a nutshell, pliers provides a high-level, unified interface to a large number of feature extraction tools spanning a wide range of modalities.
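As a rough illustration of how results from different modalities end up in one table, the sketch below runs two extractors and merges their outputs. The class names (`LengthExtractor`, the stimulus wrappers) and the `merge_results` helper are taken from the pliers documentation rather than from this page, and `clip.wav` is a placeholder file.

```python
from pliers.stimuli import AudioStim, TextStim
from pliers.extractors import STFTAudioExtractor, LengthExtractor, merge_results

# One extractor per modality: spectral features from audio,
# a simple text-length feature from a text stimulus.
audio_result = STFTAudioExtractor().transform(AudioStim('clip.wav'))
text_result = LengthExtractor().transform(
    TextStim(text='pliers makes feature extraction easy'))

# merge_results collapses heterogeneous extractor results into a
# single standardized pandas DataFrame.
df = merge_results([audio_result, text_result])
print(df.columns)
```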

Documentation

The official pliers documentation on ReadTheDocs is comprehensive and includes a quickstart, a full API reference, and more.

Pliers overview (with application to naturalistic fMRI)

Pliers is a general-purpose tool; naturalistic fMRI is just one domain where it is useful.

Tutorial Video

This tutorial video is part of a course on naturalistic data.

How to cite

If you use pliers in your work, please cite both the pliers package and the following paper:

McNamara, Q., De La Vega, A., & Yarkoni, T. (2017, August). Developing a comprehensive framework for multimodal feature extraction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1567-1574). ACM.