Vectors-of-Locally-Aggregate-Concepts

Tool for creating document features

License: MIT

Install: pip install Vectors-of-Locally-Aggregate-Concepts==0.1

Vectors of Locally Aggregated Concepts (VLAC)

Installation

Installation can be done through PyPI:

pip install vlac

Purpose

As illustrated in the Figure below, VLAC clusters word embeddings to create k concepts. Because word embeddings are high-dimensional (e.g., 300 values), spherical k-means is used for the clustering, since applying Euclidean distance would result in little difference in the distances between samples.

The method works as follows. Let w_i be a word embedding of size D assigned to cluster center c_k. Then, for each cluster, VLAC computes the element-wise sum of the residuals between each word embedding in a document and its assigned cluster center. This results in k feature vectors, one for each concept, each of size D. All feature vectors are then concatenated, power normalized, and finally L2-normalized. For example, if 10 concepts were created from word embeddings of size 300, the resulting document vector would contain 10 x 300 = 3,000 values.
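
To make this construction concrete, below is a minimal NumPy sketch of the per-document computation. It is an illustration only, not the package's internal implementation: scikit-learn's KMeans on L2-normalized embeddings is used as a rough stand-in for spherical k-means, and the function name vlac_doc_vector and the toy data are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def vlac_doc_vector(word_vectors, kmeans):
    # word_vectors: (n_words, D) embeddings of the words in one document
    # kmeans: fitted clustering with k cluster centers of size D
    k, D = kmeans.cluster_centers_.shape
    residual_sums = np.zeros((k, D))

    # Element-wise sum of residuals of each word embedding to its assigned center
    for vec, cluster in zip(word_vectors, kmeans.predict(word_vectors)):
        residual_sums[cluster] += vec - kmeans.cluster_centers_[cluster]

    # Concatenate the k concept vectors into a single vector of size k * D
    features = residual_sums.ravel()

    # Power (signed square-root) normalization followed by L2 normalization
    features = np.sign(features) * np.sqrt(np.abs(features))
    norm = np.linalg.norm(features)
    return features / norm if norm > 0 else features

# Toy usage: 200 random "word embeddings" of size 300, clustered into 10 concepts.
# L2-normalizing the embeddings before k-means approximates spherical k-means.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(200, 300))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
kmeans = KMeans(n_clusters=10, n_init=10, random_state=42).fit(embeddings)

doc = embeddings[:25]                      # embeddings of the words in one document
print(vlac_doc_vector(doc, kmeans).shape)  # (3000,) = 10 concepts x 300 dimensions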

Usage / Example

Below is an example of how to use the model. It uses the Reuters R8 dataset together with pre-trained word embeddings loaded from a pickle file.

from vlac import VLAC
import pickle

# Contains embeddings for Reuters R8
with open('Data/r8_glove_1f.pickle', 'rb') as handle:
    model = pickle.load(handle)

# Load data
with open('Data/r8_docs.txt', "r") as f:
    docs = f.readlines()

# Train model and transform collection of documents
vlac_model = VLAC(documents=docs, model=model, oov=False)
vlac_features, kmeans = vlac_model.fit_transform(num_concepts=30)

# Create features for new documents using the already fitted k-means model
vlac_model = VLAC(documents=docs, model=model, oov=False)
vlac_features = vlac_model.transform(kmeans=kmeans)
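
With the 30 concepts used above and, for instance, 300-dimensional embeddings, each document vector is expected to contain 30 x 300 = 9,000 values (see Purpose). A quick sanity check, assuming vlac_features is returned as a 2-D NumPy array:

# Assumption: vlac_features is a 2-D array of shape
# (number of documents, num_concepts * embedding dimension)
print(vlac_features.shape)  # e.g. (len(docs), 9000) for 300-dimensional embeddings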