A comprehensive lexical discovery application for finding semantic relationships such as antonyms, synonyms, hypernyms, hyponyms, and homophones, as well as definitions, for a specific word.


Keywords
antonyms, bag-of-words, definitions, hypernyms, hyponyms, homophones, information-retrieval, lexicon, semantic-relationships, synonyms, natural-language-processing, dictionary, nlp, python, python3, text-analysis, textual-analysis, wordlists, wordnet, wordnets, wordsearch
License
Other
Install
pip install wordhoard==1.5.1

Documentation

Overview


Primary Use Case

Textual analysis is a broad term for various research methodologies used to qualitatively describe, interpret, and understand text data. These methodologies are mainly used in academic research to analyze content related to media and communication studies, popular culture, sociology, and philosophy. Textual analysis allows these researchers to quickly obtain relevant insights from unstructured data. All types of information can be gleaned from textual data, especially from social media posts or news articles. Some of this information includes the overall concept of the subtext, symbolism within the text, assumptions being made, and the potential relative value to a subject (e.g., data science). In some cases it is possible to deduce the relative historical and cultural context of a body of text using analysis techniques coupled with knowledge from different disciplines, such as linguistics and semiotics.

Word frequency is a technique used in textual analysis to measure how often a specific word or word grouping occurs within unstructured data. Measuring the number of word occurrences in a corpus allows a researcher to garner interesting insights about the text. A related task is examining a given word's relationship to its antonyms and synonyms within the specific corpus being analyzed. Knowing these relationships is critical to improving word frequency counts and topic modeling.
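As an illustration of the word frequency technique described above, the short Python sketch below counts word occurrences in a small sample text using the standard library; the corpus string and variable names are hypothetical and not part of WordHoard.

```python
from collections import Counter
import re

# Hypothetical sample corpus used only for illustration
corpus = "The quick brown fox jumps over the lazy dog. The dog barks."

# Tokenize on letters/apostrophes and lowercase for case-insensitive counts
tokens = re.findall(r"[a-z']+", corpus.lower())

# Count occurrences of each word
frequencies = Counter(tokens)

print(frequencies.most_common(3))
# e.g. [('the', 3), ('dog', 2), ('quick', 1)]
```

Merging synonyms or antonyms identified by a tool such as WordHoard into these counts is one way to obtain more meaningful frequencies for topic modeling.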

WordHoard was designed to assist researchers performing textual analysis in building more comprehensive lists of antonyms, synonyms, hypernyms, hyponyms, and homophones.

Installation

Install the distribution via pip:

pip3 install wordhoard

General Package Utilization

Please reference the WordHoard Documentation for package usage guidance and parameters.
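As a quick orientation, the snippet below sketches a typical lookup. The class and method names shown (Synonyms, find_synonyms, Antonyms, find_antonyms) follow the pattern used in the project's documentation, but they are shown here as an assumption and should be verified against the documentation for the installed version.

```python
# Minimal usage sketch; verify class and method names against the
# WordHoard documentation for the version you have installed.
from wordhoard import Antonyms, Synonyms

# Query the online sources for synonyms and antonyms of a single word
synonym_results = Synonyms(search_string='mother').find_synonyms()
antonym_results = Antonyms(search_string='mother').find_antonyms()

print(synonym_results)   # list of synonyms gathered from the online sources
print(antonym_results)   # list of antonyms gathered from the online sources
```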

Sources

This package is currently designed to query these online sources for antonyms, synonyms, hypernyms, hyponyms and definitions:

  1. classicthesaurus.com
  2. collinsdictionary.com
  3. merriam-webster.com
  4. synonym.com
  5. thesaurus.com
  6. wordhippo.com
  7. wordnet.princeton.edu

Dependencies

This package has these core dependencies:

  1. backoff
  2. BeautifulSoup
  3. deckar01-ratelimit
  4. deepl
  5. lxml
  6. requests
  7. urllib3

Additional details on this package's dependencies can be found here.

Development Roadmap

If you would like to contribute to the WordHoard project please read the contributing guidelines.

Items currently under development:

  • Expanding the list of hypernyms, hyponyms and homophones
  • Adding part-of-speech filters in queries

Issues

This repository is actively maintained. Feel free to open any issues related to bugs, coding errors, broken links or enhancements.

You can also contact me, John Bumgarner, directly with any issues or enhancement requests.

Sponsorship

If you would like to contribute financially to the development and maintenance of the WordHoard project please read the sponsorship information.

License

The MIT License (MIT). Please see License File for more information.

Author

Copyright (c) 2020 John Bumgarner