# LLM Blocks

LLM Blocks is a Python library that provides a flexible, easy-to-use interface for interacting with OpenAI's GPT models. It offers classes and methods for handling different kinds of interactions with a model, such as chat, templated prompts, and streamed responses.
## Table of Contents

- [Why Use LLM Blocks](#why-use-llm-blocks)
- [Repo Structure](#repo-structure)
- [Installation](#installation)
- [Usage](#usage)
- [Testing](#testing)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
## Why Use LLM Blocks

LLM Blocks simplifies interacting with OpenAI's GPT models. Its classes and methods abstract away the complexity of the underlying API calls, letting you focus on what matters most: building your application. Whether you're building a chatbot, a code generator, or any other AI-powered application, LLM Blocks can help you get there faster.
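For context, here is a rough sketch of the kind of request-building boilerplate a library like this hides. The helper name and default model below are hypothetical (not part of LLM Blocks), but the payload follows OpenAI's chat-completions message format:

```python
# Hypothetical helper illustrating the boilerplate LLM Blocks abstracts away.
# The dict shape follows OpenAI's chat-completions format; the function name
# and default model are illustrative only.
def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Hello, world!")
```

With LLM Blocks, this assembly (plus sending the request and unpacking the response) is handled for you by a block object.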
๐ Repo Structure
llm_blocks
โโโ blocks.py
โโโ block_factory.py
โโโ __init__.py
โโโ requirements.dev.txt
tests
โโโ test_blocks.py
## Installation

To install LLM Blocks, use pip:

```shell
pip install llm_blocks
```
## Usage

Here's a simple example of how to use LLM Blocks:

```python
from llm_blocks import block_factory

# Create a block
block = block_factory.get('block')

# Execute the block with some content
response = block.execute("Hello, world!")
# or call the block like a function
response = block("Hello, world!")

# Print the response
print(response)
```
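The dual `block.execute(...)` / `block(...)` interface shown above is the standard Python callable-object pattern. A minimal, hypothetical illustration of that pattern (an echoing stand-in, not the library's actual block class):

```python
class EchoBlock:
    """Hypothetical stand-in for a block; a real block would query a GPT model."""

    def execute(self, content: str) -> str:
        # A real block would send `content` to the model and return its reply.
        return f"echo: {content}"

    def __call__(self, content: str) -> str:
        # Delegating to execute() makes block(...) and block.execute(...) equivalent.
        return self.execute(content)

block = EchoBlock()
print(block("Hello, world!"))  # same result as block.execute("Hello, world!")
```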
## Testing

To run the tests, navigate to the root directory of the project and run:

```shell
python -m unittest discover tests
```
## Contributing

Contributions are welcome! Please read our contributing guidelines to get started.
## License

This project is licensed under the terms of the MIT license. See the LICENSE file for details.
## Contact

If you have any questions, feel free to reach out to us at contact@llmblocks.com.