A GPT conversational TUI tool that runs within the terminal.


Keywords
TUI, terminal, GPT, CLI, textual, user, interface, ai, ai-group-chat, openai, python
License
Other
Install
pip install gptui==0.2.1

Documentation

GPTUI


English readme | 简体中文 readme

gptui_logo

GPTUI is a GPT conversational TUI (Textual User Interface) tool that runs within the terminal. It uses the Textual framework for its TUI and adopts the plugin framework provided by Semantic Kernel. GPTUI offers a lightweight Kernel to power AI applications. The top-level TUI application is decoupled from the underlying Kernel, allowing you to easily replace the TUI interface or expand its functionalities. At present, only OpenAI's GPT models are supported; interfaces for other LLMs will be added later.

  gptui_demo

TUI Features

  • Create and manage conversations with GPT.
  • Display context tokens in real-time.
  • View and adjust GPT conversation parameters at any time, such as temperature, top_p, presence_penalty, etc.
  • A dedicated channel to display internal process calls.
  • Offers a file channel through which you can upload files to or download files from GPT.
  • Voice functionality.
  • Group talk functionality [1][2].
  • AI-Care. Your AI can proactively care for you [3].
  • Optional built-in plugins (continuously evolving):
    • Internet search [4].
    • Open Interpreter [5][2][1]. (Temporarily removed, waiting to be added back after it supports openai v1.x.)
    • Reminders [1].
    • Recollecting memories from vectorized conversation history.
  • Supports custom plugins.

gptui_img

🎬 Demo Videos

Compatibility

GPTUI runs in a command-line environment and is compatible with Linux, macOS, Windows, and Android [6]. Using the functionality provided by textual-web, you can also run GPTUI in the browser and share it with remote friends👍.
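As a rough sketch of how this could be set up (the configuration format shown here is illustrative; consult the textual-web documentation for the authoritative details), first install textual-web:

pip install textual-web

Then create a serve.toml that tells textual-web how to launch GPTUI:

[app.GPTUI]
command = "gptui"

And serve it:

textual-web --config serve.toml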

⚙️ GPTUI Kernel

GPTUI offers a lightweight Kernel for building AI applications, allowing you to easily expand GPTUI's capabilities or construct your own AI application.

gptui-framework

The kernel relies on jobs and handlers to perform specific functions. To achieve new functionality, all you need to do is write or combine your own jobs and handlers. The manager and kernel of GPTUI are entirely independent of the client application, enabling you to effortlessly relocate the manager or kernel for use elsewhere. The application layer of GPTUI (client) employs the CVM architecture, where the model layer provides foundational, reusable modules for interacting with LLMs, independent of specific view and controller implementations. If you wish to build your own AI application, you can start here, fully utilizing the kernel, manager, and models. To alter or expand UI functionality, typically only the controllers and views need to be modified.

See the Development Documentation for details.

If you need to use certain functions or components of gptui, you can import and use gptui as a library.
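The sketch below only illustrates the job/handler idea described above; the class names and signatures are hypothetical and are not the actual gptui kernel API (see the development documentation for the real interfaces):

# Illustrative only: hypothetical names, not the real gptui kernel API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Job:
    # A unit of work for the kernel to perform.
    name: str
    payload: dict = field(default_factory=dict)

class Manager:
    # Dispatches submitted jobs to whichever handlers registered for them.
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[Job], None]]] = {}

    def register(self, job_name: str, handler: Callable[[Job], None]) -> None:
        self._handlers.setdefault(job_name, []).append(handler)

    def submit(self, job: Job) -> None:
        for handler in self._handlers.get(job.name, []):
            handler(job)

def print_reply(job: Job) -> None:
    # A trivial handler; a real one might stream an LLM response into the TUI.
    print(f"[{job.name}] {job.payload.get('text', '')}")

manager = Manager()
manager.register("chat_reply", print_reply)
manager.submit(Job(name="chat_reply", payload={"text": "hello"}))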

Installation

Normal use requires a stable network connection to OpenAI. If you encounter any issues, please refer to troubleshooting.

Install with pip

pip install gptui

Configure your API keys before running.

To run:

gptui

Specify config file:

gptui --config <your_config_file_path>

This program locates its configuration file through the following steps:

  1. Read the configuration file from --config. If not specified, proceed to the next step.
  2. Search for ~/.gptui/.config.yml in the user directory. If not found, move to the next step.
  3. Copy the default configuration file gptui/config.yml to ~/.gptui/.config.yml and use it (see the example below).
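For example, if you prefer to pre-seed the user-level configuration yourself instead of letting step 3 do it (the source path below is a placeholder for wherever the gptui package is installed or checked out):

mkdir -p ~/.gptui
cp <path_to_gptui_package>/config.yml ~/.gptui/.config.yml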

Install from source

git clone https://github.com/happyapplehorse/gptui.git
cd gptui
pip install .

API configuration is required before running.

To run:

gptui
# Or you can also use
# python -m gptui

You can also run the startup script directly (this allows you to modify the source code and run it immediately). First, install the dependencies:

pip install -r requirements.txt

Then, run the startup script:

python main.py

When the program is run with python main.py or python -m gptui, gptui/config.yml is used as the configuration file.

On Linux or macOS systems, if you want to use voice functionalities, you'll need to install pyaudio separately.
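A typical install sequence (assuming an apt-based Linux distribution or Homebrew on macOS; pyaudio needs the PortAudio library to build):

# Debian/Ubuntu
sudo apt install portaudio19-dev
# macOS
brew install portaudio

pip install pyaudio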

Configuration

Config API keys

Configure the corresponding API Keys in ~/.gptui/.env_gptui. Refer to the .env_gptui.example file. When using the "WebServe" plugin, GOOGLE_KEY and GOOGLE_CX need to be provided, which can be obtained free of charge from Google.
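A sketch of what this file might contain (OPENAI_API_KEY is an assumed variable name; .env_gptui.example is the authoritative reference for the exact names):

# ~/.gptui/.env_gptui
OPENAI_API_KEY=sk-...              # assumed name; check .env_gptui.example
GOOGLE_KEY=<your_google_api_key>   # only needed for the WebServe plugin
GOOGLE_CX=<your_google_cx_id>      # only needed for the WebServe plugin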

Config File

See ./config.yml for a config file example that lists all configurable options. Depending on the platform you are using, it is best to configure at least the following option:

  • os: system platform

Otherwise, some features may not work properly, such as code copying and voice-related functions.
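For example (the value shown is illustrative; the accepted values are listed in the shipped ./config.yml):

# in ~/.gptui/.config.yml
os: linux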

Configuration Guide

For detailed configuration instructions, please refer to here.

Quick Start

Interface Layout

gptui-layout

  • chat area: Display area for chat content.
  • status area: Program status display area, displaying response animations and notifications.
  • input area: Chat content input area.
  • auxiliary area: Auxiliary information area, displaying "internal communication" between the program and the LLM, including function call information, etc.
  • control area: The program's control area, where you can view and set the state of the program, such as changing OpenAI chat parameters.
  • chat tabs: Conversation Tab Bar.
  • conversation control: Conversation control buttons. From top to bottom they are:
    • +: New conversation
    • >: Save conversation
    • <: Load conversation
    • -: Delete conversation
    • x: Delete conversation file
    • n: Disposable conversation
    • : Upload file
  • panel selector: Panel selection area. From top to bottom they are:
    • C: Conversation file records
    • D: System file tree
    • A: Auxiliary information panel
    • T: File pipeline panel
    • P: Plugin selection panel
  • switches: Direct control switches. From left to right they are:
    • R: Program state auto save and restore switch
    • V: Voice switch
    • S: Read reply by voice
    • F: Fold files in chat
    • |Exit|: Exit program
  • dashboard: Context window size for chat.
  • others:
    • <: Previous chat
    • >: Next chat
    • 1: Number of chats
    • : Running status (see the Running status section below)
    • : Fold right non-chat area
    • ?: Help documentation

Running status

: Ready.
: Task running.

Dynamic Commands

Switch to S in the control area, enter the command, and press Enter. The following commands are currently supported:

  • Set chat parameters Command: set_chat_parameters()
    Parameters: OpenAI chat parameters in dictionary form, refer to OpenAI Chat.
    Example: set_chat_parameters({"model": "gpt-4", "stream": True})
  • Set max sending tokens ratio Command: set_max_sending_tokens_ratio()
    Parameters: The ratio of the number of sent tokens to the total token window, as a float. The remaining token count is used as the limit for the number of tokens GPT returns (see the worked example below).
    Example: set_max_sending_tokens_ratio(0.5)
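For example, if the model's context window is 8,192 tokens, set_max_sending_tokens_ratio(0.75) allows at most about 6,144 tokens of conversation context to be sent, leaving roughly 2,048 tokens for GPT's reply.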

Hotkeys

GPTUI provides hotkeys for commonly used features, see Help. In addition, you can also press ESC, ctrl+[, or ctrl+/ to bring up the hotkey menu (this mode offers more comprehensive hotkey functionalities, but they are not exactly the same as the direct hotkeys).

Documentation

For detailed usage and development instructions, see here; for the in-program help documentation, see here.

Contribution

Some of GPTUI's plugin features rely on prompts; you can help me continue to improve these prompts. I'd also like to have appropriate animation cues during certain state changes. If you have any creative ideas, I'd appreciate your help in implementing them. P.S.: Each contributor can leave a quote in the program.

Note

This project utilizes OpenAI's Text-to-Speech (TTS) services for generating voice outputs. Please be aware that the voices you hear are not produced by human speakers, but are synthesized by AI technology.

License

GPTUI is built upon a multitude of outstanding open-source components and is released under the MIT License. You are free to use it.

Footnotes

  1. It is recommended to use this with the GPT-4 model or a better one.

  2. Note: This feature may incur a significant token cost.

  3. Powered by AI-Care.

  4. GOOGLE_KEY and GOOGLE_CX are required. They can be obtained for free from here.

  5. This plugin utilizes open-interpreter; you need to first follow the instructions provided by open-interpreter to properly set up its environment and API. open-interpreter has permission to execute code, so please ensure that you are aware of the associated risks before enabling this feature.

  6. I haven't tested it on the Windows platform yet, and some functionalities, such as code copying and voice features, still need drivers to be written; I will complete these later. When running on Android, please use the Termux terminal tool. For additional features like code copying and voice functionalities, you need to install Termux-API and grant the necessary permissions.