langchain-teddynote

LangChain Helper Library


Keywords
langchain, teddynote, rag
License
Apache-2.0
Install
pip install langchain-teddynote==0.0.1


๋žญ์ฒด์ธ ํ•œ๊ตญ์–ด ํŠœํ† ๋ฆฌ์–ผ์— ์‚ฌ์šฉ๋˜๋Š” ๋‹ค์–‘ํ•œ ์œ ํ‹ธ ํŒŒ์ด์ฌ ํŒจํ‚ค์ง€.

LangChain ์„ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋ถˆํŽธํ•œ ๊ธฐ๋Šฅ์ด๋‚˜, ์ถ”๊ฐ€์ ์ธ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

์„ค์น˜

pip install langchain-teddynote

Usage

Streaming Output

The package provides a stream_response function for streaming a model's output.

from langchain_teddynote.messages import stream_response
from langchain_openai import ChatOpenAI

# ๊ฐ์ฒด ์ƒ์„ฑ
llm = ChatOpenAI(
    temperature=0.1,  # ์ฐฝ์˜์„ฑ (0.0 ~ 2.0)
    model_name="gpt-4o",  # ๋ชจ๋ธ๋ช…
)
answer = llm.stream("๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์•„๋ฆ„๋‹ค์šด ๊ด€์žฅ์ง€ 10๊ณณ๊ณผ ์ฃผ์†Œ๋ฅผ ์•Œ๋ ค์ฃผ์„ธ์š”!")

# ์ŠคํŠธ๋ฆฌ๋ฐ ์ถœ๋ ฅ๋งŒ ํ•˜๋Š” ๊ฒฝ์šฐ
stream_response(answer)

# ์ถœ๋ ฅ๋œ ๋‹ต๋ณ€์„ ๋ฐ˜ํ™˜ ๊ฐ’์œผ๋กœ ๋ฐ›๋Š” ๊ฒฝ์šฐ
# final_answer = stream_response(answer, return_output=True)
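
For example, a minimal sketch that streams a fresh answer while also capturing the accumulated text (the prompt is only an illustration; note that a stream can be consumed just once):

# Stream the answer and keep the full text as a return value
answer = llm.stream("Summarize the history of Hangul in two sentences.")
final_answer = stream_response(answer, return_output=True)
print(final_answer)  # the complete answer as a single string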

LangSmith Tracing

# Set up LangSmith tracing. https://smith.langchain.com
# This assumes the required environment variables are already configured.
from langchain_teddynote import logging

# Enter your project name.
logging.langsmith("your-project-name")

Output

Starting LangSmith tracing.
[Project Name]
(the project name you entered)
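
The helper above assumes your LangSmith API key is already present in the environment. A minimal sketch of that assumption (the key value is a placeholder):

import os

# Assumed environment setup for LangSmith tracing; use your own API key.
os.environ["LANGCHAIN_API_KEY"] = "lsv2_..."  # placeholder value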

Multimodal Models (Image Input)

from langchain_teddynote.models import MultiModal
from langchain_teddynote.messages import stream_response
from langchain_openai import ChatOpenAI

# Create the model
llm = ChatOpenAI(
    temperature=0.1,  # creativity (0.0 ~ 2.0)
    model="gpt-4o",  # model name
)

# Prompts for the multimodal model
system_prompt = """You are a financial AI assistant that interprets tables (financial statements).
Your task is to summarize interesting facts from the given table-formatted financial statement and answer kindly."""

user_prompt = """The table you are given is a company's financial statement. Summarize the interesting facts and answer."""

# Create the multimodal object
multimodal_llm = MultiModal(
    llm, system_prompt=system_prompt, user_prompt=user_prompt
)

# Sample image URL (recognized directly from the website)
IMAGE_URL = "https://storage.googleapis.com/static.fastcampus.co.kr/prod/uploads/202212/080345-661/kwon-01.png"

# Or pass the path of an image stored on your local PC
# IMAGE_URL = "./images/sample-image.png"

# Query the model with the image file
answer = multimodal_llm.stream(IMAGE_URL)
# Print each token as it streams in (real-time output)
stream_response(answer)

OpenAI Assistant V2

import os

from langchain_teddynote.models import OpenAIAssistant


# RAG system prompt
_DEFAULT_RAG_INSTRUCTIONS = """You are an assistant for question-answering tasks. 
Use the following pieces of retrieved context to answer the question. 
If you don't know the answer, just say that you don't know. 
Answer in Korean."""


# Settings (configs)
openai_api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

configs = {
    "OPENAI_API_KEY": openai_api_key,  # OpenAI API key
    "instructions": _DEFAULT_RAG_INSTRUCTIONS,  # RAG system prompt
    "PROJECT_NAME": "PDF-RAG-TEST",  # project name (choose freely)
    "model_name": "gpt-4o",  # OpenAI model name (gpt-4o, gpt-4o-mini, ...)
    "chunk_size": 1000,  # chunk size
    "chunk_overlap": 100,  # chunk overlap
}


# Create the instance
assistant = OpenAIAssistant(configs)

# Path of the file to upload
data = "filename.pdf"

# Keep the returned file_id somewhere safe (you can also check it later in the dashboard)
file_id = assistant.upload_file(data)

# Build the list of uploaded file IDs
file_ids = [file_id]

# Create a new assistant and receive its IDs
assistant_id, vector_id = assistant.create_new_assistant(file_ids)

# Configure the assistant
assistant.setup_assistant(assistant_id)

# Configure the vector store
assistant.setup_vectorstore(vector_id)
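
Because assistant_id and vector_id are plain strings, you can save them and skip the upload and creation steps in a later session. A minimal sketch reusing the same API as above (the ID values are placeholders):

# Reuse a previously created assistant in a new session
assistant = OpenAIAssistant(configs)
assistant.setup_assistant("asst_...")  # your saved assistant_id
assistant.setup_vectorstore("vs_...")  # your saved vector_id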

Streaming Output

for token in assistant.stream("What is the name of the generative AI developed by Samsung Electronics?"):
    print(token, end="", flush=True)

Or:

from langchain_teddynote.messages import stream_response

stream_response(assistant.stream("Translate the previous answer into English."))

Standard Output

# Ask a question
print(assistant.invoke("What is the name of the generative AI developed by Samsung Electronics?"))

Retrieving the Chat History

# Retrieve the list of conversations
assistant.list_chat_history()

Clearing the Chat History

# Clear the conversation history
assistant.clear_chat_history()

Tutorials