Exploring LangChain's Quickstart (1) - LLM, Prompt Template, and Chain
In this series, we’ll explore the ‘Quickstart’ section of the LangChain documentation.
In this article, we focus on LLMs, prompt templates, and chains.
1. Installation
To get started, install langchain and its OpenAI extension, langchain-openai:
pip install langchain
pip install langchain-openai
We are using the following versions. Note that LangChain often introduces breaking changes, so be careful if you are on different versions:
$ pip list | grep langchain
langchain                 0.1.17
langchain-community       0.0.37
langchain-core            0.1.52
langchain-openai          0.1.6
langchain-text-splitters  0.0.1
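If you want to reproduce this article exactly, you can pin the versions listed above when installing:
pip install "langchain==0.1.17" "langchain-openai==0.1.6"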
2. Set Up the API Key
Next, set your OpenAI API key in the environment variable OPENAI_API_KEY.
2.1. Save the API Key in a File
Create a .openai file in your working directory and write your API key in it.
You can obtain an API key from the OpenAI dashboard (https://platform.openai.com/api-keys).
2.2. Set the API Key as an Environment Variable
Load the API key from the .openai file and set it as the environment variable OPENAI_API_KEY.
import os

# Read the API key from .openai and expose it as an environment variable
with open('.openai') as f:
    os.environ['OPENAI_API_KEY'] = f.read().strip()
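If you would rather not keep the key in a file, a minimal alternative is to enter it interactively with Python's standard getpass module:
import getpass
import os

# Prompt for the key without echoing it to the terminal
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API key: ')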
3. Talk with ChatGPT Using LangChain
3.1. Initialize the LLM
First, load OpenAI’s chatbot (LLM). If no arguments are specified, it defaults to the gpt-3.5-turbo model.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
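If you want a specific model or more deterministic output, you can pass these as arguments instead of relying on the defaults; a minimal sketch using the model and temperature parameters:
# Pin the model and reduce randomness in the replies
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)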
3.2. Talk with the LLM
To talk with the LLM, use the invoke method.
llm.invoke("What is LangChain?")
The output will look something like this. (Note: the training data for gpt-3.5-turbo only extends to September 2021, so the LLM doesn’t actually know what LangChain is.)
AIMessage(content='LangChain is ...', response_metadata={'token_usage': {'completion_tokens': 75, 'prompt_tokens': 12, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-xxxxx')
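Note that invoke returns an AIMessage object, not a plain string. The reply text lives in its content attribute:
response = llm.invoke("What is LangChain?")
print(response.content)            # the reply text
print(response.response_metadata)  # token usage, model name, etc.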
OpenAI recently switched from a post-paid to a prepaid billing system. If you see an error like this, read the OpenAI documentation and ensure you have sufficient funds in advance. (It may take some time for your payment to be reflected.)
RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
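If you want to handle this in code rather than letting the script crash, you can catch the exception raised by the underlying openai package; a minimal sketch:
from openai import RateLimitError

try:
    llm.invoke("What is LangChain?")
except RateLimitError:
    # 429 insufficient_quota: top up your balance and try again later
    print("Quota exceeded - check your OpenAI billing settings.")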
4. Use Prompt Templates
If you find yourself using the same prompt format repeatedly, templates can simplify your workflow.
Here’s how you create a template that sends the system message “You are an excellent documentation writer.” and accepts the user’s message under the key input:
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages([
    ("system", "You are an excellent documentation writer."),
    ("user", "{input}")
])
Use the invoke method of the created template to generate prompts. Pass a dictionary with the key input.
template.invoke({"input": "What is LangChain?"})
The generated prompt will be:
ChatPromptValue(messages=[SystemMessage(content='You are an excellent documentation writer.'), HumanMessage(content='What is LangChain?')])
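A ChatPromptValue is a thin wrapper around the message list; if you want the raw messages (for logging or inspection), call its to_messages method:
# Inspect the individual messages inside the generated prompt
for message in template.invoke({"input": "What is LangChain?"}).to_messages():
    print(type(message).__name__, message.content)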
You can then input this prompt into the LLM to generate a response. (Again, the response will be incorrect.)
prompt = template.invoke({"input": "What is LangChain?"})
llm.invoke(prompt)
Execution Result
AIMessage(content='LangChain is ...', response_metadata={'token_usage': {'completion_tokens': 86, 'prompt_tokens': 23, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-xxxxx')
5. Combine Multiple Operations (Chain)
In the previous example, the following operations were performed:
User Input => Template => LLM => Response
LangChain allows you to combine these operations into a single process known as a chain. To create a chain, link each operation with a pipe (|).
chain = template | llm
Passing user input to the invoke method of the chain executes all linked operations and returns the result.
chain.invoke({"input": "What is LangChain?"})
Execution Result (response is incorrect)
AIMessage(content='LangChain is ...', response_metadata={'token_usage': {'completion_tokens': 99, 'prompt_tokens': 23, 'total_tokens': 122}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-xxxxx')
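Chains implement LangChain’s standard Runnable interface, so alongside invoke you also get batch for running several inputs at once; a minimal sketch:
# Each dictionary is rendered through the template and sent to the LLM
results = chain.batch([
    {"input": "What is LangChain?"},
    {"input": "What is a prompt template?"},
])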
5.1. Output as a String
To convert the AIMessage object output into a string, use the StrOutputParser.
from langchain_core.output_parsers import StrOutputParser
output_parser = StrOutputParser()
chain = template | llm | output_parser
chain.invoke({"input": "What is LangChain?"})
Execution Result (response is incorrect)
'LangChain is ...'
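Because the chain now emits plain strings, it also pairs naturally with streaming; stream is part of the same Runnable interface as invoke:
# Print the response token by token instead of waiting for the full reply
for chunk in chain.stream({"input": "What is LangChain?"}):
    print(chunk, end="", flush=True)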