
Prompt + LLM

The most common and useful composition is:

PromptTemplate / ChatPromptTemplate -> LLM / ChatModel -> OutputParser

Almost any other chain you build will use this building block.

PromptTemplate + LLM

The simplest composition just combines a prompt and a model to create a chain that takes user input, adds it to the prompt, passes it to the model, and returns the raw model output.

Note that you can mix and match PromptTemplate/ChatPromptTemplate and LLM/ChatModel here as you like.

pip install --upgrade --quiet langchain langchain-openai
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model
chain.invoke({"foo": "bears"})
AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!", additional_kwargs={}, example=False)
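The `|` operator composes Runnables so that each stage's output becomes the next stage's input. A toy sketch of that idea in plain Python (not LangChain's actual implementation, just the composition pattern):

```python
# Toy sketch of pipe composition: each stage's output feeds the next.
class Runnable:
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # Compose: run self first, then feed the result to `other`.
        return Runnable(lambda x: other.func(self.func(x)))

    def invoke(self, x):
        return self.func(x)

# Stand-ins for PromptTemplate -> ChatModel -> OutputParser
prompt = Runnable(lambda d: f"tell me a joke about {d['foo']}")
model = Runnable(lambda p: {"content": f"(joke about: {p})"})
parser = Runnable(lambda msg: msg["content"])

chain = prompt | model | parser
print(chain.invoke({"foo": "bears"}))
# -> (joke about: tell me a joke about bears)
```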

Often we want to attach kwargs that will be passed along with every model call. Here are a few examples:

Attaching stop sequences

chain = prompt | model.bind(stop=["\n"])
chain.invoke({"foo": "bears"})
AIMessage(content='Why did the bear never wear shoes?', additional_kwargs={}, example=False)
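Conceptually, `.bind` pre-attaches keyword arguments so they ride along on every subsequent call, much like partial application. A rough analogy in plain Python (a sketch, assuming nothing about LangChain internals):

```python
from functools import partial

def call_model(prompt, stop=None):
    # Pretend model: truncate output at the first stop sequence.
    text = "Why did the bear never wear shoes?\nBecause it had bear feet!"
    if stop:
        for s in stop:
            text = text.split(s)[0]
    return text

# Like model.bind(stop=["\n"]): the kwarg is attached once, used on every call.
bound = partial(call_model, stop=["\n"])
print(bound("tell me a joke about bears"))
# -> Why did the bear never wear shoes?
```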

Attaching function call information

functions = [
    {
        "name": "joke",
        "description": "A joke",
        "parameters": {
            "type": "object",
            "properties": {
                "setup": {"type": "string", "description": "The setup for the joke"},
                "punchline": {
                    "type": "string",
                    "description": "The punchline for the joke",
                },
            },
            "required": ["setup", "punchline"],
        },
    }
]
chain = prompt | model.bind(function_call={"name": "joke"}, functions=functions)
chain.invoke({"foo": "bears"}, config={})
AIMessage(content='', additional_kwargs={'function_call': {'name': 'joke', 'arguments': '{\n  "setup": "Why don\'t bears wear shoes?",\n  "punchline": "Because they have bear feet!"\n}'}}, example=False)
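Note that the `arguments` field in the response above is a JSON-encoded string, not a dict. You can decode it yourself with the standard library (or let an output parser do it, as in the next section):

```python
import json

# The `arguments` field of a function_call is a JSON string,
# matching the AIMessage shown above.
arguments = '{\n  "setup": "Why don\'t bears wear shoes?",\n  "punchline": "Because they have bear feet!"\n}'
joke = json.loads(arguments)
print(joke["setup"])      # -> Why don't bears wear shoes?
print(joke["punchline"])  # -> Because they have bear feet!
```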

PromptTemplate + LLM + OutputParser

We can also add an output parser to easily transform the raw LLM/ChatModel output into a more workable format:

from langchain_core.output_parsers import StrOutputParser

chain = prompt | model | StrOutputParser()
chain.invoke({"foo": "bears"})
"Why don't bears wear shoes?\n\nBecause they have bear feet!"
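StrOutputParser essentially just pulls the text content out of the model's message. A toy sketch of that behavior (not the real implementation):

```python
# Toy sketch: an output parser that extracts the text content
# from a chat-model message object, or passes a plain string through.
class Message:
    def __init__(self, content):
        self.content = content

def str_output_parser(output):
    # Chat messages carry text in .content; plain LLM output is already a str.
    return output.content if hasattr(output, "content") else output

print(str_output_parser(Message("Why don't bears wear shoes?")))
# -> Why don't bears wear shoes?
```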
When you specify a function for the model to call, you may want to parse out just one key of its arguments. `JsonKeyOutputFunctionsParser` does exactly that:

from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser

chain = (
    prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonKeyOutputFunctionsParser(key_name="setup")
)
chain.invoke({"foo": "bears"})
"Why don't bears wear shoes?"

Simplifying input

To make invocation even simpler, we can add a RunnableParallel that builds the prompt-input dict for us:

from langchain_core.runnables import RunnableParallel, RunnablePassthrough

map_ = RunnableParallel(foo=RunnablePassthrough())
chain = (
    map_
    | prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonKeyOutputFunctionsParser(key_name="setup")
)
chain.invoke("bears")
"Why don't bears wear shoes?"
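Conceptually, a RunnableParallel runs each of its entries against the same input and collects the results into a dict under the same keys. A rough sketch of that behavior (not the real implementation):

```python
# Toy sketch of RunnableParallel: apply every entry to the same input
# and gather the results into a dict keyed the same way.
def run_parallel(mapping, value):
    return {key: fn(value) for key, fn in mapping.items()}

# Like RunnableParallel(foo=RunnablePassthrough()): identity under "foo".
map_ = {"foo": lambda x: x}
print(run_parallel(map_, "bears"))  # -> {'foo': 'bears'}
```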

Since we're composing our map with another Runnable, we can even use some syntactic sugar and just write a plain dict:

chain = (
    {"foo": RunnablePassthrough()}
    | prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonKeyOutputFunctionsParser(key_name="setup")
)
chain.invoke("bears")
"Why don't bears like fast food?"