We can use Ollama to deploy the DeepSeek model locally. Ollama exposes an OpenAI-compatible API, and LangChain also supports calling OpenAI-style endpoints, so both can talk to the local model.
I. Calling DeepSeek in Ollama as an OpenAI-compatible model
1. Install the openai package:
pip install openai
2. Call the local DeepSeek model through the OpenAI client.
The api_key field is required here, but any value will do:
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    # required but ignored by Ollama
    api_key='ollama',
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': 'Why is the sky blue?',
        }
    ],
    model='deepseek-r1',
)

print(chat_completion.choices[0].message.content)
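Note that deepseek-r1 is a reasoning model: through the OpenAI-compatible endpoint, its chain of thought typically arrives embedded at the start of the reply as a `<think>...</think>` block. If you only want the final answer, a small helper can strip it. This is a minimal sketch; the exact tag format is an assumption based on what Ollama currently emits for r1-style models:

```python
import re

def strip_think(content: str) -> str:
    # Remove a leading <think>...</think> reasoning block, if present,
    # and return only the final answer text.
    return re.sub(r'^\s*<think>.*?</think>\s*', '', content, flags=re.DOTALL)

print(strip_think('<think>Rayleigh scattering...</think>\nBecause of Rayleigh scattering.'))
# → Because of Rayleigh scattering.
```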
II. Calling DeepSeek as an OpenAI-compatible model in LangChain
LangChain provides init_chat_model to unify calls across most models, but its signature does not surface a base URL or API key explicitly:
from langchain.chat_models import init_chat_model
model = init_chat_model("gpt-4o-mini", model_provider="openai")
LangChain does, however, provide ChatOpenAI for the OpenAI integration.
1. Install the LangChain OpenAI integration package:
pip install -U langchain-openai
2. Specify the api key and base url through ChatOpenAI:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model='deepseek-r1',
    api_key="ollama",
    base_url="http://localhost:11434/v1/"
)

messages = [
    SystemMessage("You are an excellent physics popularizer for children."),
    HumanMessage("Hi, why is the sky blue?")
]

res = model.invoke(messages)
print(res.content)
III. Calling via a chain
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_openai import ChatOpenAI

def get_prompt():
    prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessagePromptTemplate.from_template(
                "You are a helpful translation assistant. "
                "Translate the user's input from {input_language} to {output_language}."
            ),
            HumanMessagePromptTemplate.from_template("{input}")
        ]
    )
    return prompt

def get_llm():
    llm = ChatOpenAI(
        base_url="http://localhost:11434/v1/",
        api_key="ollama",
        model="deepseek-r1"
    )
    return llm

def rag():
    llm = get_llm()
    prompt = get_prompt()
    chain = prompt | llm
    input = {
        "input_language": "Chinese",
        "output_language": "English",
        "input": "I love my motherland."
    }
    res = chain.invoke(input)
    print(res.content)

rag()
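The `|` in `prompt | llm` comes from LangChain's Runnable protocol: each side implements `invoke`, and `|` returns a new runnable that feeds the left side's output into the right side. The following toy sketch illustrates the idea only; it is not LangChain's actual implementation, and `Step` and the fake LLM are stand-ins:

```python
class Step:
    """Toy stand-in for a LangChain Runnable (illustration only)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` yields a step that runs a, then pipes its output into b
        return Step(lambda value: other.invoke(self.invoke(value)))

# A "prompt" that formats the input dict, and a fake "llm" that echoes it
prompt = Step(lambda d: f"Translate from {d['input_language']} to {d['output_language']}: {d['input']}")
fake_llm = Step(lambda text: "[fake reply] " + text)

chain = prompt | fake_llm
print(chain.invoke({
    "input_language": "Chinese",
    "output_language": "English",
    "input": "I love my motherland.",
}))
```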
IV. Streaming invocation
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_openai import ChatOpenAI

def get_prompt():
    prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessagePromptTemplate.from_template(
                "You are a helpful translation assistant. "
                "Translate the user's input from {input_language} to {output_language}."
            ),
            HumanMessagePromptTemplate.from_template("{input}")
        ]
    )
    return prompt

def get_llm():
    llm = ChatOpenAI(
        base_url="http://localhost:11434/v1/",
        api_key="ollama",
        model="deepseek-r1"
    )
    return llm

def rag():
    llm = get_llm()
    prompt = get_prompt()
    chain = prompt | llm
    input = {
        "input_language": "Chinese",
        "output_language": "English",
        "input": "I love my motherland."
    }
    for t in chain.stream(input):
        print(t.content, end="")

rag()
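`stream` yields message chunks as they arrive; each chunk's `content` is a fragment of the full reply, so printing them in order reconstructs it. A toy sketch of that accumulation pattern, with a plain generator standing in for `chain.stream`:

```python
def fake_stream(text, chunk_size=5):
    # Stand-in for chain.stream(): yields the reply a few characters at a time
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

parts = []
for t in fake_stream("I love my motherland."):
    print(t, end="")   # render each fragment as soon as it arrives
    parts.append(t)
print()

full_reply = "".join(parts)  # the concatenated chunks equal the full reply
```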