Everyone is excited about GenAI, chatbots and agents, and for good reason. This article offers a beginner-friendly introduction to building with OpenAI and LangChain in Python, helping you take your first steps into AI development. It covers the following topics:
- Installation of dependencies
- Connecting to the OpenAI GPT model
- Creating your first chat app
- Streaming the output
- Working with LangChain's SystemMessage and HumanMessage
- Working with JSON data and the model
Installation
Let us start by installing the dependencies. First, create an empty folder and add a file named requirements.txt. Inside this file, list the following dependencies:
openai>=1.0.0
langchain>=0.1.0
langchain-openai>=0.1.0
python-dotenv>=1.0.0
langgraph>=0.1.0
Once you’ve created the requirements.txt file, run the following command in the terminal to install all the dependencies:
pip3 install -r requirements.txt
You can verify that the dependencies were installed successfully by running the command below, which lists the installed packages.
pip3 list
Setting Up the Environment
After installing the dependencies, set up the environment variables. Create a file named .env in your project directory and add the following keys:
OPENAI_API_KEY="your-openai-api-key"
LANGSMITH_TRACING="true"
LANGSMITH_API_KEY="your-langsmith-api-key"
Make sure you have already obtained your OpenAI API key and LangSmith API key; you can generate them from the OpenAI and LangSmith portals, respectively.
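Before using the keys in code, it is worth confirming that they load correctly. The snippet below is a minimal sketch using python-dotenv (already listed in requirements.txt); load_dotenv() reads the .env file, and the print statement is only an illustrative check.

import os
from dotenv import load_dotenv

# Read key/value pairs from the .env file into environment variables
load_dotenv()

# Illustrative check: prints True if OPENAI_API_KEY was picked up from .env
print(os.environ.get("OPENAI_API_KEY") is not None)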
Working with the GPT Model
Once the environment variables are set, create a file named main.py in your project and import the following packages.
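The full listing is in the linked article; as a preview, here is a minimal sketch of what the imports and a first call to the GPT model typically look like with langchain-openai. The model name gpt-4o-mini, the example messages and the streaming loop are illustrative assumptions, not code taken from the article.

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

# Load OPENAI_API_KEY (and the LangSmith keys) from the .env file
load_dotenv()

# Connect to an OpenAI chat model; gpt-4o-mini is an assumed example model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# A first chat: a SystemMessage sets the behavior, a HumanMessage carries the user's prompt
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain LangChain in one sentence."),
]

response = llm.invoke(messages)
print(response.content)

# Streaming variant: print tokens as they arrive
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)

Keeping the key in .env rather than hard-coding it also means ChatOpenAI can pick it up automatically from the OPENAI_API_KEY environment variable.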
Read the full article here: https://www.telerik.com/blogs/build-llm-chat-app-using-langgraph–openai-python-part-1
Thanks for reading.