With so many people using LLMs, they have become more and more ingrained in our everyday lives. In particular, Agent chat is becoming a mainstream way to use all kinds of tools. It is now both familiar and increasingly expected in any modern application.
In this post, we are going to see what an agent chat is, why it is a new kind of interface, and even create one using LangGraph.
As usual, we will keep things simple so let’s get to it!
Agent Chat: Not Just Another Chat Interface
With the explosion of chat interfaces (ChatGPT, Claude, …), a huge share of people have now used them, making them familiar technology. But the chat that ChatGPT first introduced has changed completely and become something else: an agent-powered chat.
A classic LLM chat is just a direct interface with an LLM. You ask a question, it is sent directly to the LLM, and you get its answer. You ask a second question, and the LLM answers it the same way.
An agent-powered chat, or simply Agent chat, is another kind of beast. Depending on what you say, it can decide to just answer your question, use one tool or another, or even ask you for more information.
It is not just a sophisticated answering machine, but something that can generate a plan to complete the task you need.
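To make the difference concrete, here is a minimal sketch of both styles (it uses langchain-openai, which we will install later in this post; the model name is just an example and get_weather is a made-up tool for illustration):

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Classic LLM chat: the question goes straight to the model
answer = model.invoke("What is the weather in Paris?")

# Agent-style chat: the model can decide to call a tool instead
@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city (dummy implementation)."""
    return f"Sunny in {city}"

agent_model = model.bind_tools([get_weather])
response = agent_model.invoke("What is the weather in Paris?")
print(response.tool_calls)  # the model asks to call get_weather(city="Paris")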
Let’s take an example with ChatGPT:
Here I asked ChatGPT for the weather in Paris and it gave the current weather in Paris at the time of writing.
This is not possible with a simple LLM chat. ChatGPT used an agent behind the scenes and generated a plan to answer the question. Here’s what the plan generated by the agent could look like:
- Call an internal tool to get the current time in Paris
- Call a weather system tool to get the weather information for the specified date and location
- Send back the data with the answer.
The real plan would be far more complicated, but this gives you the essential idea.
To better understand agents, don’t hesitate to check out my other posts where I dive deep into the topic and implement agent systems from scratch: Simple Agentic RAG and Simple CRAG agent.
Agent Chat: Not a nice-to-have feature but a new type of interface
With the advent of chat interfaces like ChatGPT, what was once incredible has become the standard, which puts additional pressure on every other product.
Adding ‘AI’ to your application’s name isn’t enough to make it successful. You need features that put it on the same level as the likes of ChatGPT.
One such pressure is Agent chat. You may not need this feature in the first version, but it is something users will ask for.
At the same time, this is not just an additional feature, but a completely new type of user interface.
Let’s look at the homepage of ChatGPT. It is very minimalist, and the main component is the chat interface. Why? Because the agent behind the chat interface is that versatile and capable. And that is a completely different way of approaching UX design.
Also, if you check the previous example with the weather question, you can see that the chat’s answer is displayed as components and not only text. That is another sign of how central the chat is to ChatGPT.
This kind of interface is becoming, or has already become, a new standard, and all products need to take this into consideration.
Now that we have seen how important the agent interface is, let’s create a simple one using LangGraph!
Building our Agent Chat Application
Now let’s build an application that uses an Agent chat interface. We are going to build a Python application that displays an IT architecture diagram and lets the user change it using only the chat. We will use LangGraph, OpenAI, Streamlit, and Mermaid (a library that renders Markdown-inspired text into diagrams and charts, which is heavily used in Claude Desktop).
You can find all the code here. Don’t hesitate to star it if you like it!
First let’s install the dependencies. We are going to use pipenv as our dependency manager:
pipenv install langchain streamlit langchain-openai python-dotenv langgraph langchain-community streamlit-mermaid
Create a .env file at the same level as your code with a single key called “OPENAI_KEY” that contains your OpenAI API key.
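The .env file only needs this one line (replace the placeholder with your actual key):

OPENAI_KEY=your-openai-api-key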
Now let’s add the code! I will skip the Streamlit-specific code and only show the agent part.
First let’s initialize the graph state and the LLM model for the agent.
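The snippets below assume the following imports (a reconstruction based on the packages we just installed; check the repository for the exact layout):

import os
from typing import Literal, TypedDict

import streamlit as st
from dotenv import load_dotenv
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph

load_dotenv()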
# Data models
class GraphState(TypedDict):
    """State for our diagram modification graph"""
    message: str

# Initialize LLM and prompts
model = ChatOpenAI(temperature=0, openai_api_key=os.environ["OPENAI_KEY"])
Now let’s add the different prompts we will need:
- We will use three prompts: check_redraw, analyze_modification, and generate_diagram.
- check_redraw checks whether the user wants to redraw the diagram or not (the only other option is to add more modifications). The answer is binary here, but a real use case would be far more complicated.
- analyze_modification analyzes the user’s message and extracts clean modification instructions. By modifying this part, we can make the agent more sophisticated and have it create detailed plans.
- generate_diagram gives the LLM all the instructions to generate the new Mermaid diagram.
check_redraw_template = """Determine if the user is asking to redraw/update/regenerate the diagram.
Return EXACTLY 'redraw' if they want to update/redraw/regenerate, 'modify' if it's a modification request.
User message: {message}
Think step by step:
1. Is the user asking to update/redraw/regenerate the existing diagram?
2. Or are they providing a new modification?
Return only one word, either 'redraw' or 'modify'."""
analyze_modification_template = """Extract the modification request from the user's message.
Be specific about what needs to be modified in the diagram.
User message: {message}
Return ONLY the modification needed, no additional text."""
generate_diagram_template = """Generate a mermaid flowchart diagram by modifying the current diagram to include the requested changes.
Return ONLY the mermaid diagram code starting with 'graph TD' or 'flowchart TD', no other text, no explanations.
Current diagram:
{current_diagram}
Modifications to implement:
{modifications}
Rules:
1. Start with 'graph TD' or 'flowchart TD'
2. Keep all existing components unless they need to be modified
3. Use descriptive names in square brackets (e.g., [Cache], [CDN], [API Gateway])
4. Use proper Mermaid syntax for connections (-->)
5. Add new components based on the modifications
6. Show all connections between components
7. Maintain proper node IDs (like A, B, C) but with descriptive labels
Example output format:
graph TD
A[Component1] --> B[Component2]
B --> C[Component3]
"""
check_redraw_prompt = ChatPromptTemplate.from_template(check_redraw_template)
analyze_mod_prompt = ChatPromptTemplate.from_template(analyze_modification_template)
diagram_prompt = ChatPromptTemplate.from_template(generate_diagram_template)
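As a quick sanity check, you can invoke one of these chains directly (the message here is a made-up example; the exact output depends on the model):

# Manually test the routing chain with a hypothetical message
result = (check_redraw_prompt | model | StrOutputParser()).invoke(
    {"message": "Add a Redis cache between the API and the database"}
)
print(result)  # expected: "modify"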
Now let’s add the steps to the graph. This looks like a lot of code, but much of it is just Streamlit-related bookkeeping (session state updates and logging). In a normal setup, you would do this in your API.
And here’s what the graph looks like:
- First, the router decides whether you are adding a modification or asking for a redraw.
- In the modify case, analyze_modification extracts the instructions and saves them for the redraw.
- In the redraw case, all the saved instructions are sent to the LLM to produce an updated Mermaid diagram, which is then rendered.
def router(state: GraphState) -> Literal["modify", "redraw"]:
    """Route to next step based on message analysis"""
    print("\n=== ROUTER STARTED ===")
    message = state["message"]
    result = (check_redraw_prompt | model | StrOutputParser()).invoke(
        {"message": message}
    )
    print(f"Router output for message '{message}': {result}")
    return "redraw" if result.strip().lower() == "redraw" else "modify"
def analyze_modification(state: GraphState):
    """Extract modification from message"""
    print("\n=== ANALYZING MODIFICATION ===")
    message = state["message"]
    modification = (analyze_mod_prompt | model | StrOutputParser()).invoke(
        {"message": message}
    )
    print(f"Extracted modification: {modification}")
    print(f"Current modifications: {st.session_state.modifications}")
    # Update session state directly
    st.session_state.modifications.append(modification)
    print(f"Updated modifications list: {st.session_state.modifications}")
    return {}
def generate_diagram(state: GraphState):
    """Generate new diagram based on modifications"""
    print("\n=== GENERATING DIAGRAM ===")
    print(f"Current modifications to implement: {st.session_state.modifications}")
    mods_text = "\n".join(f"- {mod}" for mod in st.session_state.modifications)
    new_diagram = (diagram_prompt | model | StrOutputParser()).invoke(
        {
            "current_diagram": st.session_state.current_diagram,
            "modifications": mods_text,
        }
    )
    print(f"Generated diagram:\n{new_diagram}")
    if not new_diagram.strip():
        print("Warning: Generated diagram is empty, keeping current diagram")
        return {}
    # Update session state only if we got a valid diagram
    st.session_state.current_diagram = new_diagram
    st.session_state.modifications = []
    return {}
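# Optional hardening (not in the original post): models occasionally wrap
# their answer in markdown fences despite the prompt. A small helper like
# this could strip them before saving the diagram:
def strip_fences(text: str) -> str:
    """Remove ``` fences around generated Mermaid code, if any."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`").removeprefix("mermaid").strip()
    return cleaned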
# Define workflow
workflow = StateGraph(GraphState)
# Add nodes
workflow.add_node("modify", analyze_modification)
workflow.add_node("redraw", generate_diagram)
# Build graph
workflow.add_conditional_edges(START, router, {"modify": "modify", "redraw": "redraw"})
workflow.add_edge("modify", END)
workflow.add_edge("redraw", END)
# Compile
graph = workflow.compile()
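I skipped the Streamlit glue, but here is a minimal sketch of how the compiled graph could be wired to the chat (an illustration, not the exact code from the repository; st_mermaid comes from the streamlit-mermaid package we installed):

from streamlit_mermaid import st_mermaid

# Initialize the session state on first run (the starting diagram is a placeholder)
if "current_diagram" not in st.session_state:
    st.session_state.current_diagram = "graph TD\n    A[Client] --> B[Server]"
if "modifications" not in st.session_state:
    st.session_state.modifications = []

# One chat turn: run the graph, then render the (possibly updated) diagram
if user_input := st.chat_input("Describe a modification or ask for a redraw"):
    graph.invoke({"message": user_input})
    st_mermaid(st.session_state.current_diagram)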
Now that we have the code ready, let’s run it to see how it works!
Testing our own Agent Chat
As mentioned before, I skipped the Streamlit-related code, so I advise you to grab the working code here before trying to launch it.
Now let’s launch it:
pipenv run streamlit run app.py
You will see this screen, where you can add modifications or ask for a redraw.
Let’s add some modifications and even ask for a more detailed diagram!
And here’s the updated diagram! Imagine: this is something you have generated using only prompts and nothing more. Isn’t that beautiful?
Honestly, it doesn’t look that beautiful yet, but that is just because I did not try to make it nicer. Integrated into a real web app, you could have the same level of features as ChatGPT, but for your own tool. Pretty nice, right?!
Conclusion
As you saw, creating an agent chat is not that complicated once you understand the concepts behind it. We have seen how to create one from scratch using LangGraph and how this new type of interface is becoming a standard for applications.
The most important thing to remember is that an agent chat is not just a fancy way to talk with an LLM. It is a completely new way to interact with applications, where the chat becomes the central interface that can understand, plan, and execute actions based on user needs.
You can start small with basic features and expand depending on usage, but the important point is to make it as simple as possible for the user.
Afterwards
I hope you really loved this post. Don’t forget to check out my other posts, as I write a lot about practical AI.
Follow me on