How to Serialize and Deserialize Conversations#
When building AI assistants, it is often necessary to save the state of a conversation to disk and restore it later. This is essential for creating persistent applications that can:
Resume conversations after an application restart
Save user sessions for later continuation
Implement conversation history and analytics
Support multi-session workflows
In this tutorial, you will learn how to:
Serialize Agent conversations to JSON files
Serialize Flow conversations at any point during execution
Deserialize and resume both types of conversations
Build persistent conversation loops that survive application restarts
Concepts shown in this guide#
serialize to convert conversations to a storable format
autodeserialize to restore conversations from storage
Handling conversation state persistence for both Agents and Flows
Basic Serialization#
Step 1. Add imports and configure LLM#
Start by importing the necessary packages for serialization:
import json
import os
from wayflowcore.agent import Agent
from wayflowcore.controlconnection import ControlFlowEdge
from wayflowcore.conversation import Conversation
from wayflowcore.dataconnection import DataFlowEdge
from wayflowcore.executors.executionstatus import (
FinishedStatus,
UserMessageRequestStatus,
)
from wayflowcore.flow import Flow
from wayflowcore.serialization import autodeserialize, serialize
from wayflowcore.steps import (
CompleteStep,
InputMessageStep,
OutputMessageStep,
StartStep,
)
WayFlow supports several LLM API providers. Select an LLM from the options below:
# Option 1: OCI GenAI
from wayflowcore.models import OCIGenAIModel
llm = OCIGenAIModel(
    model_id="provider.model-id",
    service_endpoint="https://url-to-service-endpoint.com",
    compartment_id="compartment-id",
    auth_type="API_KEY",
)
# Option 2: vLLM
from wayflowcore.models import VllmModel
llm = VllmModel(
    model_id="model-id",
    host_port="VLLM_HOST_PORT",
)
# Option 3: Ollama
from wayflowcore.models import OllamaModel
llm = OllamaModel(
    model_id="model-id",
)
Step 2. Create storage functions#
Define helper functions to store and load conversations:
DIR_PATH = "path/to/your/dir"
def store_conversation(path: str, conversation: Conversation) -> str:
    """Store the given conversation and return the conversation id."""
    conversation_id = conversation.conversation_id
    serialized_conversation = serialize(conversation)
    # Read existing data
    if os.path.exists(path):
        with open(path, "r") as f:
            data = json.load(f)
    else:
        data = {}
    # Add new conversation
    data[conversation_id] = serialized_conversation
    # Write back to file
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
    return conversation_id

def load_conversation(path: str, conversation_id: str) -> Conversation:
    """Load a conversation given its id."""
    with open(path, "r") as f:
        data = json.load(f)
    serialized_conversation = data[conversation_id]
    return autodeserialize(serialized_conversation)
These functions:
Use WayFlow’s serialize() to convert conversations to a storable format
Store multiple conversations in a single JSON file indexed by conversation ID
Use autodeserialize() to restore the original conversation objects
Serializing Agent conversations#
Agent conversations can be serialized at any point during execution:
Step 1. Create an Agent#
assistant = Agent(
    llm=llm,
    custom_instruction="You are a helpful assistant. Be concise.",
    agent_id="simple_assistant",
)
Step 2. Run the conversation#
# Start a conversation
conversation = assistant.start_conversation()
conversation_id = conversation.conversation_id
print(f"1. Started conversation with ID: {conversation_id}")
# Execute initial greeting
status = conversation.execute()
print(f"2. Assistant says: {conversation.get_last_message().content}")
# Add user message
conversation.append_user_message("What is 2+2?")
print("3. User asks: What is 2+2?")
# Execute to get response
status = conversation.execute()
print(f"4. Assistant responds: {conversation.get_last_message().content}")
Step 3. Serialize the conversation#
AGENT_STORE_PATH = os.path.join(DIR_PATH, "agent_conversation.json")
store_conversation(AGENT_STORE_PATH, conversation)
print(f"5. Conversation serialized to {AGENT_STORE_PATH}")
Step 4. Deserialize the conversation#
loaded_conversation = load_conversation(AGENT_STORE_PATH, conversation_id)
print(f"6. Conversation deserialized from {AGENT_STORE_PATH}")
# Print the loaded conversation messages
print("7. Loaded conversation messages:")
messages = loaded_conversation.message_list.messages
for i, msg in enumerate(messages):
    if msg.message_type.name == "AGENT":
        role = "Assistant"
    elif msg.message_type.name == "USER":
        role = "User"
    else:
        role = msg.message_type.name
    print(f" [{i}] {role}: {msg.content}")
Key points:
Each conversation has a unique conversation_id
The entire conversation state is preserved, including message history
Loaded conversations retain their complete state and can resume execution
Access messages through conversation.message_list.messages
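The role mapping used when printing messages can be factored into a small helper. This is a sketch based on the loop above; the message type names (AGENT, USER) are the ones shown there, and any other type falls back to its raw name:

```python
def display_role(message_type_name: str) -> str:
    # Map wayflowcore message type names to display labels;
    # unknown types fall back to the raw type name.
    return {"AGENT": "Assistant", "USER": "User"}.get(
        message_type_name, message_type_name
    )

print(display_role("AGENT"))   # Assistant
print(display_role("USER"))    # User
print(display_role("SYSTEM"))  # SYSTEM
```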
Serializing Flow Conversations#
Flow conversations require special attention as they can be serialized mid-execution:
Step 1. Create a Flow#
First, create a flow from its steps, control-flow edges, and data-flow edges.
start_step = StartStep(name="start_step")
input_step = InputMessageStep(
    name="input_step",
    message_template="What's your favorite color?",
    output_mapping={InputMessageStep.USER_PROVIDED_INPUT: "user_color"},
)
output_step = OutputMessageStep(
    name="output_step", message_template="Your favorite color is {{ user_color }}. Nice choice!"
)
end_step = CompleteStep(name="end_step")
simple_flow = Flow(
    begin_step=start_step,
    control_flow_edges=[
        ControlFlowEdge(start_step, input_step),
        ControlFlowEdge(input_step, output_step),
        ControlFlowEdge(output_step, end_step),
    ],
    data_flow_edges=[
        DataFlowEdge(
            source_step=input_step,
            source_output="user_color",
            destination_step=output_step,
            destination_input="user_color",
        )
    ],
)
Step 2. Run the conversation#
Then start and run the flow conversation.
flow_conversation = simple_flow.start_conversation()
flow_id = flow_conversation.conversation_id
print(f"1. Started flow conversation with ID: {flow_id}")
# Execute until user input is needed
status = flow_conversation.execute()
print(f"2. Flow asks: {flow_conversation.get_last_message().content}")
Step 3. Serialize during execution#
You can serialize the conversation at any point during its execution. Here, the flow is waiting for the user to provide input, but you can serialize the conversation now and resume it later.
FLOW_STORE_PATH = os.path.join(DIR_PATH, "flow_conversation.json")
store_conversation(FLOW_STORE_PATH, flow_conversation)
print(f"3. Flow conversation serialized to {FLOW_STORE_PATH}")
Step 4. Deserialize the conversation#
You can now load back the serialized conversation.
loaded_flow_conversation = load_conversation(FLOW_STORE_PATH, flow_id)
input_step_1 = loaded_flow_conversation.flow.steps['input_step']
print(f"4. Flow conversation deserialized from {FLOW_STORE_PATH}")
# Provide user input to the loaded conversation
loaded_flow_conversation.append_user_message("Blue")
print("5. User responds: Blue")
Step 5. Resume the conversation execution#
You can resume the conversation from the state it had when it was serialized.
outputs = loaded_flow_conversation.execute()
print(f"6. Flow output: {outputs.output_values[OutputMessageStep.OUTPUT]}")
# Print the loaded conversation messages
print("7. Loaded flow conversation messages:")
messages = loaded_flow_conversation.message_list.messages
for i, msg in enumerate(messages):
    if msg.message_type.name == "AGENT":
        role = "Flow"
    elif msg.message_type.name == "USER":
        role = "User"
    else:
        role = msg.message_type.name
    print(f" [{i}] {role}: {msg.content}")
Important considerations:
Flows can be serialized while waiting for user input
The loaded Flow conversation resumes exactly where it left off
User input can be provided to the loaded conversation to continue execution
Building persistent applications#
For real-world applications, you’ll want to create persistent conversation loops:
def run_persistent_agent(assistant: Agent, store_path: str, conversation_id: str = None):
    """Run an agent with persistent conversation storage."""
    # Load existing conversation or start new one
    if conversation_id:
        try:
            conversation = load_conversation(store_path, conversation_id)
            print(f"Resuming conversation {conversation_id}")
        except (FileNotFoundError, KeyError):
            print(f"Conversation {conversation_id} not found, starting new one")
            conversation = assistant.start_conversation()
    else:
        conversation = assistant.start_conversation()
        print(f"Started new conversation {conversation.conversation_id}")
    # Main conversation loop
    while True:
        status = conversation.execute()
        if isinstance(status, FinishedStatus):
            print("Conversation finished")
            break
        elif isinstance(status, UserMessageRequestStatus):
            # Save before waiting for user input
            store_conversation(store_path, conversation)
            print(f"Assistant: {conversation.get_last_message().content}")
            user_input = input("You: ")
            if user_input.lower() in ["exit", "quit"]:
                print("Exiting and saving conversation...")
                break
            conversation.append_user_message(user_input)
    # Final save
    final_id = store_conversation(store_path, conversation)
    print(f"Conversation saved with ID: {final_id}")
    return final_id
This function:
Loads existing conversations or starts new ones
Saves state before waiting for user input
Allows users to exit and resume later
Returns the conversation ID for future reference
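The load-or-start decision can be exercised in isolation with stubs. This sketch uses a plain dict in place of the conversation store and a lambda in place of assistant.start_conversation(); the names are hypothetical, not WayFlow APIs:

```python
def load_or_start(store, conversation_id, start_new):
    # Resume an existing conversation if it can be found,
    # otherwise fall back to starting a fresh one.
    if conversation_id is not None:
        try:
            return store[conversation_id], False
        except KeyError:
            pass  # fall through to starting a new conversation
    return start_new(), True

store = {"conv-1": "existing conversation state"}
resumed, was_new = load_or_start(store, "conv-1", lambda: "new conversation")
print(resumed, was_new)    # existing conversation state False
fresh, was_new2 = load_or_start(store, "conv-404", lambda: "new conversation")
print(fresh, was_new2)     # new conversation True
```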
Best Practices#
Save before user input: Always serialize conversations before waiting for user input to prevent data loss.
Use unique IDs: Store conversations using their built-in conversation_id to avoid conflicts.
Handle errors gracefully: Wrap deserialization in try-except blocks to handle missing or corrupted data.
Consider storage format: While JSON is human-readable, consider other formats for production use.
Version your serialization: Consider adding version information to handle future schema changes.
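One way to follow the last practice is to wrap each stored payload in a small version envelope before it goes into the JSON file. This is a stdlib sketch with a hypothetical SCHEMA_VERSION constant of your own, not a WayFlow API:

```python
import json

SCHEMA_VERSION = 1  # hypothetical version tag for your own storage schema

def wrap(serialized_conversation: str) -> str:
    # Store the payload together with the schema version it was written under.
    return json.dumps({"version": SCHEMA_VERSION, "payload": serialized_conversation})

def unwrap(stored: str) -> str:
    record = json.loads(stored)
    if record.get("version") != SCHEMA_VERSION:
        raise ValueError(f"unsupported schema version: {record.get('version')}")
    return record["payload"]

roundtrip = unwrap(wrap("<serialized conversation>"))
print(roundtrip)  # <serialized conversation>
```

When the schema changes later, `unwrap` becomes the single place to add migration logic for older versions.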
Limitations#
Tool state: When using tools with Agents, ensure tools are stateless or their state is managed separately.
Large conversations: Very long conversations may result in large serialized files.
Binary data: The default JSON serialization does not handle binary data directly.
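If you need to keep binary data alongside a conversation, a common workaround is to base64-encode it before it enters the JSON file. A stdlib sketch, with a hypothetical attachment blob:

```python
import base64
import json

blob = b"\x89PNG\r\n..."  # hypothetical binary attachment

# Encode to ASCII so the bytes survive JSON serialization, then round-trip.
record = json.dumps({"attachment_b64": base64.b64encode(blob).decode("ascii")})
restored = base64.b64decode(json.loads(record)["attachment_b64"])
print(restored == blob)  # True
```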
Agent Spec Exporting/Loading#
You can export the assistant configuration to its Agent Spec configuration using the AgentSpecExporter.
from wayflowcore.agentspec import AgentSpecExporter
serialized_assistant = AgentSpecExporter().to_json(assistant)
Here is what the Agent Spec representation will look like:
{
  "component_type": "Agent",
  "id": "simple_assistant",
  "name": "agent_0f75efc7",
  "description": "",
  "metadata": {
    "__metadata_info__": {
      "name": "agent_0f75efc7",
      "description": ""
    }
  },
  "inputs": [],
  "outputs": [],
  "llm_config": {
    "component_type": "VllmConfig",
    "id": "80c8028b-0dd6-4dc9-ad7d-a4860ef1b849",
    "name": "LLAMA_MODEL_ID",
    "description": null,
    "metadata": {
      "__metadata_info__": {}
    },
    "default_generation_parameters": null,
    "url": "LLAMA_API_URL",
    "model_id": "LLAMA_MODEL_ID"
  },
  "system_prompt": "You are a helpful assistant. Be concise.",
  "tools": [],
  "agentspec_version": "25.4.1"
}
The equivalent YAML representation:
component_type: Agent
id: simple_assistant
name: agent_0f75efc7
description: ''
metadata:
  __metadata_info__:
    name: agent_0f75efc7
    description: ''
inputs: []
outputs: []
llm_config:
  component_type: VllmConfig
  id: 80c8028b-0dd6-4dc9-ad7d-a4860ef1b849
  name: LLAMA_MODEL_ID
  description: null
  metadata:
    __metadata_info__: {}
  default_generation_parameters: null
  url: LLAMA_API_URL
  model_id: LLAMA_MODEL_ID
system_prompt: You are a helpful assistant. Be concise.
tools: []
agentspec_version: 25.4.1
You can then load the configuration back to an assistant using the AgentSpecLoader.
from wayflowcore.agentspec import AgentSpecLoader
assistant: Agent = AgentSpecLoader().load_json(serialized_assistant)
Next steps#
In this guide, you learned how to:
Serialize both Agent and Flow conversations
Restore conversations and continue execution
Build persistent conversation loops
Handle conversation state across application restarts
Having learned how to serialize a conversation, you may now proceed to How to Serialize and Deserialize Flows and Agents.
Full code#
Copy the full code for this guide below.
# Copyright © 2025 Oracle and/or its affiliates.
#
# This software is under the Universal Permissive License
# (UPL) 1.0 (LICENSE-UPL or https://oss.oracle.com/licenses/upl) or Apache License
# 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0), at your option.

# %%[markdown]
# Tutorial - Build a Conversational Assistant with Agents
# -------------------------------------------------------

# How to use:
# Create a new Python virtual environment and install the latest WayFlow version.
# ```bash
# python -m venv venv-wayflowcore
# source venv-wayflowcore/bin/activate
# pip install --upgrade pip
# pip install "wayflowcore==26.1"
# ```

# You can now run the script
# 1. As a Python file:
# ```bash
# python tutorial_agent.py
# ```
# 2. As a Notebook (in VSCode):
# When viewing the file,
# - press the keys Ctrl + Enter to run the selected cell
# - or Shift + Enter to run the selected cell and move to the cell below

# %%[markdown]
## Imports for this guide

# %%
from wayflowcore.agent import Agent
from wayflowcore.agentspec import AgentSpecExporter, AgentSpecLoader
from wayflowcore.executors.executionstatus import (
    FinishedStatus,
    UserMessageRequestStatus,
)
from wayflowcore.tools import tool

# %%[markdown]
## Configure your LLM

# %%
from wayflowcore.models import VllmModel

llm = VllmModel(
    model_id="LLAMA70B_MODEL_ID",
    host_port="LLAMA70B_API_URL",
)

# %%[markdown]
## Defining a tool for the agent

# %%
@tool(description_mode="only_docstring")
def search_hr_database(query: str) -> str:
    """Function that searches the HR database for employee benefits.

    Parameters
    ----------
    query:
        a query string

    Returns
    -------
    a JSON response

    """
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'


# %%[markdown]
## Specifying the agent instructions

# %%
HRASSISTANT_GENERATION_INSTRUCTIONS = """
You are a knowledgeable, factual, and helpful HR assistant that can answer simple \
HR-related questions like salary and benefits.
You are given a tool to look up the HR database.
Your task:
- Ask the user if they need assistance
- Use the provided tool below to retrieve HR data
- Based on the data you retrieved, answer the user's question
Important:
- Be helpful and concise in your messages
- Do not tell the user any details not mentioned in the tool response, let's be factual.
""".strip()

# %%[markdown]
## Creating the agent

# %%
assistant = Agent(
    custom_instruction=HRASSISTANT_GENERATION_INSTRUCTIONS,
    tools=[search_hr_database],  # this is a decorated python function (Server tool in this example)
    llm=llm,  # the LLM object we created above
)

# %%[markdown]
## Running the agent

# %%
# With a linear conversation
conversation = assistant.start_conversation()

conversation.append_user_message("What are John Smith's benefits?")
status = conversation.execute()
if isinstance(status, UserMessageRequestStatus):
    assistant_reply = conversation.get_last_message()
    print(f"---\nAssistant >>> {assistant_reply.content}\n---")
else:
    print(f"Invalid execution status, expected UserMessageRequestStatus, received {type(status)}")

# then continue the conversation

# %%
# Or with an execution loop
def run_agent_in_command_line(assistant: Agent):
    inputs = {}
    conversation = assistant.start_conversation(inputs)

    while True:
        status = conversation.execute()
        if isinstance(status, FinishedStatus):
            break
        assistant_reply = conversation.get_last_message()
        if assistant_reply is not None:
            print("\nAssistant >>>", assistant_reply.content)
        user_input = input("\nUser >>> ")
        conversation.append_user_message(user_input)


# %%[markdown]
## Running with the execution loop

# %%
# run_agent_in_command_line(assistant)
# ^ uncomment and execute

# %%[markdown]
## Export config to Agent Spec

# %%
from wayflowcore.agentspec import AgentSpecExporter

serialized_assistant = AgentSpecExporter().to_json(assistant)

# %%[markdown]
## Load Agent Spec config

# %%
from wayflowcore.agentspec import AgentSpecLoader

TOOL_REGISTRY = {"search_hr_database": search_hr_database}
assistant: Agent = AgentSpecLoader(
    tool_registry=TOOL_REGISTRY
).load_json(serialized_assistant)