Build a Simple Fixed-Flow Assistant with Flows#
Prerequisites
This guide does not assume any prior knowledge about WayFlow. However, it assumes the reader has a basic knowledge of Large Language Models (LLMs).
You will need a working installation of WayFlow - see Installation.
Learning goals#
In this tutorial, you will develop a simple HR chatbot that answers an employee’s HR-related questions.
By doing this tutorial, you will:
Learn the basics of using a Flow to build an assistant.
Learn how to pass values around in WayFlow.
Learn how to use some of the more common types of steps.
Tip
Another type of assistant supported by WayFlow is Agents. To learn more about building conversational assistants with Agents, check out the Build a Simple Agents tutorial.
A primer on Flows#
A Flow is a type of assistant composed of individual Steps connected to form a coherent sequence of actions. By using a flow-based approach, WayFlow can tackle a wide range of business processes and tasks.
Each step in the Flow performs a specific function. The flow you will build in this tutorial uses the following types of steps:
InputMessageStep: This gathers input from the user.
ToolExecutionStep: In this case, it runs a Python function that fetches data from another system. (Tools can do more than this; take a look at ServerTools and ClientTools to find out more.)
PromptExecutionStep: This executes a prompt using a Large Language Model (LLM) and supports inserting data from other steps into the prompt.
OutputMessageStep: This is used to display information to the user.
Note
For advanced LLM users: typically, an LLM chatbot maintains a chat history to support multi-turn conversations; this is referred to as an Agent. In contrast, the PromptExecutionStep is a stateless function that simply calls the model to generate an output based on the provided input prompt.
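To make this distinction concrete, here is a toy sketch (plain Python, not WayFlow code) contrasting a stateful agent loop with a stateless prompt call; the `fake_llm` function is a hypothetical stand-in for a real model:

```python
# Toy illustration: a stateful agent vs. a stateless prompt execution.
def fake_llm(prompt: str) -> str:
    # Pretend model: echoes the last line of the prompt.
    return f"echo: {prompt.splitlines()[-1]}"

class ToyAgent:
    """Keeps a chat history, so each call sees all prior turns."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def chat(self, user_message: str) -> str:
        self.history.append(f"user: {user_message}")
        reply = fake_llm("\n".join(self.history))
        self.history.append(f"assistant: {reply}")
        return reply

def stateless_prompt_execution(prompt: str) -> str:
    """Like a PromptExecutionStep: one prompt in, one output out, no memory."""
    return fake_llm(prompt)

agent = ToyAgent()
agent.chat("hello")
agent.chat("again")
print(len(agent.history))               # the agent accumulated 4 messages
print(stateless_prompt_execution("hi")) # echo: hi
```

The agent accumulates state across calls; the stateless function produces the same output for the same prompt every time.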
Building the flow#
In this tutorial, you will create an HR chatbot that answers a user’s HR-related queries. You will do this by building a fixed-flow assistant using Python code.
The chatbot will need to perform the following steps, in order, to answer a user’s HR questions.
The user is greeted and prompted for a question.
The user inputs their question to the assistant.
The assistant uses a tool to search for the requested information, querying an HR system.
The assistant uses an LLM to answer the user’s question using the data retrieved in the previous step.
The assistant returns the answer generated by the LLM to the user.
Given that you know what logical steps the assistant needs to perform to achieve the goal, how do you turn this into a working WayFlow assistant?
Turning the above logical steps into code can be broken down into several steps, each taking you towards the finished assistant. The rest of this tutorial addresses these tasks.
Set up a Jupyter Notebook#
You can follow along with this tutorial by creating a Jupyter Notebook. Ensure that wayflowcore is installed - see the installation instructions for details.
Alternatively, you can use any Python environment to run the code in this tutorial.
Imports and LLM configuration#
First import what is needed for this tutorial:
from textwrap import dedent

from wayflowcore.controlconnection import ControlFlowEdge
from wayflowcore.dataconnection import DataFlowEdge
from wayflowcore.flow import Flow

# Create an LLM model to use later in the tutorial.
from wayflowcore.models import VllmModel
from wayflowcore.steps import (
    CompleteStep,
    InputMessageStep,
    OutputMessageStep,
    PromptExecutionStep,
    StartStep,
    ToolExecutionStep,
)
from wayflowcore.tools import tool
WayFlow supports several LLM API providers. First choose an LLM from one of the options below:
# Option 1: OCI GenAI.
from wayflowcore.models import OCIGenAIModel

llm = OCIGenAIModel(
    model_id="provider.model-id",
    service_endpoint="https://url-to-service-endpoint.com",
    compartment_id="compartment-id",
    auth_type="API_KEY",
)

# Option 2: vLLM.
from wayflowcore.models import VllmModel

llm = VllmModel(
    model_id="model-id",
    host_port="VLLM_HOST_PORT",
)

# Option 3: Ollama.
from wayflowcore.models import OllamaModel

llm = OllamaModel(
    model_id="model-id",
)
Note
API keys should never be stored in code. Use environment variables and/or tools such as python-dotenv instead.
Naming data#
Passing values between steps is a very common occurrence when building Flows. This is done using DataFlowEdges, each of which defines that a value is passed from one step to another.
A step has input and output descriptors, which describe what values it requires to run and what values it produces. These can be thought of as names that describe the step’s inputs and outputs.
By default, the input_descriptors will be automatically inferred from any input of the step class that supports Jinja templating. Parameters in a Jinja template look like this: {{this_is_a_template_parameter}}. Additionally, input descriptors can be inferred from other sources, such as the input parameter schemas of the tool for the ToolExecutionStep. There will be one input descriptor for each parameter in the template, with a name taken from the parameter. Similarly, there will be one input descriptor for each parameter required by a tool.
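Conceptually, the template-based part of this inference amounts to collecting the parameter names that appear in the template. The following stdlib-only sketch illustrates the idea (WayFlow's actual inference is richer, e.g. it also inspects tool signatures; `infer_input_names` is a hypothetical helper, not a WayFlow API):

```python
import re

def infer_input_names(template: str) -> list[str]:
    """Collect unique {{ parameter }} names from a Jinja-style template, in order."""
    names = re.findall(r"{{\s*(\w+)\s*}}", template)
    seen: list[str] = []
    for name in names:
        if name not in seen:
            seen.append(name)
    return seen

template = "Question: {{ user_question }}\nData: {{ hr_data_context }}"
print(infer_input_names(template))  # ['user_question', 'hr_data_context']
```

Each name found this way would correspond to one input descriptor of the step.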
Output descriptors can also be considered names for a step's outputs. For many steps, there is a default name for each output. For example, for a ToolExecutionStep the default name for the output of the step is ToolExecutionStep.TOOL_OUTPUT.
Input and output descriptors can be renamed using input_mapping or output_mapping, allowing for more meaningful names.
Where input and output descriptors need to be referenced in the code, a variable is used to hold the name. Doing this eliminates errors caused by typos.
Below are the names you will use for the values passed around in this tutorial.
# Names for the input parameters of the steps in the flow.
HR_QUERY = "user_query"
TOOL_QUERY = "query"
HR_DATA_CONTEXT = "hr_data_context"
QUERY_ANSWER = "answer"
USER_QUESTION = "user_question"
Later in this tutorial, you will examine in detail how passing values works.
Specifying the steps#
The flow follows a simple sequence of logical steps that were defined earlier in the tutorial. They consist of prompting the user for a question, searching the HR system for the required information, using an LLM to generate an answer from the retrieved data, and finally answering the user.
In a nutshell, the flow consists of the following steps in order:
StartStep: Acts as a starting point for the flow. It performs no action and is not strictly required, but if you exclude it, you will get a warning message.
InputMessageStep: Where the user is asked for input - the question the user wants to ask.
ToolExecutionStep: Queries the HR system to look up the data relevant to the user’s query.
PromptExecutionStep: Uses an LLM to ingest and interpret the HR data and the user’s query to generate an answer.
OutputMessageStep: Displays the answer generated in the previous step to the user.
You now need to build each of the steps used in the flow.
START_STEP#
This is where the flow starts. It can take in a string to display to the user, but here it is used only as a starting point for the flow. The message to the user is displayed in the USER_INPUT_STEP.
# A start step. This is where the flow starts.
start_step = StartStep(name="start_step", input_descriptors=None)
USER_INPUT_STEP#
The InputMessageStep
prompts the user for information and saves the response for subsequent use. This step requires at least a message
template, which defines the prompt presented to the user. In this context, the user is welcomed and asked to provide their HR-related question.
An output_mapping is used to specify a new, more meaningful name for the output_descriptor of the step. By default, the output_descriptor for the value produced by the step is InputMessageStep.USER_PROVIDED_INPUT. It is important to remember that this does not hold the value; it is only the default name for the output_descriptor. A more meaningful name is useful here, so the output_descriptor is renamed to the value held in HR_QUERY. When accessing the output value of this step in a DataFlowEdge, the name in HR_QUERY can be used.
user_input_message_template = dedent(
    """
    I am an HR Assistant, designed to answer your questions about HR matters.
    What kinds of questions do you have today?
    Example of HR topics:
    - Employee benefits
    - Salaries
    - Career advancement
    """
)

user_input_step = InputMessageStep(
    name="user_input_step",
    message_template=user_input_message_template,
    output_mapping={InputMessageStep.USER_PROVIDED_INPUT: HR_QUERY},
)
HR_LOOKUP_STEP#
In this case, the ToolExecutionStep executes a tool, a decorated Python function, with the passed-in query. Here, for simplicity, the tool is mocked out and always returns the same data. The mock data contains HR information for two fictional employees, John Smith and Mary Jones.
An output_mapping is used to specify a new, more meaningful name for the output_descriptor of this step. By default, the output_descriptor for the value produced by this step is ToolExecutionStep.TOOL_OUTPUT. It is renamed to the more meaningful name in HR_DATA_CONTEXT. This name can be used in a DataFlowEdge to access the output value of the step.
from wayflowcore.property import StringProperty

# A tool which will run a query on the HR system and return some data.
@tool(description_mode="only_docstring", output_descriptors=[StringProperty(HR_DATA_CONTEXT)])
def search_hr_database(query: str) -> str:
    """Function that searches the HR database for employee benefits.

    Parameters
    ----------
    query:
        a query string

    Returns
    -------
    a JSON response

    """
    # Returns mock data.
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'

# Step that runs the lookup of a query using the tool.
hr_lookup_step = ToolExecutionStep(
    name="hr_lookup_step",
    tool=search_hr_database,
)
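Parsing the mock JSON string shows the shape of the data that the next step will receive as its hr_data_context input (plain Python, independent of WayFlow):

```python
import json

# The same JSON string the mock tool returns.
mock_response = (
    '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, '
    '"Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'
)

# Parse it to inspect the structure: a mapping of employee name -> record.
hr_data = json.loads(mock_response)
print(sorted(hr_data))                  # ['John Smith', 'Mary Jones']
print(hr_data["Mary Jones"]["salary"])  # $10,000
```

Note that the flow itself passes the raw JSON string to the LLM; the parsing here is only to make the data shape visible.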
LLM_ANSWER_STEP#
The PromptExecutionStep executes a prompt using an LLM. It requires a prompt template and an LLM to do so.
As in the USER_INPUT_STEP, the step's output_descriptor is replaced with a more meaningful name. The default output_descriptor for the output of PromptExecutionStep is PromptExecutionStep.OUTPUT. It is renamed to the value held in QUERY_ANSWER. This name can be used in a DataFlowEdge to access the output value of the step.
# The template for the prompt to be used by the LLM. Notice the use of parameters
# such as {{ user_question }}. The template is evaluated using the parameters that
# are passed into the PromptExecutionStep.
hrassistant_prompt_template = dedent(
    """
    You are a knowledgeable, factual, and helpful HR assistant that can answer simple \
    HR-related questions like salary and benefits.
    Your task:
    - Based on the HR data given below, answer the user's question
    Important:
    - Be helpful and concise in your messages
    - Do not tell the user any details not mentioned in the tool response, let's be factual.

    Here is the User question:
    - {{ user_question }}

    Here is the HR data:
    - {{ hr_data_context }}
    """
)

# Step that evaluates the prompt template and then passes the prompt to the LLM.
from wayflowcore.property import StringProperty

llm_answer_step = PromptExecutionStep(
    name="llm_answer_step",
    prompt_template=hrassistant_prompt_template,
    llm=llm,
    output_descriptors=[StringProperty(QUERY_ANSWER)],
)
USER_OUTPUT_STEP#
The OutputMessageStep
displays information to the user. It uses a message template to generate the output.
# Step that outputs the answer to the user's query.
user_output_step = OutputMessageStep(
    name="user_output_step",
    message_template="My Assistant's Response: {{ answer }}",
)
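The template evaluation can be illustrated with a minimal stdlib-only stand-in: each {{ name }} parameter is replaced by the matching input value (`render` is a hypothetical helper for illustration, not the mechanism WayFlow actually uses internally):

```python
import re

def render(template: str, values: dict[str, str]) -> str:
    """Replace each {{ name }} placeholder with the corresponding value."""
    return re.sub(r"{{\s*(\w+)\s*}}", lambda m: values[m.group(1)], template)

# The same template as in user_output_step, with a sample answer value.
message = render(
    "My Assistant's Response: {{ answer }}",
    {"answer": "John Smith's salary is $1,000."},
)
print(message)  # My Assistant's Response: John Smith's salary is $1,000.
```

In the flow, the answer value arrives via a DataFlowEdge from the LLM step rather than being supplied by hand.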
Step transitions#
The Flow is almost done; you just need to specify the control flow, i.e., the transitions between the steps defined earlier. These are straightforward here because you are building a "sequential" flow. Note that the final step, displaying the answer to the user, transitions to a CompleteStep - a step that acts as a termination point for the flow. Alternatively, it could transition back to the USER_INPUT_STEP, giving the user another opportunity to chat with the fixed-flow assistant.
# Define the transitions between the steps.
control_flow_edges = [
    ControlFlowEdge(source_step=start_step, destination_step=user_input_step),
    ControlFlowEdge(source_step=user_input_step, destination_step=hr_lookup_step),
    ControlFlowEdge(source_step=hr_lookup_step, destination_step=llm_answer_step),
    ControlFlowEdge(source_step=llm_answer_step, destination_step=user_output_step),
    # Note: you can use a CompleteStep as the termination of the flow.
    ControlFlowEdge(source_step=user_output_step, destination_step=CompleteStep(name="final_step")),
]
In addition to defining the transitions, you must specify the data flow, i.e., how values are passed from one step to the next. This is done using DataFlowEdges. Each DataFlowEdge has a source_step, which defines the source step for the value; a source_output, which is the name of the value; a destination_step, which defines the step that will consume the value; and a destination_input, which is the name of the input parameter of the destination step that will consume the value.
# Define the data flows between steps.
data_flow_edges = [
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=hr_lookup_step,
        destination_input=TOOL_QUERY,
    ),
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=llm_answer_step,
        destination_input=USER_QUESTION,
    ),
    DataFlowEdge(
        source_step=hr_lookup_step,
        source_output=HR_DATA_CONTEXT,
        destination_step=llm_answer_step,
        destination_input=HR_DATA_CONTEXT,
    ),
    DataFlowEdge(
        source_step=llm_answer_step,
        source_output=QUERY_ANSWER,
        destination_step=user_output_step,
        destination_input=QUERY_ANSWER,
    ),
]
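To see how control-flow edges order the steps while data-flow edges route named values between them, here is a toy model in plain Python (this is an illustration of the concept, not WayFlow's implementation; the step bodies are hypothetical stand-ins):

```python
# Each step is a function from named inputs to named outputs.
steps = {
    "user_input": lambda inputs: {"user_query": "What is John Smith's salary?"},
    "hr_lookup": lambda inputs: {"hr_data_context": f"data for: {inputs['query']}"},
    "llm_answer": lambda inputs: {"answer": f"Based on {inputs['hr_data_context']}"},
    "user_output": lambda inputs: {"output_message": f"Response: {inputs['answer']}"},
}

# Control flow: the order in which steps run.
order = ["user_input", "hr_lookup", "llm_answer", "user_output"]

# Data flow: (source_step, source_output, destination_step, destination_input).
data_edges = [
    ("user_input", "user_query", "hr_lookup", "query"),
    ("user_input", "user_query", "llm_answer", "user_question"),
    ("hr_lookup", "hr_data_context", "llm_answer", "hr_data_context"),
    ("llm_answer", "answer", "user_output", "answer"),
]

# Run the steps in order, gathering each step's inputs from earlier outputs.
outputs: dict[str, dict[str, str]] = {}
for step_name in order:
    inputs = {
        dst_in: outputs[src][src_out]
        for src, src_out, dst, dst_in in data_edges
        if dst == step_name
    }
    outputs[step_name] = steps[step_name](inputs)

print(outputs["user_output"]["output_message"])
```

The data-flow edges here mirror the four DataFlowEdges above: the same value (user_query) can feed multiple destinations under different input names.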
Creating the assistant#
Finally, you create the flow by using the Flow class. You set the initial step as the begin_step and pass in the control_flow_edges and data_flow_edges.
# Create the flow, passing in the step to start with, the control_flow_edges, and the data_flow_edges.
assistant = Flow(
    begin_step=start_step,
    control_flow_edges=control_flow_edges,
    data_flow_edges=data_flow_edges,
)
This completes the fixed-flow HR assistant.
Running the assistant#
Before we run the assistant, what are some questions that you could ask it? The following questions can be answered from the dummy HR data and are a good starting point.
What is the salary for John Smith?
Does John Smith earn more than Mary Jones?
How much annual leave does John Smith get?
But, we can also ask the assistant questions that it shouldn’t be able to answer, because it hasn’t been given any data that is relevant to the question:
How much does Jones Jones earn?
What is Mary Jones's favorite color?
With some questions ready, you can now run the assistant. In the example code below, you will pass one of these questions to the assistant.
Note
It is possible to create an assistant that answers one question and then returns to the beginning to start over. This could be done by connecting the final step back to the user input step in the final ControlFlowEdge, as shown below.
ControlFlowEdge(
source_step=user_output_step,
destination_step=user_input_step
),
Run the code below to run the assistant. It will ask the assistant a single one of the above questions and then exit.
# Start a conversation.
conversation = assistant.start_conversation()

# Execute the assistant.
# This will print out the message to the user, then stop at the user input step.
conversation.execute()

# Ask a question of the assistant by appending a user message.
conversation.append_user_message("Does John Smith earn more than Mary Jones?")

# Execute the assistant again. Continues from the user input step.
# As there are no other steps, the flow will run to the end.
status = conversation.execute()

# "output_message" is the default key name for the output value
# of the OutputMessageStep.
from wayflowcore.executors.executionstatus import FinishedStatus

if isinstance(status, FinishedStatus):
    answer = status.output_values[OutputMessageStep.OUTPUT]
    print(answer)
else:
    print(
        f"Incorrect execution status, expected FinishedStatus, got {status.__class__.__name__}"
    )
The process can be summarized as follows. The HR Assistant first prints the welcome message defined above. Next, it captures the user's question - here you pass in a question using conversation.append_user_message. It processes the input through the predefined steps and transitions, and finally returns the output.
Congratulations, you have built your first fixed-flow assistant!
Agent Spec Exporting/Loading#
You can export the assistant configuration to its Agent Spec representation using the AgentSpecExporter.
from wayflowcore.agentspec import AgentSpecExporter
serialized_assistant = AgentSpecExporter().to_json(assistant)
Here is what the Agent Spec representation will look like:
{
"component_type": "Flow",
"id": "bf3ca5b3-39bc-40ca-9d9a-a387bee07031",
"name": "flow_e057bcd6__auto",
"description": "",
"metadata": {
"__metadata_info__": {}
},
"inputs": [],
"outputs": [
{
"description": "the input value provided by the user",
"type": "string",
"title": "user_query"
},
{
"type": "string",
"title": "hr_data_context"
},
{
"type": "string",
"title": "answer"
},
{
"description": "the message added to the messages list",
"type": "string",
"title": "output_message"
}
],
"start_node": {
"$component_ref": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16"
},
"nodes": [
{
"$component_ref": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16"
},
{
"$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
},
{
"$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
},
{
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
},
{
"$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
},
{
"$component_ref": "953ccf06-5550-4597-9518-df1043269693"
}
],
"control_flow_connections": [
{
"component_type": "ControlFlowEdge",
"id": "a71aebf8-0416-4811-bb89-6cf4627dc170",
"name": "start_step_to_user_input_step_control_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"from_node": {
"$component_ref": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16"
},
"from_branch": null,
"to_node": {
"$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
}
},
{
"component_type": "ControlFlowEdge",
"id": "b2e0fb5c-8b57-4c32-9261-bd036ee607c1",
"name": "user_input_step_to_hr_lookup_step_control_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"from_node": {
"$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
},
"from_branch": null,
"to_node": {
"$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
}
},
{
"component_type": "ControlFlowEdge",
"id": "4d1bea5b-0c63-4042-891b-aca964e1640f",
"name": "hr_lookup_step_to_llm_answer_step_control_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"from_node": {
"$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
},
"from_branch": null,
"to_node": {
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
}
},
{
"component_type": "ControlFlowEdge",
"id": "9323cd7b-5da2-46b6-8333-46ac19d22d4d",
"name": "llm_answer_step_to_user_output_step_control_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"from_node": {
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
},
"from_branch": null,
"to_node": {
"$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
}
},
{
"component_type": "ControlFlowEdge",
"id": "e8075a13-3e48-4611-b68f-aa10423309ec",
"name": "user_output_step_to_final_step_control_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"from_node": {
"$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
},
"from_branch": null,
"to_node": {
"$component_ref": "953ccf06-5550-4597-9518-df1043269693"
}
}
],
"data_flow_connections": [
{
"component_type": "DataFlowEdge",
"id": "1a76f7f9-19b0-438e-aa3e-274dc06d4aa2",
"name": "user_input_step_user_query_to_hr_lookup_step_query_data_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"source_node": {
"$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
},
"source_output": "user_query",
"destination_node": {
"$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
},
"destination_input": "query"
},
{
"component_type": "DataFlowEdge",
"id": "f4a11a6e-0b79-4ede-8673-30ad1b79af2d",
"name": "user_input_step_user_query_to_llm_answer_step_user_question_data_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"source_node": {
"$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
},
"source_output": "user_query",
"destination_node": {
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
},
"destination_input": "user_question"
},
{
"component_type": "DataFlowEdge",
"id": "ee8b7cdd-762a-4202-b320-d71dab6d6318",
"name": "hr_lookup_step_hr_data_context_to_llm_answer_step_hr_data_context_data_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"source_node": {
"$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
},
"source_output": "hr_data_context",
"destination_node": {
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
},
"destination_input": "hr_data_context"
},
{
"component_type": "DataFlowEdge",
"id": "ac2e45dc-75c2-427f-8c75-1114aeda295d",
"name": "llm_answer_step_answer_to_user_output_step_answer_data_flow_edge",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"source_node": {
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
},
"source_output": "answer",
"destination_node": {
"$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
},
"destination_input": "answer"
},
{
"component_type": "DataFlowEdge",
"id": "cae80dba-f95a-4a24-9b93-684928a23b11",
"name": "user_input_step_user_query_to_final_step_user_query_data_flow_edge",
"description": null,
"metadata": {},
"source_node": {
"$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
},
"source_output": "user_query",
"destination_node": {
"$component_ref": "953ccf06-5550-4597-9518-df1043269693"
},
"destination_input": "user_query"
},
{
"component_type": "DataFlowEdge",
"id": "fe0c03de-5f80-4575-8d0e-605211ab9b3c",
"name": "hr_lookup_step_hr_data_context_to_final_step_hr_data_context_data_flow_edge",
"description": null,
"metadata": {},
"source_node": {
"$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
},
"source_output": "hr_data_context",
"destination_node": {
"$component_ref": "953ccf06-5550-4597-9518-df1043269693"
},
"destination_input": "hr_data_context"
},
{
"component_type": "DataFlowEdge",
"id": "a22200bc-dbba-4658-aeb1-0ddced3beb3e",
"name": "llm_answer_step_answer_to_final_step_answer_data_flow_edge",
"description": null,
"metadata": {},
"source_node": {
"$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
},
"source_output": "answer",
"destination_node": {
"$component_ref": "953ccf06-5550-4597-9518-df1043269693"
},
"destination_input": "answer"
},
{
"component_type": "DataFlowEdge",
"id": "de05873e-c565-41be-a3f8-f9c6600e4f57",
"name": "user_output_step_output_message_to_final_step_output_message_data_flow_edge",
"description": null,
"metadata": {},
"source_node": {
"$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
},
"source_output": "output_message",
"destination_node": {
"$component_ref": "953ccf06-5550-4597-9518-df1043269693"
},
"destination_input": "output_message"
}
],
"$referenced_components": {
"819ae9c0-4055-4fe6-ab05-f03c45775206": {
"component_type": "ExtendedToolNode",
"id": "819ae9c0-4055-4fe6-ab05-f03c45775206",
"name": "hr_lookup_step",
"description": "",
"metadata": {
"__metadata_info__": {}
},
"inputs": [
{
"type": "string",
"title": "query"
}
],
"outputs": [
{
"type": "string",
"title": "hr_data_context"
}
],
"branches": [
"next"
],
"tool": {
"component_type": "ServerTool",
"id": "6ca8f0ef-2496-4c9b-bbea-ca6e5969c07e",
"name": "search_hr_database",
"description": "Function that searches the HR database for employee benefits.\n\nParameters\n----------\nquery:\n a query string\n\nReturns\n-------\n a JSON response",
"metadata": {
"__metadata_info__": {}
},
"inputs": [
{
"type": "string",
"title": "query"
}
],
"outputs": [
{
"type": "string",
"title": "hr_data_context"
}
]
},
"input_mapping": {},
"output_mapping": {},
"raise_exceptions": false,
"component_plugin_name": "NodesPlugin",
"component_plugin_version": "25.4.0.dev0"
},
"c79bea59-c54b-4a54-8ac5-df6daaa60d22": {
"component_type": "PluginInputMessageNode",
"id": "c79bea59-c54b-4a54-8ac5-df6daaa60d22",
"name": "user_input_step",
"description": "",
"metadata": {
"__metadata_info__": {}
},
"inputs": [],
"outputs": [
{
"description": "the input value provided by the user",
"type": "string",
"title": "user_query"
}
],
"branches": [
"next"
],
"input_mapping": {},
"output_mapping": {
"user_provided_input": "user_query"
},
"message_template": "\nI am an HR Assistant, designed to answer your questions about HR matters.\nWhat kinds of questions do you have today?\nExample of HR topics:\n- Employee benefits\n- Salaries\n- Career advancement\n",
"rephrase": false,
"llm_config": null,
"component_plugin_name": "NodesPlugin",
"component_plugin_version": "25.4.0.dev0"
},
"d9b795cd-3949-4acb-9399-6d9f0a10d7e6": {
"component_type": "LlmNode",
"id": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6",
"name": "llm_answer_step",
"description": "",
"metadata": {
"__metadata_info__": {}
},
"inputs": [
{
"description": "\"user_question\" input variable for the template",
"type": "string",
"title": "user_question"
},
{
"description": "\"hr_data_context\" input variable for the template",
"type": "string",
"title": "hr_data_context"
}
],
"outputs": [
{
"type": "string",
"title": "answer"
}
],
"branches": [
"next"
],
"llm_config": {
"component_type": "VllmConfig",
"id": "22ce9b88-6c3f-40a4-88f9-2f81a7960993",
"name": "LLAMA_MODEL_ID",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"default_generation_parameters": null,
"url": "LLAMA_API_URL",
"model_id": "LLAMA_MODEL_ID"
},
"prompt_template": "\nYou are a knowledgeable, factual, and helpful HR assistant that can answer simple HR-related questions like salary and benefits.\nYour task:\n - Based on the HR data given below, answer the user's question\nImportant:\n - Be helpful and concise in your messages\n - Do not tell the user any details not mentioned in the tool response, let's be factual.\n\nHere is the User question:\n- {{ user_question }}\n\nHere is the HR data:\n- {{ hr_data_context }}\n"
},
"01c4a5ff-6568-42a0-a8de-8efaec780712": {
"component_type": "PluginOutputMessageNode",
"id": "01c4a5ff-6568-42a0-a8de-8efaec780712",
"name": "user_output_step",
"description": "",
"metadata": {
"__metadata_info__": {}
},
"inputs": [
{
"description": "\"answer\" input variable for the template",
"type": "string",
"title": "answer"
}
],
"outputs": [
{
"description": "the message added to the messages list",
"type": "string",
"title": "output_message"
}
],
"branches": [
"next"
],
"expose_message_as_output": true,
"message": "My Assistant's Response: {{ answer }}",
"input_mapping": {},
"output_mapping": {},
"message_type": "AGENT",
"rephrase": false,
"llm_config": null,
"component_plugin_name": "NodesPlugin",
"component_plugin_version": "25.4.0.dev0"
},
"953ccf06-5550-4597-9518-df1043269693": {
"component_type": "EndNode",
"id": "953ccf06-5550-4597-9518-df1043269693",
"name": "final_step",
"description": null,
"metadata": {
"__metadata_info__": {}
},
"inputs": [
{
"description": "the input value provided by the user",
"type": "string",
"title": "user_query"
},
{
"type": "string",
"title": "hr_data_context"
},
{
"type": "string",
"title": "answer"
},
{
"description": "the message added to the messages list",
"type": "string",
"title": "output_message"
}
],
"outputs": [
{
"description": "the input value provided by the user",
"type": "string",
"title": "user_query"
},
{
"type": "string",
"title": "hr_data_context"
},
{
"type": "string",
"title": "answer"
},
{
"description": "the message added to the messages list",
"type": "string",
"title": "output_message"
}
],
"branches": [],
"branch_name": "final_step"
},
"91594dfc-44fa-41d9-91ba-52ae7b0b0f16": {
"component_type": "StartNode",
"id": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16",
"name": "start_step",
"description": "",
"metadata": {
"__metadata_info__": {}
},
"inputs": [],
"outputs": [],
"branches": [
"next"
]
}
},
"agentspec_version": "25.4.1"
}
name: user_output_step_output_message_to_final_step_output_message_data_flow_edge
description: null
metadata: {}
source_node:
$component_ref: 01c4a5ff-6568-42a0-a8de-8efaec780712
source_output: output_message
destination_node:
$component_ref: 953ccf06-5550-4597-9518-df1043269693
destination_input: output_message
$referenced_components:
819ae9c0-4055-4fe6-ab05-f03c45775206:
component_type: ExtendedToolNode
id: 819ae9c0-4055-4fe6-ab05-f03c45775206
name: hr_lookup_step
description: ''
metadata:
__metadata_info__: {}
inputs:
- type: string
title: query
outputs:
- type: string
title: hr_data_context
branches:
- next
tool:
component_type: ServerTool
id: 6ca8f0ef-2496-4c9b-bbea-ca6e5969c07e
name: search_hr_database
description: "Function that searches the HR database for employee benefits.\n\
\nParameters\n----------\nquery:\n a query string\n\nReturns\n-------\n\
\ a JSON response"
metadata:
__metadata_info__: {}
inputs:
- type: string
title: query
outputs:
- type: string
title: hr_data_context
input_mapping: {}
output_mapping: {}
raise_exceptions: false
component_plugin_name: NodesPlugin
component_plugin_version: 25.4.0.dev0
c79bea59-c54b-4a54-8ac5-df6daaa60d22:
component_type: PluginInputMessageNode
id: c79bea59-c54b-4a54-8ac5-df6daaa60d22
name: user_input_step
description: ''
metadata:
__metadata_info__: {}
inputs: []
outputs:
- description: the input value provided by the user
type: string
title: user_query
branches:
- next
input_mapping: {}
output_mapping:
user_provided_input: user_query
message_template: '
I am an HR Assistant, designed to answer your questions about HR matters.
What kinds of questions do you have today?
Example of HR topics:
- Employee benefits
- Salaries
- Career advancement
'
rephrase: false
llm_config: null
component_plugin_name: NodesPlugin
component_plugin_version: 25.4.0.dev0
d9b795cd-3949-4acb-9399-6d9f0a10d7e6:
component_type: LlmNode
id: d9b795cd-3949-4acb-9399-6d9f0a10d7e6
name: llm_answer_step
description: ''
metadata:
__metadata_info__: {}
inputs:
- description: '"user_question" input variable for the template'
type: string
title: user_question
- description: '"hr_data_context" input variable for the template'
type: string
title: hr_data_context
outputs:
- type: string
title: answer
branches:
- next
llm_config:
component_type: VllmConfig
id: 22ce9b88-6c3f-40a4-88f9-2f81a7960993
name: LLAMA_MODEL_ID
description: null
metadata:
__metadata_info__: {}
default_generation_parameters: null
url: LLAMA_API_URL
model_id: LLAMA_MODEL_ID
prompt_template: "\nYou are a knowledgeable, factual, and helpful HR assistant\
\ that can answer simple HR-related questions like salary and benefits.\n\
Your task:\n - Based on the HR data given below, answer the user's question\n\
Important:\n - Be helpful and concise in your messages\n - Do not tell\
\ the user any details not mentioned in the tool response, let's be factual.\n\
\nHere is the User question:\n- {{ user_question }}\n\nHere is the HR data:\n\
- {{ hr_data_context }}\n"
01c4a5ff-6568-42a0-a8de-8efaec780712:
component_type: PluginOutputMessageNode
id: 01c4a5ff-6568-42a0-a8de-8efaec780712
name: user_output_step
description: ''
metadata:
__metadata_info__: {}
inputs:
- description: '"answer" input variable for the template'
type: string
title: answer
outputs:
- description: the message added to the messages list
type: string
title: output_message
branches:
- next
expose_message_as_output: true
message: 'My Assistant''s Response: {{ answer }}'
input_mapping: {}
output_mapping: {}
message_type: AGENT
rephrase: false
llm_config: null
component_plugin_name: NodesPlugin
component_plugin_version: 25.4.0.dev0
953ccf06-5550-4597-9518-df1043269693:
component_type: EndNode
id: 953ccf06-5550-4597-9518-df1043269693
name: final_step
description: null
metadata:
__metadata_info__: {}
inputs:
- description: the input value provided by the user
type: string
title: user_query
- type: string
title: hr_data_context
- type: string
title: answer
- description: the message added to the messages list
type: string
title: output_message
outputs:
- description: the input value provided by the user
type: string
title: user_query
- type: string
title: hr_data_context
- type: string
title: answer
- description: the message added to the messages list
type: string
title: output_message
branches: []
branch_name: final_step
91594dfc-44fa-41d9-91ba-52ae7b0b0f16:
component_type: StartNode
id: 91594dfc-44fa-41d9-91ba-52ae7b0b0f16
name: start_step
description: ''
metadata:
__metadata_info__: {}
inputs: []
outputs: []
branches:
- next
agentspec_version: 25.4.1
You can then load the configuration back into an assistant using the AgentSpecLoader.
from wayflowcore.agentspec import AgentSpecLoader
tool_registry = {"search_hr_database": search_hr_database}
assistant = AgentSpecLoader(tool_registry=tool_registry).load_json(serialized_assistant)
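Note that the serialized config records each tool's name, description, and signature, but not its Python implementation, so the tool_registry is what maps tool names in the config back to callables at load time. The core idea can be sketched with plain Python (the resolver and mock tool below are illustrative, not WayFlow API):

```python
import json

# Mock tool, mirroring the tutorial's search_hr_database (illustrative only).
def search_hr_database(query: str) -> str:
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}}'

# The registry maps the tool name found in the serialized config
# to the Python callable that implements it.
tool_registry = {"search_hr_database": search_hr_database}

def resolve_tool(tool_name: str):
    """Return the callable registered under tool_name, or fail loudly."""
    try:
        return tool_registry[tool_name]
    except KeyError:
        raise KeyError(f"No implementation registered for tool {tool_name!r}")

fetch = resolve_tool("search_hr_database")
record = json.loads(fetch("John Smith benefits"))
print(record["John Smith"]["salary"])  # -> $1,000
```

If a tool named in the config is missing from the registry, failing with an explicit error at load time is preferable to a confusing failure mid-conversation.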
Note
This guide uses the following extension/plugin Agent Spec components:
PluginInputMessageNode
PluginOutputMessageNode
See the list of available Agent Spec extension/plugin components in the API Reference.
Next steps#
In this tutorial, you learned how to build a fixed-flow assistant. To continue learning, explore the following resources:
Read the API Reference.
Take a look at the How-to Guides.
Full code#
Click on the card at the top of this page to download the full code for this guide or copy the code below.
# Copyright © 2025 Oracle and/or its affiliates.
#
# This software is under the Universal Permissive License
# (UPL) 1.0 (LICENSE-UPL or https://oss.oracle.com/licenses/upl) or Apache License
# 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0), at your option.

# %%[markdown]
# Tutorial - Build a Fixed-Flow Assistant
# ---------------------------------------

# How to use:
# Create a new Python virtual environment and install the latest WayFlow version.
# ```bash
# python -m venv venv-wayflowcore
# source venv-wayflowcore/bin/activate
# pip install --upgrade pip
# pip install "wayflowcore==26.1"
# ```

# You can now run the script:
# 1. As a Python file:
# ```bash
# python tutorial_flow.py
# ```
# 2. As a Notebook (in VSCode):
# When viewing the file,
# - press the keys Ctrl + Enter to run the selected cell
# - or Shift + Enter to run the selected cell and move to the cell below


# %%[markdown]
## Imports

# %%
from textwrap import dedent

from wayflowcore.controlconnection import ControlFlowEdge
from wayflowcore.dataconnection import DataFlowEdge
from wayflowcore.flow import Flow

# Create an LLM model to use later in the tutorial.
from wayflowcore.models import VllmModel
from wayflowcore.steps import (
    CompleteStep,
    InputMessageStep,
    OutputMessageStep,
    PromptExecutionStep,
    StartStep,
    ToolExecutionStep,
)
from wayflowcore.tools import tool

# LLM model configuration

llm = VllmModel(
    model_id="LLAMA_MODEL_ID",
    host_port="LLAMA_API_URL",
)


# %%[markdown]
## Define value names

# %%
# Names for the input parameters of the steps in the flow.
HR_QUERY = "user_query"
TOOL_QUERY = "query"
HR_DATA_CONTEXT = "hr_data_context"
QUERY_ANSWER = "answer"
USER_QUESTION = "user_question"

# %%[markdown]
## Define start step

# %%
# A start step. This is where the flow starts.
start_step = StartStep(name="start_step", input_descriptors=None)

# %%[markdown]
## Define user input step

# %%
user_input_message_template = dedent(
    """
    I am an HR Assistant, designed to answer your questions about HR matters.
    What kinds of questions do you have today?
    Example of HR topics:
    - Employee benefits
    - Salaries
    - Career advancement
    """
)

user_input_step = InputMessageStep(
    name="user_input_step",
    message_template=user_input_message_template,
    output_mapping={InputMessageStep.USER_PROVIDED_INPUT: HR_QUERY},
)

# %%[markdown]
## Define HR lookup step

# %%
from wayflowcore.property import StringProperty

# A tool which will run a query on the HR system and return some data.
@tool(description_mode="only_docstring", output_descriptors=[StringProperty(HR_DATA_CONTEXT)])
def search_hr_database(query: str) -> str:
    """Function that searches the HR database for employee benefits.

    Parameters
    ----------
    query:
        a query string

    Returns
    -------
    a JSON response

    """
    # Returns mock data.
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'

# Step that runs the lookup of a query using the tool.
hr_lookup_step = ToolExecutionStep(
    name="hr_lookup_step",
    tool=search_hr_database,
)

# %%[markdown]
## Define llm answer step

# %%
# The template for the prompt to be used by the LLM. Notice the use of parameters
# such as {{ user_question }}. The template is evaluated using the parameters that
# are passed into the PromptExecutionStep.
hrassistant_prompt_template = dedent(
    """
    You are a knowledgeable, factual, and helpful HR assistant that can answer simple \
    HR-related questions like salary and benefits.
    Your task:
    - Based on the HR data given below, answer the user's question
    Important:
    - Be helpful and concise in your messages
    - Do not tell the user any details not mentioned in the tool response, let's be factual.

    Here is the User question:
    - {{ user_question }}

    Here is the HR data:
    - {{ hr_data_context }}
    """
)

# Step that evaluates the prompt template and then passes the prompt to the LLM.
llm_answer_step = PromptExecutionStep(
    name="llm_answer_step",
    prompt_template=hrassistant_prompt_template,
    llm=llm,
    output_descriptors=[StringProperty(QUERY_ANSWER)],
)


# %%[markdown]
## Define user output step

# %%
# Step that outputs the answer to the user's query.
user_output_step = OutputMessageStep(
    name="user_output_step",
    message_template="My Assistant's Response: {{ answer }}",
)

# %%[markdown]
## Define flow transitions

# %%
# Define the transitions between the steps.
control_flow_edges = [
    ControlFlowEdge(source_step=start_step, destination_step=user_input_step),
    ControlFlowEdge(source_step=user_input_step, destination_step=hr_lookup_step),
    ControlFlowEdge(source_step=hr_lookup_step, destination_step=llm_answer_step),
    ControlFlowEdge(source_step=llm_answer_step, destination_step=user_output_step),
    # Note: you can use a CompleteStep as the termination of the flow.
    ControlFlowEdge(source_step=user_output_step, destination_step=CompleteStep(name="final_step")),
]

# %%[markdown]
## Define data transitions

# %%
# Define the data flows between steps.
data_flow_edges = [
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=hr_lookup_step,
        destination_input=TOOL_QUERY,
    ),
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=llm_answer_step,
        destination_input=USER_QUESTION,
    ),
    DataFlowEdge(
        source_step=hr_lookup_step,
        source_output=HR_DATA_CONTEXT,
        destination_step=llm_answer_step,
        destination_input=HR_DATA_CONTEXT,
    ),
    DataFlowEdge(
        source_step=llm_answer_step,
        source_output=QUERY_ANSWER,
        destination_step=user_output_step,
        destination_input=QUERY_ANSWER,
    ),
]

# %%[markdown]
## Create assistant

# %%
# Create the flow, passing in the begin step, the control_flow_edges, and the data_flow_edges.
assistant = Flow(
    begin_step=start_step,
    control_flow_edges=control_flow_edges,
    data_flow_edges=data_flow_edges,
)

# %%[markdown]
## Run assistant

# %%
# Start a conversation.
conversation = assistant.start_conversation()

# Execute the assistant.
# This will print out the message to the user, then stop at the user input step.
conversation.execute()

# Ask a question of the assistant by appending a user message.
conversation.append_user_message("Does John Smith earn more than Mary Jones?")

# Execute the assistant again. Execution continues from the user input step.
# As there are no other steps, the flow will run to the end.
status = conversation.execute()

# "output_message" is the default key name for the output value
# of the OutputMessageStep.
from wayflowcore.executors.executionstatus import FinishedStatus

if isinstance(status, FinishedStatus):
    answer = status.output_values[OutputMessageStep.OUTPUT]
    print(answer)
else:
    print(
        f"Incorrect execution status, expected FinishedStatus, got {status.__class__.__name__}"
    )

# %%[markdown]
## Export config to Agent Spec

# %%
from wayflowcore.agentspec import AgentSpecExporter

serialized_assistant = AgentSpecExporter().to_json(assistant)

# %%[markdown]
## Load Agent Spec config

# %%
from wayflowcore.agentspec import AgentSpecLoader

tool_registry = {"search_hr_database": search_hr_database}

assistant = AgentSpecLoader(tool_registry=tool_registry).load_json(serialized_assistant)