Build a Simple Fixed-Flow Assistant with Flows#

Download Python Script

Python script/notebook for this guide.

Simple flow tutorial script

Prerequisites

This guide does not assume any prior knowledge about WayFlow. However, it assumes the reader has a basic knowledge of Large Language Models (LLMs).

You will need a working installation of WayFlow - see Installation.

Learning goals#

In this tutorial, you will develop a simple HR chatbot that answers an employee’s HR-related questions.

By doing this tutorial, you will:

  1. Learn the basics of using a Flow to build an assistant.

  2. Learn how to pass values around in WayFlow.

  3. Learn how to use some of the more common types of steps.

Tip

Another type of assistant supported by WayFlow is Agents. To learn more about building conversational assistants with Agents, check out the Build a Simple Agents tutorial.

A primer on Flows#

A Flow is a type of assistant composed of individual Steps connected to form a coherent sequence of actions. By using a flow-based approach, WayFlow can tackle a wide range of business processes and tasks.

Each step in the Flow performs a specific function. The flow you will build in this tutorial uses the following types of steps: StartStep, InputMessageStep, ToolExecutionStep, PromptExecutionStep, and OutputMessageStep.

Note

For advanced LLM users: Typically, an LLM chatbot maintains a chat history to support multi-turn conversations—this is referred to as an Agent. In contrast, the PromptExecutionStep is a stateless function that simply calls the model to generate an output based on the provided input prompt.

Building the flow#

In this tutorial, you will create an HR chatbot that answers a user’s HR-related queries. You will do this by building a fixed-flow assistant using Python code.

The chatbot will need to perform the following steps, in order, to answer a user’s HR questions.

  1. The user is greeted and prompted for a question.

  2. The user inputs their question to the assistant.

  3. The assistant uses a tool to search for the requested information, querying an HR system.

  4. The assistant uses an LLM to answer the user’s question using the data retrieved in the previous step.

  5. The assistant returns the answer generated by the LLM to the user.

Given that you know what logical steps the assistant needs to perform to achieve the goal, how do you turn this into a working WayFlow assistant?

Turning these logical steps into code can be broken down into several tasks, each taking you toward the finished assistant. The rest of this tutorial addresses these tasks.
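Before writing any WayFlow code, the logical steps can be sketched as plain Python functions. The sketch below is a mock (no WayFlow imports; the lookup and the LLM call are stand-ins) that only illustrates the data handed from one step to the next:

```python
import json

def greet_user() -> str:
    # Step 1: the welcome message shown to the user.
    return "I am an HR Assistant. What kinds of questions do you have today?"

def read_user_question(raw: str) -> str:
    # Step 2: capture the user's question.
    return raw.strip()

def lookup_hr_data(query: str) -> str:
    # Step 3: mocked HR lookup; always returns the same JSON payload.
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}}'

def answer_question(question: str, hr_data: str) -> str:
    # Step 4: stand-in for the LLM call that interprets the HR data.
    records = json.loads(hr_data)
    return f"(answer to {question!r} based on {len(records)} HR record(s))"

def show_answer(answer: str) -> str:
    # Step 5: format the final message for the user.
    return f"My Assistant's Response: {answer}"

print(greet_user())
question = read_user_question("What is the salary for John Smith?")
print(show_answer(answer_question(question, lookup_hr_data(question))))
```

The rest of the tutorial replaces each of these stand-in functions with a real WayFlow step.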

Set up a Jupyter Notebook#

You can follow along with this tutorial by creating a Jupyter Notebook. Ensure that wayflowcore is installed — see the installation instructions for details.

Alternatively, you can use any Python environment to run the code in this tutorial.

Imports and LLM configuration#

First import what is needed for this tutorial:

from textwrap import dedent

from wayflowcore.controlconnection import ControlFlowEdge
from wayflowcore.dataconnection import DataFlowEdge
from wayflowcore.flow import Flow
from wayflowcore.flowbuilder import FlowBuilder

# Create an LLM model to use later in the tutorial.
from wayflowcore.models import VllmModel
from wayflowcore.steps import (
    CompleteStep,
    InputMessageStep,
    OutputMessageStep,
    PromptExecutionStep,
    StartStep,
    ToolExecutionStep,
)
from wayflowcore.tools import tool

WayFlow supports several LLM API providers. First choose an LLM from one of the options below:

from wayflowcore.models import OCIGenAIModel, OCIClientConfigWithApiKey

llm = OCIGenAIModel(
    model_id="provider.model-id",
    compartment_id="compartment-id",
    client_config=OCIClientConfigWithApiKey(
        service_endpoint="https://url-to-service-endpoint.com",
    ),
)

Note

API keys should never be stored in code. Use environment variables and/or tools such as python-dotenv instead.
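As an illustration of that advice, settings can be read from the environment instead of being hard-coded. The variable names OCI_SERVICE_ENDPOINT and OCI_COMPARTMENT_ID below are assumptions made for this sketch, not names required by WayFlow or OCI:

```python
import os

# Read deployment settings from environment variables instead of
# hard-coding them in source. The defaults are placeholders only.
service_endpoint = os.environ.get("OCI_SERVICE_ENDPOINT", "https://url-to-service-endpoint.com")
compartment_id = os.environ.get("OCI_COMPARTMENT_ID", "compartment-id")

print(service_endpoint)
```

With python-dotenv, the same variables can be loaded from a local `.env` file kept out of version control.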

Naming data#

Passing values between steps is a very common occurrence when building Flows. This is done using DataFlowEdges, which define how a value is passed from one step to another.

A step has input and output descriptors, which describe what values it requires to run and what values it produces. These can be thought of as names that describe the step’s inputs and outputs.

By default, the input_descriptors are automatically inferred from any input of the step class that supports Jinja templating. Parameters in a Jinja template look like this: {{this_is_a_template_parameter}}. There will be one input descriptor for each parameter in the template, named after the parameter. Input descriptors can also be inferred from other sources, such as the input parameter schema of the tool in a ToolExecutionStep: there will be one input descriptor for each parameter the tool requires.
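The inference idea can be illustrated with a simplified sketch that extracts {{ parameter }} names from a template string. WayFlow's actual inference handles full Jinja syntax; this regex version only conveys the concept:

```python
import re

def template_parameters(template: str) -> list:
    """Return the unique {{ parameter }} names in a Jinja-style template,
    in order of first appearance."""
    names = re.findall(r"\{\{\s*(\w+)\s*\}\}", template)
    unique = []
    for name in names:
        if name not in unique:
            unique.append(name)
    return unique

print(template_parameters("Question: {{ user_question }}\nData: {{ hr_data_context }}"))
# → ['user_question', 'hr_data_context']
```

Each extracted name would correspond to one input descriptor of the step.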

Note

Jinja templating introduces security concerns that are addressed by WayFlow by restricting Jinja’s rendering capabilities. Please check our guide on How to write secure prompts with Jinja templating for more information.

Output descriptors can also be considered names for a step’s outputs. For many steps, there will be a default name for each output. For example, for a ToolExecutionStep the default name for the output of the step is ToolExecutionStep.TOOL_OUTPUT.

Input and output descriptors can be renamed using input_mapping or output_mapping, allowing for more meaningful names.

Wherever the code needs to reference an input or output descriptor, a variable holds the name. Doing this eliminates errors caused by typos.

Below are the names you will use for the values passed around in this tutorial.

# Names for the input parameters of the steps in the flow.
HR_QUERY = "user_query"
TOOL_QUERY = "query"
HR_DATA_CONTEXT = "hr_data_context"
QUERY_ANSWER = "answer"
USER_QUESTION = "user_question"

Later in this tutorial, you will examine in detail how passing values works.

Specifying the steps#

The flow follows a simple sequence of logical steps that were defined earlier in the tutorial. They consist of prompting the user for a question, searching the HR system for the required information, using an LLM to generate an answer from the retrieved data, and finally answering the user.

In a nutshell, the flow consists of the following steps in order:

  • StartStep: Acts as the starting point for the flow. It performs no action and is not strictly required, but omitting it produces a warning message.

  • InputMessageStep: Where the user is asked for input - the question the user wants to ask.

  • ToolExecutionStep: Queries the HR system to look up the data relevant to the user’s query.

  • PromptExecutionStep: Uses an LLM to ingest and interpret the HR data and the user’s query to generate an answer.

  • OutputMessageStep: Displays the answer generated in the previous step to the user.

You now need to build each of the steps used in the flow.

START_STEP#

This is where the flow starts. Here the StartStep serves purely as an entry point for the flow; the message welcoming the user is displayed in the USER_INPUT_STEP.

# A start step. This is where the flow starts.
start_step = StartStep(name="start_step", input_descriptors=None)

USER_INPUT_STEP#

The InputMessageStep prompts the user for information and saves the response for subsequent use. This step requires at least a message template, which defines the prompt presented to the user. In this context, the user is welcomed and asked to provide their HR-related question.

An output_mapping is used to give the step's output_descriptor a new, more meaningful name. By default, the output_descriptor for the value produced by the step is InputMessageStep.USER_PROVIDED_INPUT. Remember that this is only the default name of the descriptor; it does not hold the value itself. A more meaningful name is useful, so the output_descriptor is renamed to the value held in HR_QUERY. When accessing the output value of this step in a DataFlowEdge, the name in HR_QUERY can be used.

user_input_message_template = dedent(
    """
    I am an HR Assistant, designed to answer your questions about HR matters.
    What kinds of questions do you have today?
    Example of HR topics:
    - Employee benefits
    - Salaries
    - Career advancement
    """
)

user_input_step = InputMessageStep(
    name="user_input_step",
    message_template=user_input_message_template,
    output_mapping={InputMessageStep.USER_PROVIDED_INPUT: HR_QUERY},
)

HR_LOOKUP_STEP#

The ToolExecutionStep executes a tool (here, a decorated Python function) with the passed-in query. For simplicity, the tool is mocked out and always returns the same data. The mock data contains HR information for two fictional employees, John Smith and Mary Jones.

An output_mapping is used to specify a new, more meaningful name for the output_descriptor of this step. By default, the output_descriptor for the value produced by this step is ToolExecutionStep.TOOL_OUTPUT. It is renamed to the more meaningful name in HR_DATA_CONTEXT. This name can be used in a DataFlowEdge to access the output value of the step.

from wayflowcore.property import StringProperty

# A tool which will run a query on the HR system and return some data.
@tool(description_mode="only_docstring", output_descriptors=[StringProperty(HR_DATA_CONTEXT)])
def search_hr_database(query: str) -> str:
    """Function that searches the HR database for employee benefits.

    Parameters
    ----------
    query:
        a query string

    Returns
    -------
        a JSON response

    """
    # Returns mock data.
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'


# Step that runs the lookup of a query using the tool.
hr_lookup_step = ToolExecutionStep(
    name="hr_lookup_step",
    tool=search_hr_database,
)
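To see the shape of the mock data the tool returns (and which the answer step will later receive as hr_data_context), you can parse the JSON string directly:

```python
import json

# The same JSON string the mocked tool returns, parsed to inspect its shape:
# a mapping from employee name to a record of benefits and salary.
mock_payload = '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'
records = json.loads(mock_payload)

print(sorted(records))                  # → ['John Smith', 'Mary Jones']
print(records["John Smith"]["salary"])  # → $1,000
```

In a real deployment, the tool body would run an actual query and serialize its result in a similar way.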

LLM_ANSWER_STEP#

The PromptExecutionStep executes a prompt using an LLM. It requires a prompt template and an LLM to do so.

As in the USER_INPUT_STEP, the step's output_descriptor is replaced with a more meaningful name. The default output_descriptor for the output of PromptExecutionStep is PromptExecutionStep.OUTPUT. It is renamed to the value held in QUERY_ANSWER. This name can be used in a DataFlowEdge to access the output value of the step.

from wayflowcore.property import StringProperty

# The template for the prompt to be used by the LLM. Notice the use of parameters
# such as {{ user_question }}. The template is evaluated using the parameters that
# are passed into the PromptExecutionStep.
hrassistant_prompt_template = dedent(
    """
    You are a knowledgeable, factual, and helpful HR assistant that can answer simple \
    HR-related questions like salary and benefits.
    Your task:
        - Based on the HR data given below, answer the user's question
    Important:
        - Be helpful and concise in your messages
        - Do not tell the user any details not mentioned in the tool response, let's be factual.

    Here is the User question:
    - {{ user_question }}

    Here is the HR data:
    - {{ hr_data_context }}
    """
)

# Step that evaluates the prompt template and then passes the prompt to the LLM.
llm_answer_step = PromptExecutionStep(
    name="llm_answer_step",
    prompt_template=hrassistant_prompt_template,
    llm=llm,
    output_descriptors=[StringProperty(QUERY_ANSWER)],
)

USER_OUTPUT_STEP#

The OutputMessageStep displays information to the user. It uses a message template to generate the output.

# Step that outputs the answer to the user's query.
user_output_step = OutputMessageStep(
    name="user_output_step",
    message_template="My Assistant's Response: {{ answer }}",
)

Step transitions#

The Flow is almost done; you just need to specify the control flow, i.e., the transitions between the steps defined earlier. These are straightforward here because you are building a sequential flow. Note that the final step, which displays the answer to the user, transitions to a CompleteStep, a step that acts as a termination point for the flow. Alternatively, it can transition back to the USER_INPUT_STEP, giving the user another opportunity to chat with the fixed-flow assistant.

# Define the transitions between the steps.
control_flow_edges = [
    ControlFlowEdge(source_step=start_step, destination_step=user_input_step),
    ControlFlowEdge(source_step=user_input_step, destination_step=hr_lookup_step),
    ControlFlowEdge(source_step=hr_lookup_step, destination_step=llm_answer_step),
    ControlFlowEdge(source_step=llm_answer_step, destination_step=user_output_step),
    # Note: you can use a CompleteStep as the termination of the flow.
    ControlFlowEdge(source_step=user_output_step, destination_step=CompleteStep(name="final_step")),
]
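Conceptually, these control flow edges form a directed chain: walking the chain from the start step visits every step in execution order. The sketch below uses plain strings for the step names and needs no WayFlow imports:

```python
# Each edge maps a source step to its destination step, mirroring the
# ControlFlowEdge list above.
edges = {
    "start_step": "user_input_step",
    "user_input_step": "hr_lookup_step",
    "hr_lookup_step": "llm_answer_step",
    "llm_answer_step": "user_output_step",
    "user_output_step": "final_step",
}

# Walk the chain from the start step until a step with no outgoing edge.
order = ["start_step"]
while order[-1] in edges:
    order.append(edges[order[-1]])

print(order)
# → ['start_step', 'user_input_step', 'hr_lookup_step', 'llm_answer_step', 'user_output_step', 'final_step']
```

In non-sequential flows, a step can have several outgoing edges (branches), so the walk becomes a graph traversal rather than a simple chain.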

In addition to defining the transitions, you must specify the data flow, i.e., how values are passed from one step to the next. This is done using DataFlowEdges. Each DataFlowEdge has a source_step, which defines the step that produces the value; a source_output, the name of the value; a destination_step, which defines the step that consumes the value; and a destination_input, the name of the input parameter on the destination step that receives the value.

# Define the data flows between steps.
data_flow_edges = [
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=hr_lookup_step,
        destination_input=TOOL_QUERY,
    ),
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=llm_answer_step,
        destination_input=USER_QUESTION,
    ),
    DataFlowEdge(
        source_step=hr_lookup_step,
        source_output=HR_DATA_CONTEXT,
        destination_step=llm_answer_step,
        destination_input=HR_DATA_CONTEXT,
    ),
    DataFlowEdge(
        source_step=llm_answer_step,
        source_output=QUERY_ANSWER,
        destination_step=user_output_step,
        destination_input=QUERY_ANSWER,
    ),
]

Creating the assistant#

Finally, you create the flow by using the Flow class.

You set start_step as the begin_step and pass in the control_flow_edges and data_flow_edges.

# Create the flow, passing in the begin step, the control_flow_edges, and the data_flow_edges.
assistant = Flow(
    begin_step=start_step,
    control_flow_edges=control_flow_edges,
    data_flow_edges=data_flow_edges,
)

This completes the fixed-flow HR assistant.

Creating the assistant with the Flow Builder#

You can build the exact same flow with the chainable Flow Builder API. This keeps control and data wiring concise and readable.

# Create the same assistant using the Flow Builder API.
assistant = (
    FlowBuilder()
    .add_sequence([user_input_step, hr_lookup_step, llm_answer_step, user_output_step])
    .set_entry_point(user_input_step)
    # Link the final step to completion
    .set_finish_points(user_output_step)
    # Wire the data connections
    .add_data_edge(user_input_step, hr_lookup_step, (HR_QUERY, TOOL_QUERY))
    .add_data_edge(user_input_step, llm_answer_step, (HR_QUERY, USER_QUESTION))
    .add_data_edge(hr_lookup_step, llm_answer_step, (HR_DATA_CONTEXT, HR_DATA_CONTEXT))
    .add_data_edge(llm_answer_step, user_output_step, (QUERY_ANSWER, QUERY_ANSWER))
    .build()
)

API Reference: FlowBuilder

See also

For more information, see the how-to guide on using the Flow Builder.

Running the assistant#

Before running the assistant, what are some questions that you could ask it? The following questions can be answered from the mock HR data and are a good starting point.

  1. What is the salary for John Smith?

  2. Does John Smith earn more than Mary Jones?

  3. How much annual leave does John Smith get?

You can also ask the assistant questions that it should not be able to answer, because it has not been given any data relevant to the question:

  1. How much does Jones Jones earn?

  2. What is Mary Jones's favorite color?

With some questions ready, you can now run the assistant. In the example code below, you will pass one of these questions to the assistant.

Note

It is possible to create an assistant that answers one question and then returns to the beginning to start over. This could be done by connecting the final step back to the user input step in the final ControlFlowEdge, as shown below.

ControlFlowEdge(
    source_step=user_output_step,
    destination_step=user_input_step,
),

Run the code below to execute the assistant. It will ask the assistant a single one of the above questions and exit.

# Start a conversation.
conversation = assistant.start_conversation()

# Execute the assistant.
# This will print out the message to the user, then stop at the user input step.
conversation.execute()

# Ask a question of the assistant by appending a user message.
conversation.append_user_message("Does John Smith earn more than Mary Jones?")

# Execute the assistant again. Continues from the user input step.
# As there are no other steps, the flow will run to the end.
status = conversation.execute()

# "output_message" is the default key name for the output value
# of the OutputMessageStep.
from wayflowcore.executors.executionstatus import FinishedStatus

if isinstance(status, FinishedStatus):
    answer = status.output_values[OutputMessageStep.OUTPUT]
    print(answer)
else:
    print(
        f"Incorrect execution status, expected FinishedStatus, got {status.__class__.__name__}"
    )

The process can be summarized as follows.

The HR Assistant first prints the welcome message defined above. Next, it captures the user’s question - here you pass in a question using conversation.append_user_message. It processes the input through the predefined steps and transitions, and finally returns the output.
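This execute / pause-for-input / resume lifecycle can be mimicked in plain Python. The class and the status strings below are illustrative stand-ins, not WayFlow's actual API:

```python
class MockConversation:
    """Illustrative stand-in for a flow-backed conversation; not WayFlow's API."""

    def __init__(self) -> None:
        self.messages = []
        self.waiting_for_user = False

    def execute(self) -> str:
        if not self.waiting_for_user:
            # First call: emit the welcome prompt, then pause at the input step.
            self.messages.append("assistant: What kinds of questions do you have today?")
            self.waiting_for_user = True
            return "USER_MESSAGE_REQUESTED"
        # Later call: consume the pending user message and run to the end.
        self.messages.append("assistant: <answer generated from the HR data>")
        return "FINISHED"

    def append_user_message(self, text: str) -> None:
        self.messages.append(f"user: {text}")

conversation = MockConversation()
print(conversation.execute())   # → USER_MESSAGE_REQUESTED
conversation.append_user_message("What is the salary for John Smith?")
print(conversation.execute())   # → FINISHED
```

The real flow behaves analogously: the first execute pauses at the InputMessageStep, and the second runs the remaining steps to completion.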

Congratulations, you have built your first fixed-flow assistant!

Agent Spec Exporting/Loading#

You can export the assistant configuration to its Agent Spec configuration using the AgentSpecExporter.

from wayflowcore.agentspec import AgentSpecExporter

serialized_assistant = AgentSpecExporter().to_json(assistant)

Here is what the Agent Spec representation looks like:

{
  "component_type": "Flow",
  "id": "bf3ca5b3-39bc-40ca-9d9a-a387bee07031",
  "name": "flow_e057bcd6__auto",
  "description": "",
  "metadata": {
    "__metadata_info__": {}
  },
  "inputs": [],
  "outputs": [
    {
      "description": "the input value provided by the user",
      "type": "string",
      "title": "user_query"
    },
    {
      "type": "string",
      "title": "hr_data_context"
    },
    {
      "type": "string",
      "title": "answer"
    },
    {
      "description": "the message added to the messages list",
      "type": "string",
      "title": "output_message"
    }
  ],
  "start_node": {
    "$component_ref": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16"
  },
  "nodes": [
    {
      "$component_ref": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16"
    },
    {
      "$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
    },
    {
      "$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
    },
    {
      "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
    },
    {
      "$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
    },
    {
      "$component_ref": "953ccf06-5550-4597-9518-df1043269693"
    }
  ],
  "control_flow_connections": [
    {
      "component_type": "ControlFlowEdge",
      "id": "a71aebf8-0416-4811-bb89-6cf4627dc170",
      "name": "start_step_to_user_input_step_control_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "from_node": {
        "$component_ref": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16"
      },
      "from_branch": null,
      "to_node": {
        "$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
      }
    },
    {
      "component_type": "ControlFlowEdge",
      "id": "b2e0fb5c-8b57-4c32-9261-bd036ee607c1",
      "name": "user_input_step_to_hr_lookup_step_control_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "from_node": {
        "$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
      },
      "from_branch": null,
      "to_node": {
        "$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
      }
    },
    {
      "component_type": "ControlFlowEdge",
      "id": "4d1bea5b-0c63-4042-891b-aca964e1640f",
      "name": "hr_lookup_step_to_llm_answer_step_control_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "from_node": {
        "$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
      },
      "from_branch": null,
      "to_node": {
        "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
      }
    },
    {
      "component_type": "ControlFlowEdge",
      "id": "9323cd7b-5da2-46b6-8333-46ac19d22d4d",
      "name": "llm_answer_step_to_user_output_step_control_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "from_node": {
        "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
      },
      "from_branch": null,
      "to_node": {
        "$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
      }
    },
    {
      "component_type": "ControlFlowEdge",
      "id": "e8075a13-3e48-4611-b68f-aa10423309ec",
      "name": "user_output_step_to_final_step_control_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "from_node": {
        "$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
      },
      "from_branch": null,
      "to_node": {
        "$component_ref": "953ccf06-5550-4597-9518-df1043269693"
      }
    }
  ],
  "data_flow_connections": [
    {
      "component_type": "DataFlowEdge",
      "id": "1a76f7f9-19b0-438e-aa3e-274dc06d4aa2",
      "name": "user_input_step_user_query_to_hr_lookup_step_query_data_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "source_node": {
        "$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
      },
      "source_output": "user_query",
      "destination_node": {
        "$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
      },
      "destination_input": "query"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "f4a11a6e-0b79-4ede-8673-30ad1b79af2d",
      "name": "user_input_step_user_query_to_llm_answer_step_user_question_data_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "source_node": {
        "$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
      },
      "source_output": "user_query",
      "destination_node": {
        "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
      },
      "destination_input": "user_question"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "ee8b7cdd-762a-4202-b320-d71dab6d6318",
      "name": "hr_lookup_step_hr_data_context_to_llm_answer_step_hr_data_context_data_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "source_node": {
        "$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
      },
      "source_output": "hr_data_context",
      "destination_node": {
        "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
      },
      "destination_input": "hr_data_context"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "ac2e45dc-75c2-427f-8c75-1114aeda295d",
      "name": "llm_answer_step_answer_to_user_output_step_answer_data_flow_edge",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "source_node": {
        "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
      },
      "source_output": "answer",
      "destination_node": {
        "$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
      },
      "destination_input": "answer"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "cae80dba-f95a-4a24-9b93-684928a23b11",
      "name": "user_input_step_user_query_to_final_step_user_query_data_flow_edge",
      "description": null,
      "metadata": {},
      "source_node": {
        "$component_ref": "c79bea59-c54b-4a54-8ac5-df6daaa60d22"
      },
      "source_output": "user_query",
      "destination_node": {
        "$component_ref": "953ccf06-5550-4597-9518-df1043269693"
      },
      "destination_input": "user_query"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "fe0c03de-5f80-4575-8d0e-605211ab9b3c",
      "name": "hr_lookup_step_hr_data_context_to_final_step_hr_data_context_data_flow_edge",
      "description": null,
      "metadata": {},
      "source_node": {
        "$component_ref": "819ae9c0-4055-4fe6-ab05-f03c45775206"
      },
      "source_output": "hr_data_context",
      "destination_node": {
        "$component_ref": "953ccf06-5550-4597-9518-df1043269693"
      },
      "destination_input": "hr_data_context"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "a22200bc-dbba-4658-aeb1-0ddced3beb3e",
      "name": "llm_answer_step_answer_to_final_step_answer_data_flow_edge",
      "description": null,
      "metadata": {},
      "source_node": {
        "$component_ref": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6"
      },
      "source_output": "answer",
      "destination_node": {
        "$component_ref": "953ccf06-5550-4597-9518-df1043269693"
      },
      "destination_input": "answer"
    },
    {
      "component_type": "DataFlowEdge",
      "id": "de05873e-c565-41be-a3f8-f9c6600e4f57",
      "name": "user_output_step_output_message_to_final_step_output_message_data_flow_edge",
      "description": null,
      "metadata": {},
      "source_node": {
        "$component_ref": "01c4a5ff-6568-42a0-a8de-8efaec780712"
      },
      "source_output": "output_message",
      "destination_node": {
        "$component_ref": "953ccf06-5550-4597-9518-df1043269693"
      },
      "destination_input": "output_message"
    }
  ],
  "$referenced_components": {
    "819ae9c0-4055-4fe6-ab05-f03c45775206": {
      "component_type": "ExtendedToolNode",
      "id": "819ae9c0-4055-4fe6-ab05-f03c45775206",
      "name": "hr_lookup_step",
      "description": "",
      "metadata": {
        "__metadata_info__": {}
      },
      "inputs": [
        {
          "type": "string",
          "title": "query"
        }
      ],
      "outputs": [
        {
          "type": "string",
          "title": "hr_data_context"
        }
      ],
      "branches": [
        "next"
      ],
      "tool": {
        "component_type": "ServerTool",
        "id": "6ca8f0ef-2496-4c9b-bbea-ca6e5969c07e",
        "name": "search_hr_database",
        "description": "Function that searches the HR database for employee benefits.\n\nParameters\n----------\nquery:\n    a query string\n\nReturns\n-------\n    a JSON response",
        "metadata": {
          "__metadata_info__": {}
        },
        "inputs": [
          {
            "type": "string",
            "title": "query"
          }
        ],
        "outputs": [
          {
            "type": "string",
            "title": "hr_data_context"
          }
        ]
      },
      "input_mapping": {},
      "output_mapping": {},
      "raise_exceptions": false,
      "component_plugin_name": "NodesPlugin",
      "component_plugin_version": "25.4.0.dev0"
    },
    "c79bea59-c54b-4a54-8ac5-df6daaa60d22": {
      "component_type": "PluginInputMessageNode",
      "id": "c79bea59-c54b-4a54-8ac5-df6daaa60d22",
      "name": "user_input_step",
      "description": "",
      "metadata": {
        "__metadata_info__": {}
      },
      "inputs": [],
      "outputs": [
        {
          "description": "the input value provided by the user",
          "type": "string",
          "title": "user_query"
        }
      ],
      "branches": [
        "next"
      ],
      "input_mapping": {},
      "output_mapping": {
        "user_provided_input": "user_query"
      },
      "message_template": "\nI am an HR Assistant, designed to answer your questions about HR matters.\nWhat kinds of questions do you have today?\nExample of HR topics:\n- Employee benefits\n- Salaries\n- Career advancement\n",
      "rephrase": false,
      "llm_config": null,
      "component_plugin_name": "NodesPlugin",
      "component_plugin_version": "25.4.0.dev0"
    },
    "d9b795cd-3949-4acb-9399-6d9f0a10d7e6": {
      "component_type": "LlmNode",
      "id": "d9b795cd-3949-4acb-9399-6d9f0a10d7e6",
      "name": "llm_answer_step",
      "description": "",
      "metadata": {
        "__metadata_info__": {}
      },
      "inputs": [
        {
          "description": "\"user_question\" input variable for the template",
          "type": "string",
          "title": "user_question"
        },
        {
          "description": "\"hr_data_context\" input variable for the template",
          "type": "string",
          "title": "hr_data_context"
        }
      ],
      "outputs": [
        {
          "type": "string",
          "title": "answer"
        }
      ],
      "branches": [
        "next"
      ],
      "llm_config": {
        "component_type": "VllmConfig",
        "id": "22ce9b88-6c3f-40a4-88f9-2f81a7960993",
        "name": "LLAMA_MODEL_ID",
        "description": null,
        "metadata": {
          "__metadata_info__": {}
        },
        "default_generation_parameters": null,
        "url": "LLAMA_API_URL",
        "model_id": "LLAMA_MODEL_ID"
      },
      "prompt_template": "\nYou are a knowledgeable, factual, and helpful HR assistant that can answer simple     HR-related questions like salary and benefits.\nYour task:\n    - Based on the HR data given below, answer the user's question\nImportant:\n    - Be helpful and concise in your messages\n    - Do not tell the user any details not mentioned in the tool response, let's be factual.\n\nHere is the User question:\n- {{ user_question }}\n\nHere is the HR data:\n- {{ hr_data_context }}\n"
    },
    "01c4a5ff-6568-42a0-a8de-8efaec780712": {
      "component_type": "PluginOutputMessageNode",
      "id": "01c4a5ff-6568-42a0-a8de-8efaec780712",
      "name": "user_output_step",
      "description": "",
      "metadata": {
        "__metadata_info__": {}
      },
      "inputs": [
        {
          "description": "\"answer\" input variable for the template",
          "type": "string",
          "title": "answer"
        }
      ],
      "outputs": [
        {
          "description": "the message added to the messages list",
          "type": "string",
          "title": "output_message"
        }
      ],
      "branches": [
        "next"
      ],
      "expose_message_as_output": true,
      "message": "My Assistant's Response: {{ answer }}",
      "input_mapping": {},
      "output_mapping": {},
      "message_type": "AGENT",
      "rephrase": false,
      "llm_config": null,
      "component_plugin_name": "NodesPlugin",
      "component_plugin_version": "25.4.0.dev0"
    },
    "953ccf06-5550-4597-9518-df1043269693": {
      "component_type": "EndNode",
      "id": "953ccf06-5550-4597-9518-df1043269693",
      "name": "final_step",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "inputs": [
        {
          "description": "the input value provided by the user",
          "type": "string",
          "title": "user_query"
        },
        {
          "type": "string",
          "title": "hr_data_context"
        },
        {
          "type": "string",
          "title": "answer"
        },
        {
          "description": "the message added to the messages list",
          "type": "string",
          "title": "output_message"
        }
      ],
      "outputs": [
        {
          "description": "the input value provided by the user",
          "type": "string",
          "title": "user_query"
        },
        {
          "type": "string",
          "title": "hr_data_context"
        },
        {
          "type": "string",
          "title": "answer"
        },
        {
          "description": "the message added to the messages list",
          "type": "string",
          "title": "output_message"
        }
      ],
      "branches": [],
      "branch_name": "final_step"
    },
    "91594dfc-44fa-41d9-91ba-52ae7b0b0f16": {
      "component_type": "StartNode",
      "id": "91594dfc-44fa-41d9-91ba-52ae7b0b0f16",
      "name": "start_step",
      "description": "",
      "metadata": {
        "__metadata_info__": {}
      },
      "inputs": [],
      "outputs": [],
      "branches": [
        "next"
      ]
    }
  },
  "agentspec_version": "25.4.1"
}

You can then load the configuration back into an assistant using the AgentSpecLoader.

from wayflowcore.agentspec import AgentSpecLoader

tool_registry = {"search_hr_database": search_hr_database}

assistant = AgentSpecLoader(tool_registry=tool_registry).load_json(serialized_assistant)
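The tool_registry maps each Agent Spec tool name to its Python implementation, so the loader can reattach the callable to the deserialized flow. As a quick standalone sanity check that the mock tool's payload is valid JSON (a minimal sketch that re-declares the tutorial's mock function as plain Python, without the @tool decorator, so it runs without WayFlow installed):

```python
import json

# Plain re-declaration of the tutorial's mock tool (no @tool decorator).
def search_hr_database(query: str) -> str:
    """Mock HR lookup returning a JSON string, as in the tutorial."""
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'

# The PromptExecutionStep receives this string as hr_data_context;
# verify it parses as JSON and contains the expected employees.
data = json.loads(search_hr_database("benefits"))
print(sorted(data))  # → ['John Smith', 'Mary Jones']
```

Because the LLM is instructed to answer only from this payload, a malformed JSON string here would silently degrade the assistant's answers, so a check like this is cheap insurance.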

Note

This guide uses the following extension/plugin Agent Spec components:

  • PluginInputMessageNode

  • PluginOutputMessageNode

See the list of available Agent Spec extension/plugin components in the API Reference.

Next steps#

In this tutorial, you learned how to build a fixed-flow assistant. To continue learning, check out the other WayFlow tutorials, such as building conversational assistants with Agents.

Full code#

Click on the card at the top of this page to download the full code for this guide or copy the code below.

# Copyright © 2025 Oracle and/or its affiliates.
#
# This software is under the Apache License 2.0 (LICENSE-APACHE or
# http://www.apache.org/licenses/LICENSE-2.0) or Universal Permissive License
# (UPL) 1.0 (LICENSE-UPL or https://oss.oracle.com/licenses/upl), at your option.

# %%[markdown]
# Tutorial - Build a Fixed-Flow Assistant
# ---------------------------------------

# How to use:
# Create a new Python virtual environment and install the latest WayFlow version.
# ```bash
# python -m venv venv-wayflowcore
# source venv-wayflowcore/bin/activate
# pip install --upgrade pip
# pip install "wayflowcore==26.1"
# ```

# You can now run the script
# 1. As a Python file:
# ```bash
# python tutorial_flow.py
# ```
# 2. As a Notebook (in VSCode):
# When viewing the file,
#  - press the keys Ctrl + Enter to run the selected cell
#  - or Shift + Enter to run the selected cell and move to the cell below

# %%[markdown]
## Imports

# %%
from textwrap import dedent

from wayflowcore.controlconnection import ControlFlowEdge
from wayflowcore.dataconnection import DataFlowEdge
from wayflowcore.flow import Flow
from wayflowcore.flowbuilder import FlowBuilder

# Create an LLM model to use later in the tutorial.
from wayflowcore.models import VllmModel
from wayflowcore.steps import (
    CompleteStep,
    InputMessageStep,
    OutputMessageStep,
    PromptExecutionStep,
    StartStep,
    ToolExecutionStep,
)
from wayflowcore.tools import tool

# LLM model configuration

llm = VllmModel(
    model_id="LLAMA_MODEL_ID",
    host_port="LLAMA_API_URL",
)

# %%[markdown]
## Define value names

# %%
# Names for the input parameters of the steps in the flow.
HR_QUERY = "user_query"
TOOL_QUERY = "query"
HR_DATA_CONTEXT = "hr_data_context"
QUERY_ANSWER = "answer"
USER_QUESTION = "user_question"

# %%[markdown]
## Define start step

# %%
# A start step. This is where the flow starts.
start_step = StartStep(name="start_step", input_descriptors=None)

# %%[markdown]
## Define user input step

# %%
user_input_message_template = dedent(
    """
    I am an HR Assistant, designed to answer your questions about HR matters.
    What kinds of questions do you have today?
    Example of HR topics:
    - Employee benefits
    - Salaries
    - Career advancement
    """
)

user_input_step = InputMessageStep(
    name="user_input_step",
    message_template=user_input_message_template,
    output_mapping={InputMessageStep.USER_PROVIDED_INPUT: HR_QUERY},
)

# %%[markdown]
## Define HR lookup step

# %%
from wayflowcore.property import StringProperty

# A tool which will run a query on the HR system and return some data.
@tool(description_mode="only_docstring", output_descriptors=[StringProperty(HR_DATA_CONTEXT)])
def search_hr_database(query: str) -> str:
    """Function that searches the HR database for employee benefits.

    Parameters
    ----------
    query:
        a query string

    Returns
    -------
        a JSON response

    """
    # Returns mock data.
    return '{"John Smith": {"benefits": "Unlimited PTO", "salary": "$1,000"}, "Mary Jones": {"benefits": "25 days", "salary": "$10,000"}}'

# Step that runs the lookup of a query using the tool.
hr_lookup_step = ToolExecutionStep(
    name="hr_lookup_step",
    tool=search_hr_database,
)

# %%[markdown]
## Define llm answer step

# %%
# The template for the prompt to be used by the LLM. Notice the use of parameters
# such as {{ user_question }}. The template is evaluated using the parameters that
# are passed into the PromptExecutionStep.
hrassistant_prompt_template = dedent(
    """
    You are a knowledgeable, factual, and helpful HR assistant that can answer simple \
    HR-related questions like salary and benefits.
    Your task:
        - Based on the HR data given below, answer the user's question
    Important:
        - Be helpful and concise in your messages
        - Do not tell the user any details not mentioned in the tool response, let's be factual.

    Here is the User question:
    - {{ user_question }}

    Here is the HR data:
    - {{ hr_data_context }}
    """
)

# Step that evaluates the prompt template and then passes the prompt to the LLM.
llm_answer_step = PromptExecutionStep(
    name="llm_answer_step",
    prompt_template=hrassistant_prompt_template,
    llm=llm,
    output_descriptors=[StringProperty(QUERY_ANSWER)],
)

# %%[markdown]
## Define user output step

# %%
# Step that outputs the answer to the user's query.
user_output_step = OutputMessageStep(
    name="user_output_step",
    message_template="My Assistant's Response: {{ answer }}",
)

# %%[markdown]
## Define flow transitions

# %%
# Define the transitions between the steps.
control_flow_edges = [
    ControlFlowEdge(source_step=start_step, destination_step=user_input_step),
    ControlFlowEdge(source_step=user_input_step, destination_step=hr_lookup_step),
    ControlFlowEdge(source_step=hr_lookup_step, destination_step=llm_answer_step),
    ControlFlowEdge(source_step=llm_answer_step, destination_step=user_output_step),
    # Note: you can use a CompleteStep as the termination of the flow.
    ControlFlowEdge(source_step=user_output_step, destination_step=CompleteStep(name="final_step")),
]

# %%[markdown]
## Define data transitions

# %%
# Define the data flows between steps.
data_flow_edges = [
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=hr_lookup_step,
        destination_input=TOOL_QUERY,
    ),
    DataFlowEdge(
        source_step=user_input_step,
        source_output=HR_QUERY,
        destination_step=llm_answer_step,
        destination_input=USER_QUESTION,
    ),
    DataFlowEdge(
        source_step=hr_lookup_step,
        source_output=HR_DATA_CONTEXT,
        destination_step=llm_answer_step,
        destination_input=HR_DATA_CONTEXT,
    ),
    DataFlowEdge(
        source_step=llm_answer_step,
        source_output=QUERY_ANSWER,
        destination_step=user_output_step,
        destination_input=QUERY_ANSWER,
    ),
]

# %%[markdown]
## Create assistant

# %%
# Create the flow, passing in the begin step, the control_flow_edges, and the data_flow_edges.
assistant = Flow(
    begin_step=start_step,
    control_flow_edges=control_flow_edges,
    data_flow_edges=data_flow_edges,
)

# %%[markdown]
## Create assistant FlowBuilder

# %%
# Create the same assistant using the Flow Builder API.
assistant = (
    FlowBuilder()
    .add_sequence([user_input_step, hr_lookup_step, llm_answer_step, user_output_step])
    .set_entry_point(user_input_step)
    # Link the final step to completion
    .set_finish_points(user_output_step)
    # Wire the data connections
    .add_data_edge(user_input_step, hr_lookup_step, (HR_QUERY, TOOL_QUERY))
    .add_data_edge(user_input_step, llm_answer_step, (HR_QUERY, USER_QUESTION))
    .add_data_edge(hr_lookup_step, llm_answer_step, (HR_DATA_CONTEXT, HR_DATA_CONTEXT))
    .add_data_edge(llm_answer_step, user_output_step, (QUERY_ANSWER, QUERY_ANSWER))
    .build()
)

# %%[markdown]
## Run assistant

# %%
# Start a conversation.
conversation = assistant.start_conversation()

# Execute the assistant.
# This will print out the message to the user, then stop at the user input step.
conversation.execute()

# Ask a question of the assistant by appending a user message.
conversation.append_user_message("Does John Smith earn more than Mary Jones?")

# Execute the assistant again. Continues from the UserInputStep.
# As there are no other steps, the flow will run to the end.
status = conversation.execute()

# "output_message" is the default key name for the output value
# of the OutputMessageStep.
from wayflowcore.executors.executionstatus import FinishedStatus

if isinstance(status, FinishedStatus):
    answer = status.output_values[OutputMessageStep.OUTPUT]
    print(answer)
else:
    print(
        f"Incorrect execution status, expected FinishedStatus, got {status.__class__.__name__}"
    )

# %%[markdown]
## Export config to Agent Spec

# %%
from wayflowcore.agentspec import AgentSpecExporter

serialized_assistant = AgentSpecExporter().to_json(assistant)

# %%[markdown]
## Load Agent Spec config

# %%
from wayflowcore.agentspec import AgentSpecLoader

tool_registry = {"search_hr_database": search_hr_database}

assistant = AgentSpecLoader(tool_registry=tool_registry).load_json(serialized_assistant)