WayFlow Glossary#
This glossary introduces the key terms and concepts used across the WayFlow library.
Assistant#
WayFlow enables the creation of AI-powered assistants including:
Flows which are structured assistants that follow a pre-defined task completion process;
Agents which are conversational assistants that can autonomously plan, think, act, and execute tools to complete tasks in a flexible manner.
WayFlow assistants can be composed together and configured to solve complex tasks with varying degrees of autonomy, ranging from fully self-directed to highly prescriptive.
See the Tutorials and Use-Case Examples to learn to build WayFlow assistants.
Agent#
An Agent is a type of LLM-powered assistant that can interact with users, leverage external tools, and interact with other WayFlow assistants to take specific actions in order to solve user requests through conversational interfaces.
A simple agentic system may involve a single Agent interacting with a human user. More advanced assistants may also involve the use of multiple agents. Finally, Agents can be integrated in Flows with the AgentExecutionStep.
To learn more about Agents, see the tutorial Build a Simple Conversational Assistant with Agents, or read the API reference.
Branching#
Branching is the ability of a Flow to conditionally transition between different steps based on specific input values or conditions, letting developers create more dynamic and adaptive workflows that respond to varying scenarios. Branching is achieved with the BranchingStep, which defines multiple possible branches and maps input values to specific steps.
Read the guide How to Create Conditional Transitions in Flows for how to use branching. For more information, read the API reference.
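Conceptually, a BranchingStep is a mapping from input values to named branches. The following plain-Python sketch illustrates the idea only; it is not the WayFlow API, and all names are hypothetical:

```python
# Illustrative sketch of branching logic (not the actual WayFlow API).
def branching_step(value, branches, default="default"):
    """Map an input value to the name of the branch to follow next."""
    return branches.get(value, default)

# Map specific input values to named branches.
branches = {"yes": "approval_flow", "no": "rejection_flow"}

next_step = branching_step("yes", branches)
print(next_step)                          # approval_flow
print(branching_step("maybe", branches))  # default
```

Unmatched values fall through to a default branch, which is how a Flow stays well-defined for unexpected inputs.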
Client Tool#
See Tools.
Context Provider#
Context providers are callable components that provide dynamic contextual information to WayFlow assistants. They are useful for connecting external data sources to an assistant.
For instance, giving the current time of the day to an assistant can be achieved with a context provider.
Read about the different types of context providers in the API reference.
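The current-time example above can be sketched with a plain callable. This is an illustration of the concept, not the WayFlow context-provider API:

```python
from datetime import datetime, timezone

# Illustrative sketch: a context provider is a callable that returns
# fresh contextual information each time it is invoked.
def current_time_provider() -> str:
    return datetime.now(timezone.utc).isoformat()

# The assistant can splice the provided context into its prompt.
context = current_time_provider()
prompt = f"The current UTC time is {context}. Answer the user's question."
print(prompt)
```

Because the provider is called at execution time rather than at assistant-construction time, the assistant always sees up-to-date information.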
Control Flow Edge#
A Control Flow Edge is a connector that represents a directional link between two steps in a Flow. It specifies a possible transition between a specific branch of a source step and a destination step.
This concept enables assistant developers to explicitly define the expected transitions that can occur within a Flow.
Read more about control flow edges in the API reference.
Composability#
Composability refers to the ability of WayFlow assistants to be decomposed into smaller components, combined with other components, and rearranged to form new assistants that can solve a wide range of tasks. By supporting composability, you can create complex agentic systems from simpler building blocks.
WayFlow supports four types of agentic patterns:
Calling Agents in Flows: Integrate conversational capabilities into structured workflows.
Calling Agents in Agents: Combine multiple agents to execute complex tasks autonomously.
Calling Flows in Agents: Use structured workflows as tools within conversational agents.
Calling Flows in Flows: Create nested workflows to model complex business processes.
Conversation#
A Conversation is a stateful object that represents the execution state of a WayFlow assistant. It stores the list of messages as well as information produced during the assistant execution (for example, tool calls, inputs/outputs produced by steps in a flow).
The conversation object can be modified by the assistant through the execute method, which updates the conversation state based on the assistant’s logic.
It also serves as the interface from which end users can interact with WayFlow assistants (for example, by getting the current list of messages, appending a user message, and so on).
The usual code flow when executing WayFlow assistants would be as follows:
A new conversation is created using the start_conversation method from Flows and Agents, with optional inputs. Then, in the main execution loop:
The user may interact with the assistant (for example, by adding a new message).
The assistant execution is started or resumed with the execute method.
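The loop above can be sketched with a stub assistant. Only the method names start_conversation and execute mirror the ones mentioned in this entry; everything else (EchoAssistant, the dict-based conversation, the status string) is hypothetical and stands in for the real WayFlow objects:

```python
# Illustrative stand-in for a WayFlow assistant (not the real API).
class EchoAssistant:
    def start_conversation(self, inputs=None):
        # A conversation is a stateful object holding messages and inputs.
        return {"messages": [], "inputs": inputs or {}}

    def execute(self, conversation):
        # Updates the conversation state based on the assistant's logic.
        last = conversation["messages"][-1]
        conversation["messages"].append(f"echo: {last}")
        # A real status would signal e.g. "waiting for user" or "finished".
        return "waiting_for_user"

assistant = EchoAssistant()
conversation = assistant.start_conversation()

for user_message in ["hello", "bye"]:
    # 1. The user interacts with the assistant (adds a new message).
    conversation["messages"].append(user_message)
    # 2. Execution is started/resumed; the conversation state is updated.
    status = assistant.execute(conversation)

print(conversation["messages"])
# ['hello', 'echo: hello', 'bye', 'echo: bye']
```

The key point is that the conversation object, not the assistant, carries all execution state between iterations of the loop.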
For more information about how the Conversation is used, see the Tutorials, or read the API reference.
Data Flow Edge#
A Data Flow Edge is a connector that represents a logical link between steps or context providers within a Flow. It defines how data is propagated from the output of one step, or context provider, to the input of another step.
This concept enables assistant developers to explicitly define the expected orchestration of data flows within Flows.
Read more about data flow edges in the API reference.
Execution Interrupts#
An ExecutionInterrupt is a mechanism that allows assistant developers to intervene in the standard execution of an assistant, providing the ability to stop or pause the execution when specific events or conditions are met, and execute a custom callback function in response.
For example, the execution can be interrupted when a time limit is reached or when a maximum number of tokens is exceeded, triggering a callback to handle the interruption.
Read more about execution interrupts in the API reference.
Execution Status#
The ExecutionStatus is a runtime indicator of an assistant’s execution state in WayFlow. This status provides information about the assistant’s current activity, such as whether it has finished its execution, is waiting for user input, or is waiting on a tool execution result from a Client tool.
The ExecutionStatus is used in the execution loops of Agents and Flows to properly manage the conversation with the assistant.
Read more about the types of execution statuses and their use in the API reference.
Flow#
A Flow is a type of structured assistant composed of individual steps that are connected to form a coherent sequence of actions. Each step in a Flow is designed to perform a specific function, similar to functions in programming.
Flows can have loops, conditional transitions, and multiple end points. Flows can also integrate sub-flows and Agents to enable more complex capabilities.
A Flow can be used to tackle a wide range of business processes and other tasks in a controllable and efficient way.
Read the tutorial how to Build a Simple Fixed-Flow Assistant with Flows, or see the available How-to Guides about Flows. Also, check the API reference.
Generation Config#
The LLM generation config is the set of parameters that control the output of a Large Language Model (LLM) in WayFlow.
These parameters include the maximum number of tokens to generate (max_tokens), the sampling temperature, and the probability threshold for nucleus sampling (top_p).
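A minimal container for these parameters might look as follows. This is an illustrative sketch; the actual WayFlow generation-config class may differ in name and fields:

```python
from dataclasses import dataclass

# Illustrative container for the generation parameters described above
# (not the actual WayFlow class).
@dataclass
class GenerationConfig:
    max_tokens: int = 512     # upper bound on the number of generated tokens
    temperature: float = 0.7  # sampling temperature; lower = more deterministic
    top_p: float = 1.0        # nucleus-sampling probability threshold

# Override only the parameters you care about; the rest keep defaults.
config = GenerationConfig(max_tokens=256, temperature=0.2)
print(config)
```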
Learn more about the LLM generation config in the How to Specify the Generation Configuration when Using LLMs or read the API reference.
Large Language Model (LLM)#
A Large Language Model is a type of deep neural network trained on vast amounts of text data that can understand, generate, and manipulate human language through pattern recognition and statistical relationships. It processes input text through multiple layers of neural networks, using specific mechanisms to understand context and relationships between words.
Modern LLMs contain billions of parameters and often require dedicated hardware for both training and inference. As such, they are typically hosted through APIs by their respective providers, allowing for ease of integration and access.
Notably, WayFlow does not handle the inference of LLMs on its server, instead relying on these external APIs to leverage the power of LLMs. This approach allows WayFlow to remain lightweight while still providing access to the capabilities of these powerful models.
Read our guide How to Use LLMs from Different LLM Providers, or see the API reference for the list of supported models.
Message#
A Message is a core concept in WayFlow, representing a unit of communication between users and assistants. It provides a structured way to hold information and can contain various types of data including text, tool requests, results, as well as other metadata.
Messages are used throughout the library to hold information and facilitate communication between different components. The list of messages generated during an assistant execution can be accessed directly from a Conversation.
Read more about messages in the API reference.
Prompt Engineering and Optimization#
Prompt engineering and optimization is the systematic process of designing, refining, and improving prompts to achieve more accurate, reliable, and desired outputs from language models. It involves iterative testing and refinement of prompt structures, careful consideration of context windows, and strategic use of examples and formatting.
Methods such as Automated Prompt Engineering can help improve prompts by using algorithms to optimize the prompt performance on a specific metric.
Prompt Engineering Styles#
| Technique | Description | Example |
|---|---|---|
| Zero-shot | No example, just the task | “Summarize this article.” |
| Few-shot | Provide examples | “Q: What is 2+2? A: 4…” |
| Chain-of-thought | Encourage step-by-step thinking | “Let’s think step-by-step…” |
| Role prompting | Assign a persona | “You are an expert lawyer…” |
| Constraint-based | Set strict formats or word limits | “Answer in JSON with keys ‘title’…” |
Prompt Template#
A prompt template is a standardized prompt structure with placeholders for variable inputs, designed to maintain consistency across similar queries while allowing for customization. WayFlow uses jinja-style placeholders to specify the input variables of the prompt (for more information, see the Jinja2 reference).
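The jinja-style substitution can be approximated with the standard library alone. The renderer below is a simplified stand-in for illustration; WayFlow templates are rendered with Jinja2 itself, which supports far more than plain variable substitution:

```python
import re

# Minimal stand-in for jinja-style variable substitution (illustration
# only; real rendering is done by Jinja2).
def render(template: str, **variables) -> str:
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",                 # matches {{ variable_name }}
        lambda m: str(variables[m.group(1)]),   # replace with provided value
        template,
    )

prompt = render("Summarize the following text:\n{{ text }}", text="WayFlow is ...")
print(prompt)
```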
Prompt templates can be used in WayFlow components that use LLMs, such as Agents and the PromptExecutionStep.
See the Tutorials and Use-Case Examples for concrete examples, or check the TemplateRenderingStep API reference.
Properties#
A Property is a metadata descriptor that provides information about an input/output value of a component (Tools, Steps, Flows, and Agents) in a WayFlow assistant. Properties can represent various data types such as boolean, float, integer, string, as well as nested types such as list, dict, or object.
Properties include attributes such as name, description, and default value, which help to clarify the purpose and behavior of the component, making it easier to understand and interact with the component.
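The shape of such a descriptor can be sketched with a dataclass. This is purely illustrative; the actual WayFlow Property classes are typed per value kind (string, float, list, and so on):

```python
from dataclasses import dataclass
from typing import Any

# Illustrative shape of a property descriptor (not the WayFlow classes).
@dataclass
class PropertySketch:
    name: str
    type: str
    description: str = ""
    default: Any = None

# Describe an input value of a hypothetical weather tool.
city = PropertySketch(
    name="city",
    type="string",
    description="City for which to fetch the weather.",
    default="Zurich",
)
print(city.name, city.default)
```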
To learn more about the use of properties, read the guide How to Change Input and Output Descriptors of Components, and check the API reference.
Remote Tool#
See Tools.
Retrieval Augmented Generation (RAG)#
Retrieval Augmented Generation (RAG) is a technique to enhance LLM outputs by first retrieving relevant information from a knowledge base and then incorporating it into the generation process. This approach enhances the model’s ability to access and utilize specific information beyond its training data.
RAG systems typically involve a retrieval component that searches for relevant information and a generation component that incorporates this information into the final output.
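The two components can be sketched with a toy pipeline. The keyword-overlap retrieval below is deliberately simplistic; real RAG systems use embedding-based similarity search over a vector store:

```python
# Toy retrieval-augmented generation pipeline (illustration only).
documents = [
    "WayFlow assistants come in two flavors: Flows and Agents.",
    "Tokens are the fundamental units of text processing in LLMs.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    # Retrieved context is incorporated into the generation prompt.
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}"

print(build_prompt("What are tokens in LLMs?"))
```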
Serialization#
In WayFlow, serialization refers to the ability to capture the current configuration of an assistant and represent it in a compact, human-readable form. This allows assistants to be easily shared, stored, or deployed across different environments, while maintaining their functionality and consistency.
Read the guide How to Serialize and Deserialize Flows and Agents or check the API reference for more information.
Server Tool#
See Tools.
Step#
A Step is an atomic element of a Flow that encapsulates a specific piece of logic or functionality. WayFlow provides a variety of steps with functionalities ranging from LLM generation and tool use to branching, data extraction, and much more. By composing steps together, WayFlow enables the creation of powerful structured assistants to solve diverse use cases efficiently and reliably.
Check the list of available steps in the API reference.
Structured Generation#
Structured generation is the process of controlling LLM outputs to conform to specific formats, schemas, or patterns, ensuring consistency and machine-readability of generated content. It involves techniques for guiding the model to produce outputs that follow predetermined structures while maintaining natural language fluency.
This approach is particularly valuable for generating data in formats like JSON, XML, or other structured representations.
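A common complement to structured generation is a post-hoc check that the model's raw output actually conforms to the expected schema. The sketch below validates JSON output against a set of required keys; it is an illustration, not part of WayFlow:

```python
import json

# Illustrative post-hoc validation of structured LLM output.
def parse_structured(raw: str, required_keys: set[str]) -> dict:
    data = json.loads(raw)  # raises ValueError on malformed output
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Raw output as it might come back from a structured-generation call.
raw_output = '{"title": "WayFlow Glossary", "summary": "Key terms."}'
record = parse_structured(raw_output, {"title", "summary"})
print(record["title"])
```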
For more information, see the guide How to Do Structured LLM Generation in Flows.
Tools#
WayFlow supports three types of tools:
Server Tool#
A Server Tool is the simplest type of tool available in WayFlow. It is defined by the signature of the tool to execute, including:
A tool name.
A tool description.
The names, types, and optional default values for the input parameters of the tool.
A Python callable to invoke upon tool execution.
The output type.
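The signature elements listed above can be sketched as plain data. This is an illustration of what a server tool carries, not the actual WayFlow ServerTool constructor:

```python
# Illustrative sketch of a server tool's signature (not the WayFlow API).
def get_weather(city: str = "Zurich") -> str:
    # Hypothetical implementation standing in for a real backend call.
    return f"Sunny in {city}"

server_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {"city": {"type": "string", "default": "Zurich"}},
    "callable": get_weather,   # invoked upon tool execution
    "output_type": "string",
}

# Because the callable lives on the server, execution is immediate.
result = server_tool["callable"]("Paris")
print(result)  # Sunny in Paris
```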
See the guide How to Build Assistants with Tools for how to use Server tools.
For more information about the ServerTool, read the API reference.
Client Tool#
A Client Tool is a type of tool that can be built in WayFlow. Unlike a Server Tool, which is executed directly on the server side, a Client Tool returns a ToolRequest upon execution; the request is executed on the client side, which then sends the execution result back to the assistant.
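The round trip can be sketched as follows. All names here are hypothetical stand-ins for illustration, not the WayFlow ToolRequest API:

```python
# Illustrative round trip for a client-side tool (not the WayFlow API).
def assistant_step():
    """The assistant cannot run the tool itself; it emits a request."""
    return {"tool": "get_weather", "args": {"city": "Paris"}}

def client_execute(request):
    """Runs on the client, which holds the actual tool implementation."""
    tools = {"get_weather": lambda city: f"Sunny in {city}"}
    return tools[request["tool"]](**request["args"])

request = assistant_step()                # 1. assistant emits a tool request
tool_result = client_execute(request)     # 2. client executes it ...
print(tool_result)                        # 3. ... and sends the result back
```

This pattern is useful when the tool needs resources only the client has, such as local files or user-specific credentials.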
See the guide How to Build Assistants with Tools. For more information about the ClientTool, read the API reference.
Remote Tool#
A Remote tool is a type of tool that can be used in WayFlow to perform API calls.
For more information about the RemoteTool, read the API reference.
Tokens#
Tokens are the fundamental units of text processing in LLMs, representing words, parts of words, or characters that the model uses to understand and generate language.
They form the basis for the model’s context window size and directly impact processing costs and performance.
It is worth noting that there are two types of tokens relevant to LLMs: input tokens and output tokens. Input tokens are the tokens fed into the model as input, whereas output tokens are the tokens generated by the model. Output tokens are generally more expensive than input tokens; for example, pricing might be $3 per 1M input tokens and $10 per 1M output tokens.
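Using those illustrative prices, the cost of a request is straightforward arithmetic:

```python
# Worked example with the illustrative prices above:
# $3 per 1M input tokens, $10 per 1M output tokens.
def llm_cost(input_tokens: int, output_tokens: int,
             in_price: float = 3.0, out_price: float = 10.0) -> float:
    """Cost in dollars, with prices quoted per 1M tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A request consuming 200k input tokens and producing 50k output tokens:
print(llm_cost(200_000, 50_000))  # 1.1
```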
Variable#
A Variable is a flow-scoped data container that enables the storage and retrieval of data throughout a Flow. Variables act as the shared state or context of a Flow (often referred to as state in other frameworks), providing a way to decouple data from specific steps and make it available for use within multiple parts of the Flow. They can be accessed and modified throughout a Flow using VariableReadStep and VariableWriteStep operations.
Note that Variables are complementary to the values stored in the input/output dictionaries, which are specific to each step’s execution.
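The read/write pattern can be sketched with a flow-scoped store. In this illustration (not the WayFlow API), the write and read methods play the roles that VariableWriteStep and VariableReadStep play in a real Flow:

```python
# Illustrative flow-scoped variable store (not the WayFlow API).
class FlowState:
    def __init__(self):
        self._variables = {}

    def write(self, name, value):        # ~ VariableWriteStep
        self._variables[name] = value

    def read(self, name, default=None):  # ~ VariableReadStep
        return self._variables.get(name, default)

state = FlowState()
state.write("user_name", "Ada")                  # written by one step ...
greeting = f"Hello {state.read('user_name')}!"   # ... read by another
print(greeting)  # Hello Ada!
```

Because the state is attached to the Flow rather than to any single step, data written early in the Flow remains available to every later step.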
Learn more about Variables in the API reference.