PromptTemplate#
This page presents all APIs and classes related to prompt templates.
- class wayflowcore.templates.template.PromptTemplate(messages, output_parser=None, input_descriptors=None, pre_rendering_transforms=None, post_rendering_transforms=None, tools=None, native_tool_calling=True, response_format=None, native_structured_generation=True, generation_config=None, _partial_values=<factory>, *, id=<factory>, __metadata_info__=<factory>, name='', description=None)#
Represents a flexible and extensible template for constructing prompts to be sent to large language models (LLMs).
The PromptTemplate class enables the definition of prompt messages with variable placeholders, supports both native and custom tool calling, and allows for structured output generation. It manages input descriptors, message transforms (applied before and after chat-history rendering), and partial formatting for efficiency. The class also integrates with output parsers, tools, and LLM generation configurations. A minimal usage sketch follows the parameter list below.
- Parameters:
messages (Sequence[Message | MessageAsDictT]) –
output_parser (OutputParser | List[OutputParser] | None) –
input_descriptors (List[Property] | None) –
pre_rendering_transforms (List[MessageTransform] | None) –
post_rendering_transforms (List[MessageTransform] | None) –
tools (List[Tool] | None) –
native_tool_calling (bool) –
response_format (Property | None) –
native_structured_generation (bool) –
generation_config (LlmGenerationConfig | None) –
_partial_values (Dict[str, Any]) –
id (str) –
__metadata_info__ (Dict[str, Any]) –
name (str) –
description (str | None) –
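For illustration, a minimal sketch of constructing and formatting such a template. The wayflowcore.templates import path and the role/content dict form of messages are assumptions based on the signature above; placeholders use the Jinja-style syntax shown elsewhere on this page.

>>> from wayflowcore.templates import PromptTemplate  # import path is an assumption
>>> template = PromptTemplate(
...     messages=[
...         {"role": "system", "content": "You are an assistant for {{domain}}."},
...         PromptTemplate.CHAT_HISTORY_PLACEHOLDER,  # replaced by the chat history when rendering
...         {"role": "user", "content": "{{question}}"},
...     ],
... )
>>> prompt = template.format(
...     inputs={"domain": "geography", "question": "What is the capital of Switzerland?"}
... )

The resulting Prompt contains the rendered messages and can be passed to an LLM for generation.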
- CHAT_HISTORY_PLACEHOLDER: ClassVar[Message] = Message(id='6cfb283e-832f-4d4b-bdbd-00a181a20a40', __metadata_info__={}, role='system', contents=[TextContent(content='$$__CHAT_HISTORY_PLACEHOLDER__$$')], tool_requests=None, tool_result=None, display_only=False, sender=None, recipients=set())#
Message placeholder used when the chat history is rendered as a sequence of chat messages.
- CHAT_HISTORY_PLACEHOLDER_NAME: ClassVar[str] = '__CHAT_HISTORY__'#
Reserved name of the placeholder for the chat history, if rendered in one message.
- RESPONSE_FORMAT_PLACEHOLDER_NAME: ClassVar[str] = '__RESPONSE_FORMAT__'#
Reserved name of the placeholder for the expected output format. Only used with non-native structured generation, so that the JSON format can be specified anywhere in the prompt.
- TOOL_PLACEHOLDER_NAME: ClassVar[str] = '__TOOLS__'#
Reserved name of the placeholder for tools.
- copy()#
Returns a copy of the template.
- Return type:
PromptTemplate
- format(inputs=None)#
Formats the prompt into a list of messages to pass to the LLM.
- Parameters:
inputs (Dict[str, Any] | None) –
- Return type:
Prompt
- classmethod from_string(template, output_parser=None, input_descriptors=None, pre_rendering_transforms=None, post_rendering_transforms=None, tools=None, native_tool_calling=True, response_format=None, native_structured_generation=True, generation_config=None)#
Creates a prompt template from a string.
- Parameters:
template (str) –
output_parser (OutputParser | None) –
input_descriptors (List[Property] | None) –
pre_rendering_transforms (List[MessageTransform] | None) –
post_rendering_transforms (List[MessageTransform] | None) –
tools (List[Tool] | None) –
native_tool_calling (bool) –
response_format (Property | None) –
native_structured_generation (bool) –
generation_config (LlmGenerationConfig | None) –
- Return type:
PromptTemplate
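A minimal sketch of creating a template from a single string, reusing the assumed import above:

>>> summarizer = PromptTemplate.from_string("Summarize the following text:\n{{text}}")
>>> prompt = summarizer.format(inputs={"text": "WayFlow helps build LLM-powered assistants."})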
- generation_config: LlmGenerationConfig | None = None#
Parameters to configure the generation.
- input_descriptors: List[Property] | None = None#
Input descriptors that will be picked up by PromptExecutionStep or AgentExecutionStep. Resolved by default from the variables present in the messages.
- native_structured_generation: bool = True#
Whether to use native structured generation. Not all LLM providers support it.
- native_tool_calling: bool = True#
Whether to use the model's native tool calling. Not all LLM providers support it.
- output_parser: OutputParser | List[OutputParser] | None = None#
Post-processing applied to the raw output of the LLM.
- post_rendering_transforms: List[MessageTransform] | None = None#
Message transforms applied to the rendered list of messages.
- pre_rendering_transforms: List[MessageTransform] | None = None#
Message transforms applied to the list of messages before it is rendered into the template.
- tools: List[Tool] | None = None#
Tools to use in the prompt.
- with_additional_post_rendering_transform(transform)#
Appends an additional post-rendering transform to this template.
- Parameters:
transform (MessageTransform) –
- Return type:
PromptTemplate
- with_generation_config(generation_config, override=True)#
Returns a copy of the template with the given generation config. The override flag controls whether the given config overrides the template's existing config, or is overridden by it.
- Parameters:
generation_config (LlmGenerationConfig | None) –
override (bool) –
- Return type:
PromptTemplate
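A sketch of attaching a generation config to the template from the sketch above. The LlmGenerationConfig import path and constructor arguments are assumptions:

>>> from wayflowcore.models import LlmGenerationConfig  # import path is an assumption
>>> config = LlmGenerationConfig(temperature=0.0)  # constructor arguments are assumptions
>>> tuned_template = template.with_generation_config(config, override=True)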
- with_output_parser(output_parser)#
Replaces the output parser of this template.
- Parameters:
output_parser (OutputParser | List[OutputParser]) –
- Return type:
PromptTemplate
- with_partial(inputs)#
Partially formats the prompt with the given inputs, to avoid re-formatting everything on each call when some inputs do not change. These inputs are not rendered immediately, but stored for a later call to format().
- Parameters:
inputs (Dict[str, Any]) –
- Return type:
PromptTemplate
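A sketch of partial formatting, reusing the assumed import above:

>>> base = PromptTemplate.from_string("System locale: {{locale}}. Question: {{question}}")
>>> localized = base.with_partial({"locale": "en_US"})  # bind the input that does not change
>>> prompt = localized.format(inputs={"question": "What is 2 + 2?"})  # provide the rest per call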
- with_response_format(response_format)#
Returns a copy of the template equipped with a given response format.
- Parameters:
response_format (Property | None) –
- Return type:
PromptTemplate
- with_tools(tools)#
Returns a copy of the template equipped with the given tools.
- Parameters:
tools (List[Tool] | None) –
- Return type:
PromptTemplate
OutputParser#
- class wayflowcore.outputparser.OutputParser(__metadata_info__=None, id=None)#
Abstract base class for output parsers that process LLM outputs.
- Parameters:
__metadata_info__ (Dict[str, Any] | None) –
id (str | None) –
- abstract parse_output(content)#
Parses the raw LLM output.
- abstract async parse_output_streaming(content)#
Parses the result returned by streaming. By default, it does nothing until the message has been completely generated, but subclasses can implement specific streaming behavior if something should be streamed incrementally.
- Parameters:
content (Any) –
- Return type:
Any
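A sketch of a custom parser. The assumption, based on RegexOutputParser's doctest below, is that parse_output receives and returns a Message:

>>> from wayflowcore.messagelist import Message
>>> from wayflowcore.outputparser import OutputParser
>>> class UppercaseOutputParser(OutputParser):
...     """Hypothetical parser that upper-cases the generated text."""
...     def parse_output(self, content: Message) -> Message:
...         return Message(content=content.content.upper())
...     async def parse_output_streaming(self, content):
...         return content  # keep the default pass-through behavior while streaming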
- class wayflowcore.outputparser.RegexOutputParser(regex_pattern, strict=True, *, id=<factory>, __metadata_info__=<factory>)#
Parses text with a regex, or with several regexes to fill a dictionary.
Examples
>>> import re
>>> from wayflowcore.messagelist import Message
>>> from wayflowcore.outputparser import RegexOutputParser, RegexPattern
>>> RegexOutputParser(
...     regex_pattern=RegexPattern(pattern=r"Solution is: (.*)", flags=re.DOTALL)
... ).parse_output(Message(content="Solution is: Bern is the capital of Switzerland")).content
'Bern is the capital of Switzerland'
>>> RegexOutputParser(
...     regex_pattern={
...         'thought': "THOUGHT: (.*) ACTION:",
...         'action': "ACTION: (.*)",
...     }
... ).parse_output(Message("THOUGHT: blahblah ACTION: doing")).content
'{"thought": "blahblah", "action": "doing"}'
- Parameters:
regex_pattern (str | RegexPattern | Dict[str, str | RegexPattern]) –
strict (bool) –
id (str) –
__metadata_info__ (Dict[str, Any]) –
- parse_output(message)#
Parses the raw LLM output.
- async parse_output_streaming(content)#
Parses the result returned by streaming. By default, it does nothing until the message has been completely generated, but subclasses can implement specific streaming behavior if something should be streamed incrementally.
- Parameters:
content (Any) –
- Return type:
Any
- regex_pattern: str | RegexPattern | Dict[str, str | RegexPattern]#
Regex pattern to use.
- strict: bool = True#
Whether to return an empty string if no match is found, instead of returning the raw text.
- class wayflowcore.outputparser.RegexPattern(pattern, match='first', flags=None)#
Represents a regex pattern and matching options for output parsing.
- Parameters:
pattern (str) –
match (Literal['first', 'last']) –
flags (int | RegexFlag | None) –
- flags: int | RegexFlag | None = None#
Optional regex flags to use (for example, re.DOTALL for multiline matching).
- static from_str(pattern, flags=RegexFlag.DOTALL)#
Creates a RegexPattern from a string pattern and optional flags.
- Parameters:
pattern (str | RegexPattern) –
flags (int | RegexFlag | None) –
- Return type:
RegexPattern
- match: Literal['first', 'last'] = 'first'#
Whether to take the first match or the last match.
- pattern: str#
Regex pattern to match.
- class wayflowcore.outputparser.JsonOutputParser(properties=None, *, id=<factory>, __metadata_info__=<factory>)#
Parses output as JSON, repairing and serializing as needed.
- Parameters:
properties (Dict[str, str] | None) –
id (str) –
__metadata_info__ (Dict[str, Any]) –
- parse_output(content)#
Parses the raw LLM output.
- async parse_output_streaming(content)#
Parses the result returned by streaming. By default, it does nothing until the message has been completely generated, but subclasses can implement specific streaming behavior if something should be streamed incrementally.
- Parameters:
content (Any) –
- Return type:
Any
- properties: Dict[str, str] | None = None#
Dictionary of property names and jq queries used to manipulate the loaded JSON.
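A sketch of extracting a property with a jq query; the message content below is purely illustrative:

>>> from wayflowcore.messagelist import Message
>>> from wayflowcore.outputparser import JsonOutputParser
>>> parser = JsonOutputParser(properties={"city": ".answer.city"})  # one jq query per property
>>> parsed = parser.parse_output(Message(content='{"answer": {"city": "Bern"}}'))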
- class wayflowcore.outputparser.ToolOutputParser(tools=None, *, id=<factory>, __metadata_info__=<factory>)#
Base parser for tool requests.
- Parameters:
tools (List[Tool] | None) –
id (str) –
__metadata_info__ (Dict[str, Any]) –
- parse_output(message)#
Separates the raw output into thoughts and calls, and then parses the calls into ToolRequests.
- async parse_output_streaming(content)#
Parses the result returned by streaming. By default, it does nothing until the message has been completely generated, but subclasses can implement specific streaming behavior if something should be streamed incrementally.
- Parameters:
content (Any) –
- Return type:
Any
- parse_thoughts_and_calls(raw_txt)#
Default function to separate thoughts and tool calls.
- Parameters:
raw_txt (str) –
- Return type:
Tuple[str, str]
- abstract parse_tool_request_from_str(raw_txt)#
Parses a list of ToolRequests from the raw text of the tool calls.
- Parameters:
raw_txt (str) –
- Return type:
List[ToolRequest]
- tools: List[Tool] | None = None#
- with_tools(tools)#
Enhances the tool parser with validation of the parsed tool calls against the given tools.
- Parameters:
tools (List[Tool] | None) –
- Return type:
ToolOutputParser
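A sketch of a concrete subclass; the ToolRequest import location and constructor are assumptions:

>>> import json
>>> from typing import List
>>> from wayflowcore.outputparser import ToolOutputParser
>>> from wayflowcore.tools import ToolRequest  # import path is an assumption
>>> class JsonToolOutputParser(ToolOutputParser):
...     """Hypothetical parser for tool calls emitted as a single JSON object."""
...     def parse_tool_request_from_str(self, raw_txt: str) -> List[ToolRequest]:
...         call = json.loads(raw_txt)
...         # ToolRequest field names below are assumptions
...         return [ToolRequest(name=call["name"], args=call["arguments"])]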
Message transforms#
- class wayflowcore.transforms.MessageTransform#
Abstract base class for message transforms.
Subclasses should implement the __call__ method to transform a list of Message objects and return a new list of Message objects, typically for preprocessing or postprocessing message flows in the system.
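A sketch of a custom transform; the display_only attribute is taken from the Message signature shown above:

>>> from typing import List
>>> from wayflowcore.messagelist import Message
>>> from wayflowcore.transforms import MessageTransform
>>> class DropDisplayOnlyMessagesTransform(MessageTransform):
...     """Hypothetical transform filtering out display-only messages."""
...     def __call__(self, messages: List[Message]) -> List[Message]:
...         return [m for m in messages if not m.display_only]

Such a transform can then be appended to a template with with_additional_post_rendering_transform().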
- class wayflowcore.transforms.CoalesceSystemMessagesTransform#
Transform that merges consecutive system messages at the start of a message list into a single system message. This is useful for reducing redundancy and ensuring that only one system message appears at the beginning of the conversation.
- class wayflowcore.transforms.RemoveEmptyNonUserMessageTransform#
Transform that removes messages which are empty and not from the user.
Any message with empty content and no tool requests, except for user messages, will be filtered out from the message list.
This is useful in case the template contains optional messages, which are discarded if their content is empty (for example, with a string template such as "{% if __PLAN__ %}{{ __PLAN__ }}{% endif %}").
- class wayflowcore.transforms.AppendTrailingSystemMessageToUserMessageTransform#
Transform that appends the content of a trailing system message to the previous user message.
If the last message in the list is a system message and the one before it is a user message, this transform merges the system message content into the user message, reducing message clutter.
This is useful if the underlying LLM does not support system messages at the end.
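The built-in transforms can be wired into a template through the constructor parameters documented above, for example (reusing the assumed PromptTemplate import from the sketches earlier on this page):

>>> from wayflowcore.transforms import (
...     AppendTrailingSystemMessageToUserMessageTransform,
...     CoalesceSystemMessagesTransform,
... )
>>> template = PromptTemplate(
...     messages=[
...         {"role": "system", "content": "You are a helpful assistant."},
...         {"role": "user", "content": "{{question}}"},
...     ],
...     post_rendering_transforms=[
...         CoalesceSystemMessagesTransform(),
...         AppendTrailingSystemMessageToUserMessageTransform(),
...     ],
... )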
Helpers#
- wayflowcore.templates.structuredgeneration.adapt_prompt_template_for_json_structured_generation(prompt_template)#
Adapts a prompt template configured for native structured generation into one that leverages a special system prompt and a JSON output parser.
- Parameters:
prompt_template (PromptTemplate) – The prompt template to adapt
- Returns:
The new prompt template, with the special system prompt and output parsers configured
- Return type:
PromptTemplate
- Raises:
ValueError – If the prompt template is already configured to use non-native structured generation, or the prompt template has no response format.
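A sketch of adapting a natively structured template. Here response_format_property stands for a pre-built Property describing the expected JSON output; its construction is not shown and is assumed:

>>> from wayflowcore.templates.structuredgeneration import (
...     adapt_prompt_template_for_json_structured_generation,
... )
>>> native_template = PromptTemplate.from_string(
...     "Extract the city from: {{text}}"
... ).with_response_format(response_format_property)  # a Property instance, assumed pre-built
>>> non_native_template = adapt_prompt_template_for_json_structured_generation(native_template)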