How to Use Advanced Prompting Techniques#

Download Python Script

Python script/notebook for this guide.

Prompt templates how-to script

Prerequisites

This guide assumes familiarity with:

PromptTemplate is a powerful way to configure how to prompt various LLMs. Although WayFlow ships with good defaults for most tasks and LLM providers, some use cases or providers require adapting the prompt format to improve efficiency (e.g., token cost) or performance. To address this, WayFlow introduces the concept of PromptTemplate, which lets you define prompts in a generic way that applies across different scenarios.

This guide will show you how to:

  • Recap how to use variables in prompts

  • Learn how to configure a custom PromptTemplate

  • Learn how to configure a prompt template for an Agent

  • Learn best practices for writing prompts

Basics of prompting#

WayFlow leverages LLMs to perform chat completion: given a list of messages, each of type system (instructions to the LLM), agent (a message generated by the LLM), or user (a message from the user), the LLM returns a new message.

To perform agentic tasks, LLMs are equipped with tools: functions with defined names and arguments that the LLM may decide to call. Prompting the LLM to generate such calls can be done with native_tool_calling, where the LLM provider handles how tools are presented to the model and how tool calls are parsed, or with custom tool calling, where you control how tools are presented to the model and parse the raw text output yourself.

In many use cases, generating raw text is not enough, because raw text is difficult to consume in further pipeline steps. LLMs can instead produce structured output, often through structured_generation. Most LLM providers support this feature natively (the provider takes care of formatting the expected response and parsing the LLM output), but you can also implement it in a custom way, using a specific prompt combined with custom parsing.

The PromptTemplate abstraction is designed to make configuring all these parameters as simple as possible while still allowing maximum customization. Using a PromptTemplate usually involves three steps (a minimal end-to-end sketch follows the list):

  • Create the basic template. This is done using either the constructor (PromptTemplate()) or the helper function from a string (PromptTemplate.from_string()).

  • Equip the template with partial values, tools, or a response_format. This can be done dynamically and is useful when these elements are unknown at template creation time but are reused across multiple generations.

  • Render the template: use template.format() to fill all variables and create a Prompt object, which can then be passed to an LLM (e.g., llm.generate(prompt)). This object contains all the information needed for the LLM generation.
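
For instance, here is a minimal end-to-end sketch of the three steps. It is only illustrative: it assumes the llm configured in the next section and the some_tool tool defined later in this guide.

from wayflowcore.templates import PromptTemplate

# 1. Create the basic template, with a {{city}} variable
template = PromptTemplate.from_string(template="What is the weather in {{city}}?")
# 2. Equip the template with reusable elements, e.g. tools
template = template.with_tools([some_tool])
# 3. Render the template into a Prompt and pass it to the LLM
prompt = template.format(inputs={"city": "Bern"})
message = llm.generate(prompt).message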

Configure the LLM#

WayFlow supports several LLM API providers. The example below configures an OCI GenAI model; adapt it to your provider:

from wayflowcore.models import OCIGenAIModel

if __name__ == "__main__":

    llm = OCIGenAIModel(
        model_id="provider.model-id",
        service_endpoint="https://url-to-service-endpoint.com",
        compartment_id="compartment-id",
        auth_type="API_KEY",
    )
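
To quickly check that the model is reachable, you can run a minimal generation (a sketch using the PromptTemplate API introduced above; the exact reply is model-dependent):

from wayflowcore.templates import PromptTemplate

prompt = PromptTemplate.from_string(template="Reply with the single word: pong").format()
print(llm.generate(prompt).message.content)
# pong  (exact output is model-dependent)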

Basic text prompt with Regex parsing#

One common use case is prompting an LLM and extracting information from its raw text output. This is especially relevant to prompt templates, since the parsing method depends strongly on how the LLM was prompted in the first place.

The example below shows how to write a simple Chain-of-Thought prompt. The RegexOutputParser is used to configure how the output is extracted from the raw text.

import re

from wayflowcore.outputparser import JsonOutputParser, RegexOutputParser, RegexPattern
from wayflowcore.templates import PromptTemplate

prompt_template = PromptTemplate.from_string(
    template="What is the result of 100+(454-3). Think step by step and then give your answer between <result>...</result> delimiters",
    output_parser=RegexOutputParser(
        regex_pattern=RegexPattern(pattern=r"<result>(.*)</result>", flags=re.DOTALL)
    ),
)
prompt = prompt_template.format()  # no inputs needed since the template has no variable
result = llm.generate(prompt).message.content
print(result)
# 551

API reference: PromptTemplate, RegexOutputParser, RegexPattern, JsonOutputParser.

Prompt with chat history#

In many cases—especially when working with agents—it is necessary to format a list of messages within a prompt. There are two ways to achieve this. Assume a list of messages is available:

from wayflowcore.messagelist import Message, MessageType

messages = [
    Message(content="What is the capital of Switzerland?", message_type=MessageType.USER),
    Message(content="The capital of Switzerland is Bern?", message_type=MessageType.AGENT),
    Message(content="Really? I thought it was Zurich?", message_type=MessageType.USER),
]

The first way is to integrate the history of messages directly into the prompt’s list of messages. This helps the model by presenting the entire conversation as a single context. For example, to format the conversation as messages within a template, you can use the CHAT_HISTORY_PLACEHOLDER:

prompt_template = PromptTemplate(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Answer the user questions"},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ]
)

prompt = prompt_template.format(inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: messages})
print(prompt.messages)
# [
#   Message(content='You are a helpful assistant. Answer the user questions', message_type=MessageType.SYSTEM),
#   Message(content='What is the capital of Switzerland?', message_type=MessageType.USER),
#   Message(content='The capital of Switzerland is Bern?', message_type=MessageType.AGENT),
#   Message(content='Really? I thought it was Zurich?', message_type=MessageType.USER)
# ]
result = llm.generate(prompt).message.content
print(result)
# While Zurich is a major city in Switzerland and home to many international organizations, the capital is indeed Bern (also known as Berne).

You can also render the chat history directly inside a message, using the CHAT_HISTORY_PLACEHOLDER_NAME variable (whose underlying value is the string "__CHAT_HISTORY__", as shown in the code snippet below):

prompt_text = """You are a helpful assistant. Answer the user questions.
For context, the conversation was:
{% for msg in __CHAT_HISTORY__ %}
{{ msg.message_type.value }} >> {{msg.content}}
{%- endfor %}

Just answer the user question.
"""
prompt_template = PromptTemplate(messages=[{"role": "system", "content": prompt_text}])

prompt = prompt_template.format(inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: messages})
print(prompt.messages)
# [Message(content="""You are a helpful assistant. Answer the user questions.
# For context, the conversation was:
#
# USER >> What is the capital of Switzerland?
# AGENT >> The capital of Switzerland is Bern?
# USER >> Really? I thought it was Zurich?
#
# Just answer the user question.""", message_type=MessageType.SYSTEM)]
result = llm.generate(prompt).message.content
print(result)
# While Zurich is a major city and financial hub in Switzerland, the capital is indeed Bern.

If you need to filter out part of the chat history before it is rendered in the template, you can use pre-rendering message transforms:

from typing import List
from wayflowcore.transforms import MessageTransform

class OnlyLastChatMessageTransform(MessageTransform):
    def __call__(self, messages: List[Message]) -> List[Message]:
        if len(messages) == 0:
            return []
        return [messages[-1]]

prompt_template = PromptTemplate(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Answer the user questions"},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ],
    pre_rendering_transforms=[OnlyLastChatMessageTransform()],
)
prompt = prompt_template.format(inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: messages})
print(prompt.messages)
# [
#   Message(content='You are a helpful assistant. Answer the user questions', message_type=MessageType.SYSTEM),
#   Message(content='Really? I thought it was Zurich?', message_type=MessageType.USER)
# ]

API reference: MessageTransform.

This ensures that only the last message of the chat history is formatted into the prompt.
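
The same pattern generalizes to other filters. For example, a sliding-window transform keeping only the n most recent messages could be sketched as follows (an illustrative example reusing the MessageTransform interface above, not a built-in WayFlow transform):

class LastNMessagesTransform(MessageTransform):
    """Keeps only the `n` most recent messages of the chat history."""

    def __init__(self, n: int = 4):
        self.n = n

    def __call__(self, messages: List[Message]) -> List[Message]:
        # Slicing handles histories shorter than `n` gracefully
        return messages[-self.n:]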

Configure how to use tools in templates#

Assume a tool is available and should be used for LLM generation.

from typing import Annotated
from wayflowcore.tools import tool

@tool
def some_tool(param1: Annotated[str, "name of the user"]) -> Annotated[str, "tool_output"]:
    """Performs some action"""
    return "some_tool_output"

If the LLM provider supports native_tool_calling, integrating it into templates is straightforward:

template = PromptTemplate(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ],
)
template = template.with_tools([some_tool])
prompt = template.format(
    inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: [Message("call the some_tool tool")]}
)
print(prompt.tools)
# [ServerTool()]
response = llm.generate(prompt).message
print(response)
# Message(content='', message_type=MessageType.TOOL_REQUEST, tool_requests=[ToolRequest(name='some_tool', args={'param1': 'call the some_tool tool'}, tool_request_id='chatcmpl-tool-ae924a4829324411add8760d3ae265bd')])

In this case, the template uses native tool calling, so the prompt carries the tools separately and passes them to the LLM endpoint. The provider parses the output directly and returns a ToolRequest.
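
Note that the LLM only requests the tool call; executing it and feeding the result back is up to the caller (or to a higher-level abstraction such as an Agent). A minimal manual dispatch could be sketched as follows (the handlers mapping is an illustrative stand-in, not a WayFlow API):

# Map tool names to plain Python callables (hypothetical handlers)
handlers = {"some_tool": lambda param1: f"some_tool called for {param1}"}

for tool_request in response.tool_requests or []:
    tool_output = handlers[tool_request.name](**tool_request.args)
    print(tool_request.name, "->", tool_output)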

Sometimes, the provider’s native tool calling might not be available or may not behave as needed (for example, when performing Chain-of-Thought reasoning with native Llama models, which do not support this feature). In such cases, tool calling can be configured directly within the prompt template:

from wayflowcore.models.llmgenerationconfig import LlmGenerationConfig
from wayflowcore.templates.reacttemplates import (
    REACT_SYSTEM_TEMPLATE,
    ReactToolOutputParser,
    _ReactMergeToolRequestAndCallsTransform,
)
from wayflowcore.transforms import (
    CoalesceSystemMessagesTransform,
    RemoveEmptyNonUserMessageTransform,
)

REACT_CHAT_TEMPLATE = PromptTemplate(
    messages=[
        {"role": "system", "content": REACT_SYSTEM_TEMPLATE},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ],
    native_tool_calling=False,
    post_rendering_transforms=[
        _ReactMergeToolRequestAndCallsTransform(),
        CoalesceSystemMessagesTransform(),
        RemoveEmptyNonUserMessageTransform(),
    ],
    output_parser=ReactToolOutputParser(),
    generation_config=LlmGenerationConfig(stop=["## Observation"]),
)
template = REACT_CHAT_TEMPLATE.with_tools([some_tool])
prompt = template.format(
    inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: [Message("call the some_tool tool")]}
)
print(prompt.tools)
# [ServerTool()]
response = llm.generate(prompt).message
print(response)
# Message(content='', message_type=MessageType.TOOL_REQUEST, tool_requests=[ToolRequest(name='some_tool', args={'param1': 'call the some_tool tool'}, tool_request_id='chatcmpl-tool-69c7e27e55474501be0dfc2509e5d4f2')])

For custom tool calling, the important parameters to configure are:

  • native_tool_calling=False: indicates that the tools will be formatted into the prompt rather than passed separately. This requires the __TOOLS__ placeholder in one of the messages (here provided by REACT_SYSTEM_TEMPLATE).

  • post_rendering_transforms: transformations applied to the messages just before passing them to the LLM. These allow adaptations specific to a tool-calling technique or to an LLM provider’s requirements (a custom example is sketched after this list). The following post-transforms are used here:

    • _ReactMergeToolRequestAndCallsTransform formats tool calls and tool requests into standard user or agent messages.

    • CoalesceSystemMessagesTransform allows merging consecutive system messages into a single system message, as required by some LLM providers.

    • RemoveEmptyNonUserMessageTransform removes empty agent or system messages, since some LLMs do not support them.

  • output_parser: specifies how the raw LLM text output is parsed into a tool request. Several examples can be found in the wayflowcore.templates subpackage.

  • generation_config: for this ReAct-style template, specific generation parameters are added to help reduce hallucinations.
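
As a concrete illustration of a custom post-rendering transform, here is a sketch in the spirit of CoalesceSystemMessagesTransform, but merging consecutive user messages instead (an illustrative example reusing the Message, MessageType, and MessageTransform imports from earlier, not a built-in transform):

class MergeConsecutiveUserMessagesTransform(MessageTransform):
    """Merges runs of consecutive user messages into a single user message."""

    def __call__(self, messages: List[Message]) -> List[Message]:
        merged: List[Message] = []
        for msg in messages:
            if (
                merged
                and msg.message_type == MessageType.USER
                and merged[-1].message_type == MessageType.USER
            ):
                # Fold this message into the previous user message
                merged[-1] = Message(
                    content=merged[-1].content + "\n" + msg.content,
                    message_type=MessageType.USER,
                )
            else:
                merged.append(msg)
        return merged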

Configure how to use structured generation in templates#

Structured generation can be handled either natively (if the LLM provider supports it) or with custom methods. Assume the goal is to generate the following output:

from wayflowcore.property import ObjectProperty, StringProperty

output = ObjectProperty(
    name="output",
    description="information about a person",
    properties={
        "name": StringProperty(description="name of the person"),
        "age": StringProperty(description="age of the person"),
    },
)

API reference: ObjectProperty, StringProperty.

With native structured generation#

Some providers allow passing the desired output format separately and guarantee that the generated output follows it. If the LLM provider supports this, using a template is straightforward:

template = PromptTemplate.from_string(
    template="Extract information about a person. The person is 65 years old, named Johnny",
    response_format=output,
)
prompt = template.format()
print(prompt.response_format)
# ObjectProperty(...)
response = llm.generate(prompt).message
print(response)
# Message(content='{"name": "Johnny", "age": "65"}', message_type=MessageType.AGENT)

With custom structured generation#

If the LLM provider does not support native structured generation, you can manually prompt the LLM to produce output in the desired format:

text_template = """Extract information about a person. The person is 65 years old, named Johnny.
Just return a json document that respects this JSON Schema:
{{__RESPONSE_FORMAT__.to_json_schema() | tojson }}

Reminder: only output the required json document, no need to repeat the title of the description, just the properties are required!
"""

template = PromptTemplate(
    messages=[{"role": "user", "content": text_template}],
    native_structured_generation=False,
    output_parser=JsonOutputParser(),
    response_format=output,
)
prompt = template.format()
# ^no input needed since __RESPONSE_FORMAT__ is filled with `response_format`
print(prompt.response_format)
# None  # it is not passed separately, but will be taken care of by the output parser
print(prompt.messages)
# [Message(content="""Extract information about a person. The person is 65 years old, named Johnny.
# Just return a json document that respects this JSON Schema:
# {"type": "object", "properties": {"name": {"type": "string", "description": "name of the person"}, "age": {"type": "string", "description": "age of the person"}}, "title": "output", "description": "information about a person"}
#
# Reminder: only output the required json document, no need to repeat the title of the description, just the properties are required!""", message_type=MessageType.USER)]
response = llm.generate(prompt).message
print(response)
# Message(content='{"name": "Johnny", "age": "65"}', message_type=MessageType.AGENT)

The main parameters to configure are:

  • native_structured_generation=False: prevents the response_format from being passed separately. If a __RESPONSE_FORMAT__ placeholder is used, the response_format will be inserted there.

  • output_parser: specifies how the raw LLM text output is parsed into the expected object. In this example, a simple output parser is used that can repair malformed JSON output; the parsed result can then be consumed by downstream steps, as sketched after this list.
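
Since the parsed output is a plain JSON document, downstream pipeline steps can consume it directly. For example, with the response shown above:

import json

person = json.loads(response.content)
print(person["name"], person["age"])
# Johnny 65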

You can also use a custom output parser to enforce the expected format. In such cases, it is beneficial to include the response_format in the template to clearly indicate the structured output format being generated. For example, the content of the chain-of-thought can be parsed as follows:

prompt_template = PromptTemplate.from_string(
    template="What is the result of 100+(454-3). Think step by step and then give your answer between <result>...</result> delimiters",
    output_parser=RegexOutputParser(
        regex_pattern={
            "thoughts": RegexPattern(pattern=r"(.*)<result>", flags=re.DOTALL),
            "result": RegexPattern(pattern=r"<result>(.*)</result>", flags=re.DOTALL),
        }
    ),
    response_format=ObjectProperty(
        properties={
            "thoughts": StringProperty(description="step by step thinking of the LLM"),
            "result": StringProperty(description="result of the computation"),
        }
    ),
)
prompt = prompt_template.format()  # no inputs needed since the template has no variable
result = llm.generate(prompt).message.content
print(result)
# {"thoughts": "To solve the expression step by step:\n\n1. Evaluate the expression inside the parentheses: 454 - 3 = 451\n2. Add the result to 100: 100 + 451 = 551\n\nSo, the final result is:\n\n", "result": "551"}

Tip

WayFlow also provides a helper function to set up a simple JSON structured-generation prompt and the associated output parser from an existing prompt template that leverages native structured generation. Check out the helper method’s API Reference.

Agent Spec Exporting/Loading#

You can export the assistant configuration to its Agent Spec configuration using the AgentSpecExporter.

from wayflowcore.agent import Agent
from wayflowcore.agentspec import AgentSpecExporter, AgentSpecLoader
from wayflowcore.agentspec.components.template import (
    prompttemplate_deserialization_plugin,
    prompttemplate_serialization_plugin,
)

assistant = Agent(llm=llm, agent_template=prompt_template)
serialized_assistant = AgentSpecExporter(plugins=[prompttemplate_serialization_plugin]).to_json(assistant)

Here is what the Agent Spec representation looks like:
{
  "component_type": "ExtendedAgent",
  "id": "0d75db5b-68bd-42b5-8a9f-79f2f3f5f85c",
  "name": "agent_6a2dfe98",
  "description": "",
  "metadata": {
    "__metadata_info__": {
      "name": "agent_6a2dfe98",
      "description": ""
    }
  },
  "inputs": [],
  "outputs": [],
  "llm_config": {
    "component_type": "VllmConfig",
    "id": "8534806a-5201-4c59-afee-e8a46ac8a17d",
    "name": "LLAMA_MODEL_ID",
    "description": null,
    "metadata": {
      "__metadata_info__": {}
    },
    "default_generation_parameters": null,
    "url": "LLAMA_API_URL",
    "model_id": "LLAMA_MODEL_ID"
  },
  "system_prompt": "",
  "tools": [],
  "toolboxes": [],
  "context_providers": null,
  "can_finish_conversation": false,
  "max_iterations": 10,
  "initial_message": "Hi! How can I help you?",
  "caller_input_mode": "always",
  "agents": [],
  "flows": [],
  "agent_template": {
    "component_type": "PluginPromptTemplate",
    "id": "d3a8c0b1-722a-44a1-a2a7-c5aa727742e0",
    "name": "prompt_template",
    "description": "",
    "metadata": {
      "__metadata_info__": {}
    },
    "messages": [
      {
        "role": "user",
        "contents": [
          {
            "type": "text",
            "content": "What is the result of 100+(454-3). Think step by step and then give your answer between <result>...</result> delimiters"
          }
        ],
        "tool_requests": null,
        "tool_result": null,
        "display_only": false,
        "sender": null,
        "recipients": [],
        "time_created": "2025-08-15T13:31:44.684406+00:00",
        "time_updated": "2025-08-15T13:31:44.684406+00:00"
      }
    ],
    "output_parser": {
      "component_type": "PluginRegexOutputParser",
      "id": "09cdd631-0189-44bd-800f-4e924ab1ff15",
      "name": "regex_outputparser",
      "description": null,
      "metadata": {
        "__metadata_info__": {}
      },
      "regex_pattern": {
        "result": {
          "pattern": "<result>(.*)</result>",
          "match": "first",
          "flags": 16
        },
        "thoughts": {
          "pattern": "(.*)<result>",
          "match": "first",
          "flags": 16
        }
      },
      "strict": true,
      "component_plugin_name": "OutputParserPlugin",
      "component_plugin_version": "25.4.0.dev0"
    },
    "inputs": [],
    "pre_rendering_transforms": null,
    "post_rendering_transforms": null,
    "tools": null,
    "native_tool_calling": true,
    "response_format": {
      "type": "object",
      "properties": {
        "result": {
          "description": "result of the computation",
          "type": "string"
        },
        "thoughts": {
          "description": "step by step thinking of the LLM",
          "type": "string"
        }
      }
    },
    "native_structured_generation": true,
    "generation_config": null,
    "component_plugin_name": "PromptTemplatePlugin",
    "component_plugin_version": "25.4.0.dev0"
  },
  "component_plugin_name": "AgentPlugin",
  "component_plugin_version": "25.4.0.dev0",
  "agentspec_version": "25.4.1"
}

You can then load the configuration back to an assistant using the AgentSpecLoader.

new_agent: Agent = AgentSpecLoader(plugins=[prompttemplate_deserialization_plugin]).load_json(serialized_assistant)

Note

This guide uses the following extension/plugin Agent Spec components:

  • PluginPromptTemplate

  • PluginRegexOutputParser

  • ExtendedAgent

See the list of available Agent Spec extension/plugin components in the API Reference.

Note

By default, all WayFlow serialization/deserialization plugins are used when loading or exporting from/to Agent Spec. Passing a list of plugins overrides the default ones.
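
For example, omitting the plugins argument exports and loads with the default plugins (a sketch reusing the assistant from above):

serialized_assistant = AgentSpecExporter().to_json(assistant)
new_agent: Agent = AgentSpecLoader().load_json(serialized_assistant)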

Next steps#

Having learned how to use the PromptTemplate, you may now proceed to using it in PromptExecutionStep or Agents.

Full code#

Click on the card at the top of this page to download the full code for this guide or copy the code below.

# Copyright © 2025 Oracle and/or its affiliates.
#
# This software is under the Universal Permissive License
# (UPL) 1.0 (LICENSE-UPL or https://oss.oracle.com/licenses/upl) or Apache License
# 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0), at your option.

# %%[markdown]
# Code Example - How to Use Advanced Prompting Techniques
# -------------------------------------------------------

# How to use:
# Create a new Python virtual environment and install the latest WayFlow version.
# ```bash
# python -m venv venv-wayflowcore
# source venv-wayflowcore/bin/activate
# pip install --upgrade pip
# pip install "wayflowcore==26.1"
# ```

# You can now run the script
# 1. As a Python file:
# ```bash
# python howto_prompttemplate.py
# ```
# 2. As a Notebook (in VSCode):
# When viewing the file,
#  - press the keys Ctrl + Enter to run the selected cell
#  - or Shift + Enter to run the selected cell and move to the cell below

# %%[markdown]
## Configure your LLM

# %%
from wayflowcore.models import VllmModel

llm = VllmModel(
    model_id="model-id",
    host_port="VLLM_HOST_PORT",
)

# %%[markdown]
## Basic text prompt with Regex parsing

# %%
import re

from wayflowcore.outputparser import JsonOutputParser, RegexOutputParser, RegexPattern
from wayflowcore.templates import PromptTemplate

prompt_template = PromptTemplate.from_string(
    template="What is the result of 100+(454-3). Think step by step and then give your answer between <result>...</result> delimiters",
    output_parser=RegexOutputParser(
        regex_pattern=RegexPattern(pattern=r"<result>(.*)</result>", flags=re.DOTALL)
    ),
)
prompt = prompt_template.format()  # no inputs needed since the template has no variable
result = llm.generate(prompt).message.content
print(result)
# 551

# %%[markdown]
## Prompt with chat history

# %%
from wayflowcore.messagelist import Message, MessageType

messages = [
    Message(content="What is the capital of Switzerland?", message_type=MessageType.USER),
    Message(content="The capital of Switzerland is Bern?", message_type=MessageType.AGENT),
    Message(content="Really? I thought it was Zurich?", message_type=MessageType.USER),
]

# %%[markdown]
### As inlined messages

# %%
prompt_template = PromptTemplate(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Answer the user questions"},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ]
)

prompt = prompt_template.format(inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: messages})
print(prompt.messages)
# [
#   Message(content='You are a helpful assistant. Answer the user questions', message_type=MessageType.SYSTEM),
#   Message(content='What is the capital of Switzerland?', message_type=MessageType.USER),
#   Message(content='The capital of Switzerland is Bern?', message_type=MessageType.AGENT),
#   Message(content='Really? I thought it was Zurich?', message_type=MessageType.USER)
# ]
result = llm.generate(prompt).message.content
print(result)
# While Zurich is a major city in Switzerland and home to many international organizations, the capital is indeed Bern (also known as Berne).

# %%[markdown]
### In the system prompt

# %%
prompt_text = """You are a helpful assistant. Answer the user questions.
For context, the conversation was:
{% for msg in __CHAT_HISTORY__ %}
{{ msg.message_type.value }} >> {{msg.content}}
{%- endfor %}

Just answer the user question.
"""
prompt_template = PromptTemplate(messages=[{"role": "system", "content": prompt_text}])

prompt = prompt_template.format(inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: messages})
print(prompt.messages)
# [Message(content="""You are a helpful assistant. Answer the user questions.
# For context, the conversation was:
#
# USER >> What is the capital of Switzerland?
# AGENT >> The capital of Switzerland is Bern?
# USER >> Really? I thought it was Zurich?
#
# Just answer the user question.""", message_type=MessageType.SYSTEM)]
result = llm.generate(prompt).message.content
print(result)
# While Zurich is a major city and financial hub in Switzerland, the capital is indeed Bern.

# %%[markdown]
### With message transform

# %%
from typing import List
from wayflowcore.transforms import MessageTransform

class OnlyLastChatMessageTransform(MessageTransform):
    def __call__(self, messages: List[Message]) -> List[Message]:
        if len(messages) == 0:
            return []
        return [messages[-1]]

prompt_template = PromptTemplate(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Answer the user questions"},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ],
    pre_rendering_transforms=[OnlyLastChatMessageTransform()],
)
prompt = prompt_template.format(inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: messages})
print(prompt.messages)
# [
#   Message(content='You are a helpful assistant. Answer the user questions', message_type=MessageType.SYSTEM),
#   Message(content='Really? I thought it was Zurich?', message_type=MessageType.USER)
# ]

# %%[markdown]
## Configure how to use tools in templates

# %%
from typing import Annotated
from wayflowcore.tools import tool

@tool
def some_tool(param1: Annotated[str, "name of the user"]) -> Annotated[str, "tool_output"]:
    """Performs some action"""
    return "some_tool_output"

# %%[markdown]
### With native tool calling

# %%
template = PromptTemplate(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ],
)
template = template.with_tools([some_tool])
prompt = template.format(
    inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: [Message("call the some_tool tool")]}
)
print(prompt.tools)
# [ServerTool()]
response = llm.generate(prompt).message
print(response)
# Message(content='', message_type=MessageType.TOOL_REQUEST, tool_requests=[ToolRequest(name='some_tool', args={'param1': 'call the some_tool tool'}, tool_request_id='chatcmpl-tool-ae924a4829324411add8760d3ae265bd')])

# %%[markdown]
### With custom tool calling

# %%
from wayflowcore.models.llmgenerationconfig import LlmGenerationConfig
from wayflowcore.templates.reacttemplates import (
    REACT_SYSTEM_TEMPLATE,
    ReactToolOutputParser,
    _ReactMergeToolRequestAndCallsTransform,
)
from wayflowcore.transforms import (
    CoalesceSystemMessagesTransform,
    RemoveEmptyNonUserMessageTransform,
)

REACT_CHAT_TEMPLATE = PromptTemplate(
    messages=[
        {"role": "system", "content": REACT_SYSTEM_TEMPLATE},
        PromptTemplate.CHAT_HISTORY_PLACEHOLDER,
    ],
    native_tool_calling=False,
    post_rendering_transforms=[
        _ReactMergeToolRequestAndCallsTransform(),
        CoalesceSystemMessagesTransform(),
        RemoveEmptyNonUserMessageTransform(),
    ],
    output_parser=ReactToolOutputParser(),
    generation_config=LlmGenerationConfig(stop=["## Observation"]),
)
template = REACT_CHAT_TEMPLATE.with_tools([some_tool])
prompt = template.format(
    inputs={PromptTemplate.CHAT_HISTORY_PLACEHOLDER_NAME: [Message("call the some_tool tool")]}
)
print(prompt.tools)
# [ServerTool()]
response = llm.generate(prompt).message
print(response)
# Message(content='', message_type=MessageType.TOOL_REQUEST, tool_requests=[ToolRequest(name='some_tool', args={'param1': 'call the some_tool tool'}, tool_request_id='chatcmpl-tool-69c7e27e55474501be0dfc2509e5d4f2')])

# %%[markdown]
## Configure how to use structured generation in templates

# %%
from wayflowcore.property import ObjectProperty, StringProperty

output = ObjectProperty(
    name="output",
    description="information about a person",
    properties={
        "name": StringProperty(description="name of the person"),
        "age": StringProperty(description="age of the person"),
    },
)

# %%[markdown]
### With native structured generation

# %%
template = PromptTemplate.from_string(
    template="Extract information about a person. The person is 65 years old, named Johnny",
    response_format=output,
)
prompt = template.format()
print(prompt.response_format)
# ObjectProperty(...)
response = llm.generate(prompt).message
print(response)
# Message(content='{"name": "Johnny", "age": "65"}', message_type=MessageType.AGENT)

# %%[markdown]
### With custom structured generation

# %%
text_template = """Extract information about a person. The person is 65 years old, named Johnny.
Just return a json document that respects this JSON Schema:
{{__RESPONSE_FORMAT__.to_json_schema() | tojson }}

Reminder: only output the required json document, no need to repeat the title of the description, just the properties are required!
"""

template = PromptTemplate(
    messages=[{"role": "user", "content": text_template}],
    native_structured_generation=False,
    output_parser=JsonOutputParser(),
    response_format=output,
)
prompt = template.format()
# ^no input needed since __RESPONSE_FORMAT__ is filled with `response_format`
print(prompt.response_format)
# None  # it is not passed separately, but will be taken care of by the output parser
print(prompt.messages)
# [Message(content="""Extract information about a person. The person is 65 years old, named Johnny.
# Just return a json document that respects this JSON Schema:
# {"type": "object", "properties": {"name": {"type": "string", "description": "name of the person"}, "age": {"type": "string", "description": "age of the person"}}, "title": "output", "description": "information about a person"}
#
# Reminder: only output the required json document, no need to repeat the title of the description, just the properties are required!""", message_type=MessageType.USER)]
response = llm.generate(prompt).message
print(response)
# Message(content='{"name": "Johnny", "age": "65"}', message_type=MessageType.AGENT)

# %%[markdown]
### With additional output parser

# %%
prompt_template = PromptTemplate.from_string(
    template="What is the result of 100+(454-3). Think step by step and then give your answer between <result>...</result> delimiters",
    output_parser=RegexOutputParser(
        regex_pattern={
            "thoughts": RegexPattern(pattern=r"(.*)<result>", flags=re.DOTALL),
            "result": RegexPattern(pattern=r"<result>(.*)</result>", flags=re.DOTALL),
        }
    ),
    response_format=ObjectProperty(
        properties={
            "thoughts": StringProperty(description="step by step thinking of the LLM"),
            "result": StringProperty(description="result of the computation"),
        }
    ),
)
prompt = prompt_template.format()  # no inputs needed since the template has no variable
result = llm.generate(prompt).message.content
print(result)
# {"thoughts": "To solve the expression step by step:\n\n1. Evaluate the expression inside the parentheses: 454 - 3 = 451\n2. Add the result to 100: 100 + 451 = 551\n\nSo, the final result is:\n\n", "result": "551"}

# %%[markdown]
## Export config to Agent Spec

# %%
from wayflowcore.agent import Agent
from wayflowcore.agentspec import AgentSpecExporter, AgentSpecLoader
from wayflowcore.agentspec.components.template import (
    prompttemplate_deserialization_plugin,
    prompttemplate_serialization_plugin,
)

assistant = Agent(llm=llm, agent_template=prompt_template)
serialized_assistant = AgentSpecExporter(plugins=[prompttemplate_serialization_plugin]).to_json(assistant)

# %%[markdown]
## Load Agent Spec config

# %%
new_agent: Agent = AgentSpecLoader(plugins=[prompttemplate_deserialization_plugin]).load_json(serialized_assistant)