<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Agents and Flows :: Oracle AI Optimizer &amp; Toolkit</title><link>https://oracle.github.io/ai-optimizer/main/agents/index.html</link><description>The AI Optimizer uses Oracle AgentSpec to define its AI agents and flows as portable, serializable configurations. These configurations are loaded into LangGraph for execution.
What is AgentSpec? AgentSpec is Oracle’s Open Agent Specification. It provides a standard way to define two primary building blocks: Agents and Flows.
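As a purely illustrative sketch (not the real AgentSpec schema — the class and field names here are assumptions), the two building blocks can be pictured as serializable configurations that reduce to plain dicts:

```python
# Hedged, illustrative sketch of the two building blocks: an Agent
# (system prompt, LLM config, tools) and a Flow (ordered nodes), both
# serializable to plain dicts so they stay portable across runtimes.
from dataclasses import dataclass, field, asdict

@dataclass
class AgentConfig:
    name: str
    system_prompt: str
    llm_model: str
    tools: list = field(default_factory=list)

@dataclass
class FlowConfig:
    name: str
    nodes: list = field(default_factory=list)  # node names, in execution order

agent = AgentConfig(
    name="nl2sql",
    system_prompt="Answer questions by querying the database.",
    llm_model="gpt-4o-mini",
    tools=["sqlcl_run-sql"],
)
flow = FlowConfig(name="vecsearch_flow", nodes=["rephrase", "retrieve", "grade"])

# Both serialize to plain dicts, so they can be stored, shipped, and reloaded.
print(asdict(agent)["tools"])
print(asdict(flow)["nodes"])
```

The point of the sketch is only that both kinds of definition are data, not code, which is what makes them loadable into an execution runtime such as LangGraph.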
Agents are LLM-powered conversational assistants, optionally equipped with tools (e.g., the ReAct pattern). They are defined with a system prompt, an LLM configuration, and a set of tools.</description><generator>Hugo</generator><language>en-us</language><atom:link href="https://oracle.github.io/ai-optimizer/main/agents/index.xml" rel="self" type="application/rss+xml"/><item><title>Combined Session</title><link>https://oracle.github.io/ai-optimizer/main/agents/combined/index.html</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://oracle.github.io/ai-optimizer/main/agents/combined/index.html</guid><description>The Combined session is an orchestrator that routes queries to VecSearch, NL2SQL, or both. Unlike the other agents and flows, it does not have an AgentSpec definition — it coordinates existing sub-sessions at runtime.
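The routing described on this page — classify to exactly one of nl2sql, vecsearch, or both, default to both on anything unrecognized, and synthesize when both run — can be sketched roughly as follows. The function names and stubs are assumptions for illustration, not the AI Optimizer's actual API:

```python
# Illustrative sketch of the orchestrator's routing logic, with the LLM
# classification call stubbed out as a plain callable.
def classify(question, llm_call):
    """Return 'nl2sql', 'vecsearch', or 'both'; default to 'both'."""
    answer = llm_call(question).strip().lower()
    if answer in ("nl2sql", "vecsearch", "both"):
        return answer
    return "both"  # unrecognized classifier output falls back to running both

def route(question, llm_call, run_nl2sql, run_vecsearch, synthesize):
    decision = classify(question, llm_call)
    if decision == "nl2sql":
        return run_nl2sql(question)
    if decision == "vecsearch":
        return run_vecsearch(question)
    # 'both': run each sub-session, then synthesize a combined answer
    return synthesize(run_nl2sql(question), run_vecsearch(question))

# Stubbed usage:
result = route(
    "Which regions had the most orders?",
    llm_call=lambda q: "nl2sql",
    run_nl2sql=lambda q: "SQL answer",
    run_vecsearch=lambda q: "vector answer",
    synthesize=lambda a, b: a + " / " + b,
)
print(result)  # SQL answer
```

In the real orchestrator the two sub-sessions run in parallel on the both path; the sequential calls above are only for readability.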
flowchart TD query["User query"] --&gt; classify["LLM classification call"] classify --&gt; route{"Route decision"} route --&gt;|nl2sql| nl2sql["NL2SQL Agent"] route --&gt;|vecsearch| vecsearch["VecSearch Flow"] route --&gt;|both| parallel["Run both in parallel"] nl2sql --&gt; answer_sql["Return NL2SQL answer"] vecsearch --&gt; answer_vs["Return VecSearch answer"] parallel --&gt; nl2sql_p["NL2SQL Agent"] parallel --&gt; vecsearch_p["VecSearch Flow"] nl2sql_p --&gt; synth["LLM synthesizes results"] vecsearch_p --&gt; synth synth --&gt; answer_both["Return combined answer"] The classifier prompts the LLM to respond with exactly one word — nl2sql, vecsearch, or both — based on the user’s question. Unrecognized responses default to both. When routed to a single tool, the query is dispatched directly to the corresponding sub-session. When routed to both, the sub-sessions run in parallel. The results are then fed into a synthesis LLM call to produce a unified response. The system prompt is fetched from the MCP server (optimizer_tools-default). If unavailable, a default instruction is used. Token usage from the classifier, sub-sessions, and synthesis calls is aggregated. Requires both a configured VecSearch flow and an NL2SQL agent to be available.</description></item><item><title>LLM-Only Agent</title><link>https://oracle.github.io/ai-optimizer/main/agents/llm-only/index.html</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://oracle.github.io/ai-optimizer/main/agents/llm-only/index.html</guid><description>The LLM-Only agent provides a pure conversational experience with no tools or external data sources. It is the simplest agent in the AI Optimizer.
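The stateful-versus-stateless turn handling and the rollback of failed turns described on this page can be modeled with a toy session like the one below. This is a minimal sketch under assumed names, not the AI Optimizer's implementation:

```python
# Toy model of turn handling: history-enabled turns append to persistent
# state, stateless turns never touch it, and a failed turn rolls back the
# user message and any partial response.
class ChatSession:
    def __init__(self, chat_history=True):
        self.chat_history = chat_history
        self.messages = []

    def turn(self, user_text, llm_call):
        snapshot = len(self.messages)
        try:
            if self.chat_history:
                self.messages.append({"role": "user", "content": user_text})
                reply = llm_call(self.messages)
                self.messages.append({"role": "assistant", "content": reply})
            else:
                # stateless: the turn is built from scratch and discarded
                reply = llm_call([{"role": "user", "content": user_text}])
            return reply
        except Exception:
            # roll back everything this turn added to the history
            del self.messages[snapshot:]
            raise

session = ChatSession(chat_history=True)
session.turn("hello", llm_call=lambda msgs: "hi there")
print(len(session.messages))  # 2
try:
    session.turn("boom", llm_call=lambda msgs: 1 / 0)
except ZeroDivisionError:
    pass
print(len(session.messages))  # still 2: the failed turn was rolled back
```

The snapshot-and-truncate pattern is one simple way to guarantee that a failed turn cannot corrupt subsequent turns.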
flowchart TD prompt["Fetch system prompt (optimizer_basic-default)"] --&gt; build["Build AgentSpec (no tools)"] build --&gt; load["Load into runtime"] load --&gt; session["Create chat session"] session --&gt; input["User message"] input --&gt; history{"Chat history enabled?"} history --&gt;|Yes| stateful["Append to persistent conversation"] history --&gt;|No| stateless["Stateless turn (no history impact)"] stateful --&gt; execute["Execute LLM call"] stateless --&gt; execute execute --&gt; reply["Return response"] The system prompt is fetched from the MCP server (optimizer_basic-default). If unavailable, a default instruction is used. build_llm_only_agentspec creates a portable AgentSpec Agent with no tools — pure LLM conversation. The session manages conversation state. When chat_history is enabled, turns are appended to persistent history. When disabled, turns are stateless and do not affect history. Failed turns are fully rolled back — the user message and any partial response are removed so subsequent turns are not corrupted.</description></item><item><title>NL2SQL Agent</title><link>https://oracle.github.io/ai-optimizer/main/agents/nl2sql/index.html</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://oracle.github.io/ai-optimizer/main/agents/nl2sql/index.html</guid><description>The NL2SQL agent enables natural language queries against structured data in Oracle Database via the SQLcl MCP server.
flowchart TD prompt["Fetch system prompt (optimizer_nl2sql-tools-default)"] --&gt; build["Build AgentSpec with MCPToolBox"] build --&gt; load["Load into runtime"] load --&gt; session["Create NL2SQL session with DB connection context"] session --&gt; input["User query"] input --&gt; agent["Agent autonomously selects and calls SQLcl MCP tools"] agent --&gt; reply["Return natural language answer"] NL2SQL uses an Agent with dynamic MCP tool discovery — it autonomously decides which SQLcl tools to call using the ReAct pattern, rather than following a fixed pipeline. build_nl2sql_agentspec creates a portable AgentSpec Agent with an MCPToolBox that discovers available SQLcl tools at runtime (e.g. sqlcl_connect, sqlcl_schema-information, sqlcl_run-sql). The session augments the agent’s system prompt with the configured database connection name, model, and thread ID so the LLM passes them to sqlcl_* tool calls. The system prompt is fetched from the MCP server (optimizer_nl2sql-tools-default). If unavailable, a default instruction is used. Requires a configured SQLcl MCP Server and a database connection.</description></item><item><title>VecSearch Flow</title><link>https://oracle.github.io/ai-optimizer/main/agents/vecsearch/index.html</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://oracle.github.io/ai-optimizer/main/agents/vecsearch/index.html</guid><description>The VecSearch flow implements a Retrieval-Augmented Generation (RAG) pipeline with conditional nodes for query rephrasing, store discovery, retrieval, and document grading. It is exposed through the AgentSpec REST API under the name vecsearch_flow.
flowchart TD prompt["Fetch system prompt (optimizer_vs-tools-default)"] --&gt; build["Build AgentSpec Flow"] build --&gt; load["Load into runtime"] load --&gt; session["Create flow session"] session --&gt; input["User query"] input --&gt; rephrase{"Rephrase enabled?"} rephrase --&gt;|Yes| rephrase_node["Rephrase query (optimizer_vs-rephrase)"] rephrase --&gt;|No| discovery rephrase_node --&gt; discovery{"Discovery enabled?"} discovery --&gt;|Yes| discovery_node["Discover vector stores (optimizer_vs-discovery)"] discovery --&gt;|No| retriever discovery_node --&gt; retriever["Retrieve documents (optimizer_vs-retriever)"] retriever --&gt; grade{"Grading enabled?"} grade --&gt;|Yes| grade_node["Grade relevance (optimizer_vs-grade)"] grade --&gt;|No| format grade_node --&gt; format["LLM generates answer from retrieved documents"] format --&gt; finish["Return answer"] The system prompt is fetched from the MCP server (optimizer_vs-tools-default). If unavailable, a default instruction is used. build_vecsearch_flow creates a portable AgentSpec Flow with conditional nodes based on the user’s vector search settings (rephrase, discovery, grade). Rephrase (optimizer_vs-rephrase) rewrites the user question using conversation history to improve retrieval quality. Discovery (optimizer_vs-discovery) lists available vector stores when AutoRAG is enabled, allowing the LLM to select the most relevant store. Retriever (optimizer_vs-retriever) performs the core vector similarity search and always runs. Grade (optimizer_vs-grade) filters retrieved documents for relevance before answer generation. The final LLM node generates the answer using the system prompt and the retrieved (or graded) documents. Unlike NL2SQL, VecSearch does not require a database connection name — it operates entirely through vector search MCP tools.</description></item></channel></rss>