<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Observability :: Oracle AI Optimizer &amp; Toolkit</title><link>https://oracle.github.io/ai-optimizer/main/observability/index.html</link><description>The AI Optimizer Server can emit OpenTelemetry traces and logs to any OTLP-compatible backend (e.g. SigNoz, Jaeger, Grafana Tempo). Telemetry is opt-in — disabled by default and activated entirely via environment variables.
Telemetry covers HTTP traffic, LangChain/LangGraph orchestration, LLM invocations, and application logs on the server.</description><generator>Hugo</generator><language>en-us</language><atom:link href="https://oracle.github.io/ai-optimizer/main/observability/index.xml" rel="self" type="application/rss+xml"/><item><title>SigNoz Quickstart</title><link>https://oracle.github.io/ai-optimizer/main/observability/signoz/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://oracle.github.io/ai-optimizer/main/observability/signoz/index.html</guid><description>This page walks through standing up a self-hosted SigNoz instance on a single host and pointing the AI Optimizer Server at it. Once complete, server requests appear as traces in the SigNoz UI within seconds of being issued.
The same data path works against any other OTLP backend (Jaeger, Grafana Tempo, vendor-managed receivers) — only the install steps and endpoint URL differ.</description></item><item><title>Reading Traces</title><link>https://oracle.github.io/ai-optimizer/main/observability/reading-traces/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://oracle.github.io/ai-optimizer/main/observability/reading-traces/index.html</guid><description>Once telemetry is flowing into a backend, the question becomes: what do you do with it? This page covers the practical workflow for using traces and correlated logs to understand the AI Optimizer server’s behavior — debugging requests, watching production health, and reasoning about LLM cost.</description></item></channel></rss>