How to Use LLMs from Different LLM Providers#

Agent Spec supports several LLM providers, each with its own LlmConfig component. The available configurations are:

  • OciGenAiConfig

  • OpenAiConfig

  • VllmConfig

  • OllamaConfig

Configuration options are passed directly to the respective class constructor. This guide shows how to configure LLMs from each provider, with examples and notes on usage.

OciGenAiConfig#

OciGenAiConfig refers to models served by OCI Generative AI.

Parameters

model_id: str#

Name of the model to use. A list of the available models is given in the Oracle OCI documentation under the Model Retirement Dates (On-Demand Mode) section.

compartment_id: str#

The OCID (Oracle Cloud Identifier) of a compartment within your tenancy.

serving_mode: str#

The mode in which the specified model is served (see the sketch after this list):

  • ON_DEMAND: the model is hosted in a shared environment;

  • DEDICATED: the model is deployed in a customer-dedicated environment.
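
A minimal sketch of selecting the serving mode, using placeholder IDs; it assumes serving_mode is passed as a plain string to the constructor, matching the parameter description above:

from pyagentspec.llms import OciGenAiConfig

# Hypothetical sketch: request a customer-dedicated serving environment.
llm = OciGenAiConfig(
    name="oci-genai-dedicated",
    model_id="xai.grok-3",
    compartment_id="ocid1.compartment.oc1..<compartment_id>",
    serving_mode="DEDICATED",  # use "ON_DEMAND" for the shared environment
)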

default_generation_parameters: dict, null#

Default parameters for text generation with this model.

Example:

default_generation_parameters = LlmGenerationConfig(max_tokens=256, temperature=0.8)

client_config: OciClientConfig, null#

OCI client configuration used to authenticate with the OCI service. See the examples below for usage and more information.

OCI Client Configuration#

OCI GenAI models require a client configuration containing all the settings needed to authenticate with OCI services. The OciClientConfig component holds these settings.

Parameters

service_endpoint: str#

The endpoint URL for the OCI GenAI service. Make sure the region is set correctly: the region in which your private key was created must match the region mentioned in the service_endpoint.

auth_type: str#

The authentication type to use, e.g., API_KEY, SECURITY_TOKEN, INSTANCE_PRINCIPAL (which requires executing the code from a compartment enabled for OCI GenAI), or RESOURCE_PRINCIPAL.

Depending on the authentication type, a different specialization of OciClientConfig is used: the OciClientConfig component itself is abstract and should not be used directly. The following sections show the available client extensions and their specific parameters.

Examples

from pyagentspec.llms import OciGenAiConfig
from pyagentspec.llms import LlmGenerationConfig
from pyagentspec.llms.ociclientconfig import OciClientConfigWithApiKey

# Get the list of available models from:
# https://docs.oracle.com/en-us/iaas/Content/generative-ai/deprecating.htm#
# under the "Model Retirement Dates (On-Demand Mode)" section.
OCIGENAI_MODEL_ID = "xai.grok-3"
# Typical service endpoint for OCI GenAI service inference
# <oci region> can be "us-chicago-1" and can also be found in your ~/.oci/config file
OCIGENAI_ENDPOINT = "https://inference.generativeai.<oci region>.oci.oraclecloud.com"
# <compartment_id> can be obtained from your personal OCI account (not the key config file).
# Please find it under "Identity > Compartments" on the OCI console website after logging in to your user account.
COMPARTMENT_ID = "ocid1.compartment.oc1..<compartment_id>"

generation_config = LlmGenerationConfig(max_tokens=256, temperature=0.8)

llm = OciGenAiConfig(
    name="oci-genai-grok3",
    model_id=OCIGENAI_MODEL_ID,
    compartment_id=COMPARTMENT_ID,
    client_config=OciClientConfigWithApiKey(
        name="client_config",
        service_endpoint=OCIGENAI_ENDPOINT,
        auth_file_location="~/.oci/config",
        auth_profile="DEFAULT",
    ),
    default_generation_parameters=generation_config,
)

OciClientConfigWithSecurityToken#

Client configuration to use for authentication through a security token.

Parameters

auth_file_location: str#

The location of the authentication file from which the authentication information should be retrieved. The default location is ~/.oci/config.

auth_profile: str#

The name of the profile to use, among the ones defined in the authentication file. The default profile name is DEFAULT.
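
As an illustration, a minimal sketch of a security-token client configuration; it assumes the class shares the import module and constructor pattern of the API-key example above (all values are placeholders):

from pyagentspec.llms.ociclientconfig import OciClientConfigWithSecurityToken

client_config = OciClientConfigWithSecurityToken(
    name="client_config",
    service_endpoint="https://inference.generativeai.<oci region>.oci.oraclecloud.com",
    auth_file_location="~/.oci/config",  # default location
    auth_profile="DEFAULT",  # default profile name
)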

OciClientConfigWithApiKey#

Client configuration to use for authentication with an API key. The required parameters are the same as those defined for OciClientConfigWithSecurityToken.

OciClientConfigWithInstancePrincipal#

Client configuration to use for instance principal authentication. No additional parameters are required.

OciClientConfigWithResourcePrincipal#

Client configuration to use for resource principal authentication. No additional parameters are required.
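
As a sketch, both principal-based client configurations need only the base parameters; the import module is assumed to mirror the other client classes:

from pyagentspec.llms.ociclientconfig import OciClientConfigWithInstancePrincipal
from pyagentspec.llms.ociclientconfig import OciClientConfigWithResourcePrincipal

# Instance principal: the code must run from a compartment enabled for OCI GenAI.
instance_client = OciClientConfigWithInstancePrincipal(
    name="client_config",
    service_endpoint="https://inference.generativeai.<oci region>.oci.oraclecloud.com",
)

# Resource principal: the code must run inside an OCI resource with access to the service.
resource_client = OciClientConfigWithResourcePrincipal(
    name="client_config",
    service_endpoint="https://inference.generativeai.<oci region>.oci.oraclecloud.com",
)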

OpenAiConfig#

OpenAI models are served by OpenAI. You can refer to one of these models by using the OpenAiConfig component.

Parameters

model_id: str#

Name of the model to use.

default_generation_parameters: dict, null#

Default parameters for text generation with this model.

Important

Ensure that the OPENAI_API_KEY environment variable is set beforehand to access this model. A list of available OpenAI models can be found at the following link: OpenAI Models.
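
For example, a minimal sketch that fails fast when the key is missing (OPENAI_API_KEY is the standard OpenAI environment variable):

import os

# Fail early if the OpenAI API key is not available in the environment.
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")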

Examples

from pyagentspec.llms import LlmGenerationConfig
from pyagentspec.llms import OpenAiConfig

generation_config = LlmGenerationConfig(max_tokens=256, temperature=0.7, top_p=0.9)

llm = OpenAiConfig(
    name="openai-gpt-5",
    model_id="gpt-5",
    default_generation_parameters=generation_config,
)

VllmConfig#

vLLM models are models hosted on a vLLM server. VllmConfig allows users to use this type of model in their agents and flows.

Parameters

model_id: str#

Name of the model to use.

url: str#

Hostname and port of the vLLM server where the model is hosted.

default_generation_parameters: dict, null#

Default parameters for text generation with this model.

Examples

from pyagentspec.llms import LlmGenerationConfig
from pyagentspec.llms import VllmConfig

generation_config = LlmGenerationConfig(max_tokens=512, temperature=1.0, top_p=1.0)

llm = VllmConfig(
    name="vllm-llama-4-maverick",
    model_id="llama-4-maverick",
    url="http://url.to.my.vllm.server/llama4mav",
    default_generation_parameters=generation_config,
)

OllamaConfig#

Ollama models are served by a locally hosted Ollama server. OllamaConfig allows users to use this type of model in their agents and flows.

Parameters

model_id: str#

Name of the model to use.

url: str#

Hostname and port of the Ollama server where the model is hosted.

default_generation_parameters: dict, null#

Default parameters for text generation with this model.

Examples

from pyagentspec.llms import LlmGenerationConfig
from pyagentspec.llms import OllamaConfig

generation_config = LlmGenerationConfig(max_tokens=512, temperature=0.9, top_p=0.9)

llm = OllamaConfig(
    name="ollama-llama-4",
    model_id="llama-4-maverick",
    url="http://url.to.my.ollama.server/llama4mav",
    default_generation_parameters=generation_config,
)

Recap#

This guide provides detailed descriptions of each model type supported by Agent Spec, demonstrating how to declare them using PyAgentSpec syntax.

Below is the complete code from this guide.
from pyagentspec.llms import OciGenAiConfig
from pyagentspec.llms import LlmGenerationConfig
from pyagentspec.llms.ociclientconfig import OciClientConfigWithApiKey

# Get the list of available models from:
# https://docs.oracle.com/en-us/iaas/Content/generative-ai/deprecating.htm#
# under the "Model Retirement Dates (On-Demand Mode)" section.
OCIGENAI_MODEL_ID = "xai.grok-3"
# Typical service endpoint for OCI GenAI service inference
# <oci region> can be "us-chicago-1" and can also be found in your ~/.oci/config file
OCIGENAI_ENDPOINT = "https://inference.generativeai.<oci region>.oci.oraclecloud.com"
# <compartment_id> can be obtained from your personal OCI account (not the key config file).
# Please find it under "Identity > Compartments" on the OCI console website after logging in to your user account.
COMPARTMENT_ID = "ocid1.compartment.oc1..<compartment_id>"

generation_config = LlmGenerationConfig(max_tokens=256, temperature=0.8)

llm = OciGenAiConfig(
    name="oci-genai-grok3",
    model_id=OCIGENAI_MODEL_ID,
    compartment_id=COMPARTMENT_ID,
    client_config=OciClientConfigWithApiKey(
        name="client_config",
        service_endpoint=OCIGENAI_ENDPOINT,
        auth_file_location="~/.oci/config",
        auth_profile="DEFAULT",
    ),
    default_generation_parameters=generation_config,
)

from pyagentspec.llms import VllmConfig

generation_config = LlmGenerationConfig(max_tokens=512, temperature=1.0, top_p=1.0)

llm = VllmConfig(
    name="vllm-llama-4-maverick",
    model_id="llama-4-maverick",
    url="http://url.to.my.vllm.server/llama4mav",
    default_generation_parameters=generation_config,
)

from pyagentspec.llms import OpenAiConfig

generation_config = LlmGenerationConfig(max_tokens=256, temperature=0.7, top_p=0.9)

llm = OpenAiConfig(
    name="openai-gpt-5",
    model_id="gpt-5",
    default_generation_parameters=generation_config,
)

from pyagentspec.llms import OllamaConfig

generation_config = LlmGenerationConfig(max_tokens=512, temperature=0.9, top_p=0.9)

llm = OllamaConfig(
    name="ollama-llama-4",
    model_id="llama-4-maverick",
    url="http://url.to.my.ollama.server/llama4mav",
    default_generation_parameters=generation_config,
)

Next steps#

Having learned how to configure LLMs from different providers, you may now proceed to: