guardrails_ai.sdk.chat_completions_api

Classes

ChatApi

Namespaced chat API, mirroring the OpenAI chat namespace.

CompletionsApi

Guarded chat completions, mirroring the OpenAI chat completions API.

GuardedChatCompletion

GuardedChatCompletionChunk

Module Contents

class guardrails_ai.sdk.chat_completions_api.ChatApi(*, http_client: httpx.AsyncClient, headers: dict[str, str], max_retries: int)

Bases: guardrails_ai.sdk.abstract_client.Client

Namespaced chat API, mirroring the OpenAI chat namespace.

Accessed via client.guards.chat.

completions: CompletionsApi
headers: dict[str, str]
http_client: httpx.AsyncClient
max_retries: int
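The namespacing above means client.guards.chat.completions resolves to a CompletionsApi that shares the parent's transport and configuration. A minimal sketch of that composition pattern, using placeholder stand-ins (not the real SDK classes, and a plain object in place of httpx.AsyncClient):

```python
# Illustrative stand-ins for the composition pattern only; the real SDK
# classes take an httpx.AsyncClient and subclass an abstract Client.
class CompletionsApi:
    def __init__(self, *, http_client, headers, max_retries):
        self.http_client = http_client
        self.headers = headers
        self.max_retries = max_retries


class ChatApi:
    def __init__(self, *, http_client, headers, max_retries):
        self.http_client = http_client
        self.headers = headers
        self.max_retries = max_retries
        # The nested namespace reuses the same transport and config,
        # so chat.completions shares one connection pool and header set.
        self.completions = CompletionsApi(
            http_client=http_client, headers=headers, max_retries=max_retries
        )


chat = ChatApi(
    http_client=object(),  # placeholder for httpx.AsyncClient
    headers={"Authorization": "Bearer ..."},
    max_retries=3,
)
```

Constructing ChatApi once wires up the whole chat namespace; the shared keyword-only constructor signature is what lets the parent forward its own arguments unchanged.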
class guardrails_ai.sdk.chat_completions_api.CompletionsApi(*, http_client: httpx.AsyncClient, headers: dict[str, str], max_retries: int)

Bases: guardrails_ai.sdk.abstract_client.Client

Guarded chat completions, mirroring the OpenAI chat completions API.

Accessed via client.guards.chat.completions.

async create(guard_id: str, **kwargs: Unpack[openai.types.chat.completion_create_params.CompletionCreateParamsStreaming]) → openai.AsyncStream[GuardedChatCompletionChunk]
async create(guard_id: str, **kwargs: Unpack[openai.types.chat.completion_create_params.CompletionCreateParamsNonStreaming]) → GuardedChatCompletion

Create a guarded chat completion.

Proxies the request through the Guardrails API so that the response is validated by the named Guard before being returned to the caller.

Parameters:
  • guard_id – The unique id of the Guard to apply to the completion.

  • stream – If True, returns an async stream of GuardedChatCompletionChunk objects. Defaults to False.

  • **kwargs – Additional keyword arguments forwarded to the OpenAI chat.completions.create call (e.g. model, messages).

Returns:

A GuardedChatCompletion when stream=False, or an AsyncStream[GuardedChatCompletionChunk] when stream=True.

Example:

# Non-streaming
response = await client.guards.chat.completions.create(
    guard_id="xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
print(response.guardrails)  # ValidationOutcome from the applied Guard, or None

# Streaming: awaiting create() yields an AsyncStream to iterate over
async for chunk in await client.guards.chat.completions.create(
    guard_id="xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
):
    print(chunk)

headers: dict[str, str]
http_client: httpx.AsyncClient
max_retries: int
class guardrails_ai.sdk.chat_completions_api.GuardedChatCompletion

Bases: openai.types.chat.ChatCompletion

guardrails: guardrails_ai.types.ValidationOutcome | None
class guardrails_ai.sdk.chat_completions_api.GuardedChatCompletionChunk

Bases: openai.types.chat.ChatCompletionChunk

guardrails: guardrails_ai.types.ValidationOutcome | None
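Both guarded response types extend the corresponding OpenAI model with a single optional guardrails field. A minimal sketch of that pattern, using dataclass stand-ins in place of the real pydantic models (openai.types.chat.ChatCompletion and guardrails_ai.types.ValidationOutcome are assumptions here, represented by a plain class and a dict):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatCompletion:
    # Stand-in for openai.types.chat.ChatCompletion (pydantic in reality)
    id: str
    model: str


@dataclass
class GuardedChatCompletion(ChatCompletion):
    # Stand-in for the `guardrails: ValidationOutcome | None` field;
    # None means no validation outcome was attached to this response.
    guardrails: Optional[dict] = None


resp = GuardedChatCompletion(
    id="chatcmpl-1",
    model="gpt-4o-mini",
    guardrails={"validation_passed": True},
)
if resp.guardrails is not None:
    print(resp.guardrails["validation_passed"])
```

Because the subclass only adds one optional field, a GuardedChatCompletion can be used anywhere a ChatCompletion is expected; callers that care about validation check guardrails for None before reading it.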