guardrails_ai.sdk.chat_completions_api¶
Classes¶
| ChatApi | Namespaced chat API, mirroring the OpenAI chat namespace. |
| CompletionsApi | Guarded chat completions, mirroring the OpenAI completions API. |
Module Contents¶
- class guardrails_ai.sdk.chat_completions_api.ChatApi(*, http_client: httpx.AsyncClient, headers: dict[str, str], max_retries: int)¶
Bases: guardrails_ai.sdk.abstract_client.Client
Namespaced chat API, mirroring the OpenAI chat namespace.
Accessed via client.guards.chat.
- completions: CompletionsApi¶
- http_client: httpx.AsyncClient¶
- class guardrails_ai.sdk.chat_completions_api.CompletionsApi(*, http_client: httpx.AsyncClient, headers: dict[str, str], max_retries: int)¶
Bases: guardrails_ai.sdk.abstract_client.Client
Guarded chat completions, mirroring the OpenAI completions API.
Accessed via client.guards.chat.completions.
- async create(guard_id: str, **kwargs: Unpack[openai.types.chat.completion_create_params.CompletionCreateParamsStreaming]) → openai.AsyncStream[GuardedChatCompletionChunk]¶
- async create(guard_id: str, **kwargs: Unpack[openai.types.chat.completion_create_params.CompletionCreateParamsNonStreaming]) → GuardedChatCompletion
Create a guarded chat completion.
Proxies the request through the Guardrails API so that the response is validated by the named Guard before being returned to the caller.
- Parameters:
  - guard_id – The unique id of the Guard to apply to the completion.
  - stream – If True, returns an async stream of GuardedChatCompletionChunk objects. Defaults to False.
  - **kwargs – Additional keyword arguments forwarded to the OpenAI chat.completions.create call (e.g. model, messages).
- Returns:
A GuardedChatCompletion when stream=False, or an AsyncStream[GuardedChatCompletionChunk] when stream=True.
Example:
# Non-streaming
response = await client.guards.chat.completions.create(
    guard_id="xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Streaming
async for chunk in await client.guards.chat.completions.create(
    guard_id="xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
):
    print(chunk)
- http_client: httpx.AsyncClient¶
- class guardrails_ai.sdk.chat_completions_api.GuardedChatCompletion¶
Bases: openai.types.chat.ChatCompletion
- guardrails: guardrails_ai.types.ValidationOutcome | None¶
- class guardrails_ai.sdk.chat_completions_api.GuardedChatCompletionChunk¶
Bases: openai.types.chat.ChatCompletionChunk
- guardrails: guardrails_ai.types.ValidationOutcome | None¶