guardrails_ai.sdk.chat_completions_api
======================================

.. py:module:: guardrails_ai.sdk.chat_completions_api


Classes
-------

.. autoapisummary::

   guardrails_ai.sdk.chat_completions_api.ChatApi
   guardrails_ai.sdk.chat_completions_api.CompletionsApi
   guardrails_ai.sdk.chat_completions_api.GuardedChatCompletion
   guardrails_ai.sdk.chat_completions_api.GuardedChatCompletionChunk


Module Contents
---------------

.. py:class:: ChatApi(*, http_client: httpx.AsyncClient, headers: dict[str, str], max_retries: int)

   Bases: :py:obj:`guardrails_ai.sdk.abstract_client.Client`

   Namespaced chat API, mirroring the OpenAI ``chat`` namespace.

   Accessed via ``client.guards.chat``.


   .. py:attribute:: completions
      :type: CompletionsApi


   .. py:attribute:: headers
      :type: dict[str, str]


   .. py:attribute:: http_client
      :type: httpx.AsyncClient


   .. py:attribute:: max_retries
      :type: int


.. py:class:: CompletionsApi(*, http_client: httpx.AsyncClient, headers: dict[str, str], max_retries: int)

   Bases: :py:obj:`guardrails_ai.sdk.abstract_client.Client`

   Guarded chat completions, mirroring the OpenAI completions API.

   Accessed via ``client.guards.chat.completions``.


   .. py:method:: create(guard_id: str, **kwargs: Unpack[openai.types.chat.completion_create_params.CompletionCreateParamsStreaming]) -> openai.AsyncStream[GuardedChatCompletionChunk]
                  create(guard_id: str, **kwargs: Unpack[openai.types.chat.completion_create_params.CompletionCreateParamsNonStreaming]) -> GuardedChatCompletion
      :async:

      Create a guarded chat completion.

      Proxies the request through the Guardrails API so that the response is
      validated by the named Guard before being returned to the caller.

      :param guard_id: The unique id of the Guard to apply to the completion.
      :param stream: If ``True``, returns an async stream of
          ``GuardedChatCompletionChunk`` objects. Defaults to ``False``.
      :param \*\*kwargs: Additional keyword arguments forwarded to the OpenAI
          ``chat.completions.create`` call (e.g. ``model``, ``messages``).
      :returns: A ``GuardedChatCompletion`` when ``stream=False``, or an
          ``AsyncStream[GuardedChatCompletionChunk]`` when ``stream=True``.

      Example::

          # Non-streaming
          response = await client.guards.chat.completions.create(
              guard_id="xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx",
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Hello!"}],
          )

          # Streaming
          async for chunk in await client.guards.chat.completions.create(
              guard_id="xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx",
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Hello!"}],
              stream=True,
          ):
              print(chunk)


   .. py:attribute:: headers
      :type: dict[str, str]


   .. py:attribute:: http_client
      :type: httpx.AsyncClient


   .. py:attribute:: max_retries
      :type: int


.. py:class:: GuardedChatCompletion

   Bases: :py:obj:`openai.types.chat.ChatCompletion`


   .. py:attribute:: guardrails
      :type: Optional[guardrails_ai.types.ValidationOutcome]


.. py:class:: GuardedChatCompletionChunk

   Bases: :py:obj:`openai.types.chat.ChatCompletionChunk`


   .. py:attribute:: guardrails
      :type: Optional[guardrails_ai.types.ValidationOutcome]
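Since ``GuardedChatCompletionChunk`` subclasses OpenAI's ``ChatCompletionChunk``, streamed responses carry the usual delta layout, and the standard accumulation pattern applies when reassembling the full message text. The sketch below is illustrative only: the plain dicts stand in for chunk objects, and the ``accumulate_content`` helper is not part of this SDK.

```python
# Illustrative sketch: join the content deltas of a chunk stream into
# the complete assistant message. Plain dicts stand in here for
# GuardedChatCompletionChunk objects, which expose the same delta layout.
def accumulate_content(chunks):
    """Concatenate every non-empty ``delta.content`` in order."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        content = delta.get("content")
        if content:
            parts.append(content)
    return "".join(parts)

# Stand-in chunks mimicking a streamed completion: the first chunk
# carries only the role, the last carries an empty delta.
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
    {"choices": [{"delta": {}}]},
]
print(accumulate_content(chunks))  # Hello!
```

With the real client, the same loop body would run inside ``async for chunk in await client.guards.chat.completions.create(..., stream=True)``, reading the delta from each typed chunk instead of a dict.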