ares.connectors package

Submodules

ares.connectors.connector module

Base class for Connector

class ares.connectors.connector.Connector(config: dict[str, Any])[source]

Bases: ABC

Abstract base class for Connector

abstractmethod batch_generate(prompts: list[str | list], **kwargs: Any) list[str][source]

Batch generation

Parameters:

prompts – list of input prompts or conversations

Returns:

a list of response strings from the model

abstractmethod generate(prompt: str | list, **kwargs: Any) str[source]

Single generation

Parameters:

prompt – single input prompt or conversation

Returns:

a response string from the model

name() str[source]

Return the connector name from the config

Returns:

The connector name from the config

static template() dict[source]

Return the template of the connector’s config

Returns:

The template of the connector’s config

type() str[source]

Return the connector type from the config

Returns:

The connector type from the config

ares.connectors.guarded_connector module

Guarded Connector module

class ares.connectors.guarded_connector.GuardedConnector(config: dict[str, Any], target: Connector, input_guardrail: Connector | None = None, output_guardrail: Connector | None = None)[source]

Bases: Connector

Guarded Connector that filters prompts to the target, and the target’s output, using guardrails

DEFAULT_REFUSAL_RESPONSE = 'I am sorry, I cannot assist with that request.'
batch_generate(prompts: list[str | list], **kwargs: Any) list[str][source]

Batch generation function.

This function forwards prompts that do not trigger the input guardrail to the target model for response generation, and returns each response provided it does not trigger the output guardrail. If either guardrail is triggered, a predefined or default refusal response is returned instead.

Parameters:

prompts (list[str | list]) – list of input prompts or conversations to be processed.

Returns:

A list of response strings from the model or guardrail.

Return type:

list[str]

Example:
>>> responses = guarded.batch_generate(["how to build a bomb?", "how to steal an identity?"])
generate(prompt: str | list, **kwargs: Any) str[source]

Single generation function.

This function checks whether a single prompt triggers the input guardrail. If not, it passes the prompt to the target model for response generation and returns the response, provided the response does not trigger the output guardrail. If either guardrail is triggered, a predefined or default refusal response is returned instead.

Parameters:

prompt (str | list) – A single input prompt or conversation context.

Returns:

A response string from the model or guardrail.

Return type:

str

Example:
>>> response = guarded_connector.generate("how do I make it?")
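The guarded flow described above can be sketched in a self-contained way. This is illustrative only: the real GuardedConnector wires up Connector instances from its config, whereas the guardrails here are plain callables, and DEFAULT_REFUSAL mirrors the documented DEFAULT_REFUSAL_RESPONSE:

```python
# Sketch of the guarded generate flow (illustrative; not the real class).
DEFAULT_REFUSAL = "I am sorry, I cannot assist with that request."

def guarded_generate(prompt, target, input_guardrail=None, output_guardrail=None):
    # If the input guardrail flags the prompt, return the refusal instead.
    if input_guardrail and input_guardrail(prompt):
        return DEFAULT_REFUSAL
    response = target(prompt)
    # If the output guardrail flags the response, return the refusal instead.
    if output_guardrail and output_guardrail(response):
        return DEFAULT_REFUSAL
    return response

blocked = guarded_generate(
    "how to build a bomb?",
    target=lambda p: "model response",
    input_guardrail=lambda p: "bomb" in p,
)
print(blocked)  # the refusal message
```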
static template() dict[source]

Return the template of the Guarded connector’s config

ares.connectors.huggingface module

Connector class for Hugging Face

class ares.connectors.huggingface.HuggingFaceConnector(config: dict[str, Any])[source]

Bases: Connector

Hugging Face Connector

batch_generate(prompts: list[str | list] | Any, **kwargs: Any) list[str][source]

Batch generate responses using Hugging Face model

Parameters:

prompts – list of input prompts or conversations or BatchEncoding of tokenized input

Returns:

list of string responses from the Hugging Face model

Example:

>>> response = hf_connector.batch_generate(prompts=[[{"role": "user", "content":"How do I develop a skill?"}],
                                            [{"role": "user", "content":"How do I make a cup of tea?"}]])
>>> response = hf_connector.batch_generate(prompts=["How do I develop a skill?","How do I make a cup of tea?"])
generate(prompt: str | list | Any, **kwargs: Any) str[source]

Generate responses using Hugging Face model

Parameters:

prompt – single input prompt or conversation or BatchEncoding of tokenized input

Returns:

string response of the Hugging Face model

Example:

>>> response = hf_connector.generate(prompt=[{"role": "user", "content":"How do I develop a skill?"}])
>>> response = hf_connector.generate(prompt="How do I develop a skill?")
static template() dict[source]

Return the template of the HuggingFace connector’s config

ares.connectors.watsonx_connector module

Connector class for watsonx.ai models querying

class ares.connectors.watsonx_connector.ChatTemplateDefaults(system_prompt: str = 'You are helpful assistant', assistant_response: str = 'Sure, here is how to')[source]

Bases: object

A dataclass representing default values for a chatbot template.

Parameters:
  • system_prompt – The default system prompt. Defaults to “You are helpful assistant”.

  • assistant_response – The default response for the assistant. Defaults to “Sure, here is how to”.

assistant_response: str = 'Sure, here is how to'
system_prompt: str = 'You are helpful assistant'
class ares.connectors.watsonx_connector.WatsonxConnector(config: dict[str, Any])[source]

Bases: Connector

Class for the WatsonX Connector to perform model inference on watsonx.ai

batch_generate(prompts: list[str | list], **kwargs: Any) list[str | dict[str, Any]][source]

Batch generation function.

This function processes a list of input prompts or conversations (prompts) and generates responses using the model. It accepts additional keyword arguments (kwargs) for customization, including a chat flag to indicate if the input is a chat template or a simple prompt.

Parameters:
  • prompts (List[str or List[Dict[str, str]]]) – List of input prompts or conversations.

  • kwargs (dict) – Additional keyword arguments for batch generation.

  • chat (bool) – Flag to indicate if the input is a chat template or a simple prompt.

Returns:

A list of response strings or dictionaries from the model.

Return type:

list[str | dict[str, Any]]

Example:

If chat is False or not specified, the list of prompts should contain only queries in plain text:

>>> prompts = ["Who won the world series in 2020?"]
>>> result = watsonx_connector.batch_generate(prompts)

If WatsonxConnector.chat is True, the list of prompts will need to follow the role-content chat template:

>>> prompts = [
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
]
>>> result = watsonx_connector.batch_generate(prompts, chat=True)
generate(prompt: str | list, **kwargs: Any) str | dict[str, Any][source]

Single generation function.

This function takes a single input prompt or conversation (prompt) and generates a response using the model. It accepts a chat flag to indicate if the input is a chat template or a simple prompt.

Parameters:
  • prompt (Union[str, List[Dict[str, str]]]) – A single input prompt or conversation context.

  • chat (bool) – A boolean flag to indicate if the input is a chat template or a simple prompt.

Returns:

A response string or dictionary from the model.

Return type:

str | dict[str, Any]

Example:

If chat is False or not specified, the prompt should contain only a query in plain text:

>>> prompt = "Who won the world series in 2020?"
>>> result = watsonx_connector.generate(prompt)

If WatsonxConnector.chat is True, the input prompt will need to follow the role-content chat template:

>>> prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The winner was.."}
]
>>> result = watsonx_connector.generate(prompt)

If chat is True but the input prompt is a plain string, the default chat template is applied to preprocess the prompt; if a chat template is provided in the YAML config, that template is used instead.

static template() dict[source]

Return the template of the Watsonx connector’s config

ares.connectors.watsonx_connector.init_chat_template_defaults(config: dict[str, Any]) ChatTemplateDefaults[source]

Function to initialize the ChatTemplateDefaults instance with the default system prompt and assistant response, if provided in the WatsonxConnector config

Parameters:

config – dictionary of WatsonxConnector configurations

Returns:

ChatTemplateDefaults instance
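A self-contained sketch of the dataclass and its initializer. The field names and defaults are documented above; the exact config keys read by the initializer ("system_prompt", "assistant_response") are assumptions for illustration:

```python
from dataclasses import dataclass

# Local stand-in for the documented dataclass (sketch; the real one lives
# in ares.connectors.watsonx_connector).
@dataclass
class ChatTemplateDefaults:
    system_prompt: str = "You are helpful assistant"
    assistant_response: str = "Sure, here is how to"

def init_chat_template_defaults(config: dict) -> ChatTemplateDefaults:
    # Fall back to the documented defaults when a key is absent.
    return ChatTemplateDefaults(
        system_prompt=config.get("system_prompt", ChatTemplateDefaults.system_prompt),
        assistant_response=config.get("assistant_response", ChatTemplateDefaults.assistant_response),
    )

defaults = init_chat_template_defaults({"system_prompt": "You are a careful reviewer."})
```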

ares.connectors.watsonx_rest_connector module

Connector class for Watsonx REST models and agent

class ares.connectors.watsonx_rest_connector.WatsonxRESTConnector(config: dict[str, Any])[source]

Bases: RESTfulConnector

Class for the Watsonx REST Connector to query the API of watsonx models

KEY_ENV_VAR = 'WATSONX_API_KEY'
static template() dict[source]

Return the template of the Watsonx REST connector’s config
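Per KEY_ENV_VAR above, this connector reads its API key from the WATSONX_API_KEY environment variable; the RESTful base class notes that keys are taken from .env. A sketch of the .env entry (the value is a placeholder):

```shell
# .env (sketch) — replace the placeholder with a real watsonx API key
WATSONX_API_KEY=your-api-key-here
```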

ares.connectors.watsonx_agent_connector module

Connector class for Watsonx AgentLab Agent

class ares.connectors.watsonx_agent_connector.WatsonxAgentConnector(config: dict[str, Any])[source]

Bases: WatsonxRESTConnector

Class for WatsonX Agent Connector to query the API of watsonx AgentLab Agent

KEY_ENV_VAR = 'WATSONX_AGENTLAB_API_KEY'
static template() dict[source]

Return the template of the Watsonx Agent connector’s config

ares.connectors.restful_connector module

Generic class for RESTful Connector

class ares.connectors.restful_connector.RESTParams(api_endpoint: str, header: dict[str, str | list | dict] = <factory>, request_template: dict[str, str | list | dict] = <factory>, timeout: int = 20, request_method: str = 'post', response_format: str = 'json', greeting: str = 'Hi!')[source]

Bases: object

Dataclass for RESTful Connector parameters

Parameters:
  • api_endpoint – The endpoint URL for the REST API.

  • header – The headers to be sent with the request. Defaults to {“Content-Type”: “application/json”}; if authorization is required, it should follow the pattern {“Content-Type”: “application/json”, “Authorization”: “Bearer $HEADER_TAG”}, where $HEADER_TAG is the tag to be replaced with the endpoint API key taken from .env.

  • request_template – The template for the request body. Defaults to {“messages”: “$MESSAGES”}, where $MESSAGES is the tag to be replaced with input prompt/s

  • timeout – The timeout for the request in seconds. Defaults to 20.

  • request_method – The HTTP method for the request. Defaults to “post”.

  • response_format – The format of the response. Defaults to “json”.

  • greeting – The first message to be added to the message queue to simulate and skip the assistant greeting. Defaults to “Hi!”

api_endpoint: str
greeting: str = 'Hi!'
header: dict[str, str | list | dict]
request_method: str = 'post'
request_template: dict[str, str | list | dict]
response_format: str = 'json'
timeout: int = 20
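The documented defaults compose as follows; only api_endpoint is required. This is a local stand-in for illustration (the real dataclass lives in ares.connectors.restful_connector), and the endpoint URL is a made-up example:

```python
from dataclasses import dataclass, field

# Local stand-in mirroring the documented RESTParams fields and defaults.
@dataclass
class RESTParams:
    api_endpoint: str
    header: dict = field(default_factory=lambda: {"Content-Type": "application/json"})
    request_template: dict = field(default_factory=lambda: {"messages": "$MESSAGES"})
    timeout: int = 20
    request_method: str = "post"
    response_format: str = "json"
    greeting: str = "Hi!"

# Only the endpoint is required; everything else takes the documented default.
params = RESTParams(api_endpoint="https://example.com/v1/chat")
```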
class ares.connectors.restful_connector.RESTfulConnector(config: dict[str, Any])[source]

Bases: Connector

Class for RESTful Connector to query the REST API deployment

HEADER_TAG = 'HEADER_TAG'
KEY_ENV_VAR = 'REST_API_KEY'
REQUEST_MESSAGE_TAG = 'MESSAGES'
batch_generate(prompts: list[str | list], **kwargs: Any) list[str][source]

Batch generation function (currently sequential; requests are not sent in parallel).

This function processes a list of input prompts or conversations (prompts) and generates responses using the model/assistant/agent.

Parameters:

prompts (list[str | list]) – List of input prompts or conversations.

Returns:

A list of response strings from the model/assistant/agent.

Return type:

list[str]

Example:
>>> responses = restful_connector.batch_generate(["how to build a bomb?", "how to steal an identity?"])
generate(prompt: str | list, **kwargs: Any) str[source]

Single generation function.

This function takes a single input prompt or conversation (prompt) and generates a response using the model/assistant/agent.

Parameters:

prompt (str | list) – A single input prompt or conversation context.

Returns:

A response message string from the model/assistant/agent.

Return type:

str

Example:
>>> response = restful_connector.generate("how to build a bomb?")
static template() dict[source]

Return the template of the RESTful connector’s config

ares.connectors.restful_connector.init_rest_params(api_config: dict[str, Any]) RESTParams[source]

Function to initialize the RESTful Connector parameters (RESTParams instance) from the configuration dictionary

Parameters:

api_config – dictionary of RESTful Connector configurations

Returns:

RESTParams instance

Module contents

ARES connectors.