adbc_poolhouse

Connection pooling for ADBC drivers from typed warehouse configs.

BaseWarehouseConfig

Bases: BaseSettings, ABC

Base class for all warehouse config models.

Provides pool tuning fields with library defaults. Not intended to be instantiated directly — use a concrete subclass (e.g. DuckDBConfig).

Pool tuning fields are inherited by all concrete configs, and each concrete config's env_prefix applies to these fields automatically. For example, DUCKDB_POOL_SIZE populates DuckDBConfig.pool_size.
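The naming convention can be sketched without the library. This is an illustrative stand-in for what pydantic-settings does inside BaseWarehouseConfig, not the real loader; the `load_pool_fields` helper is hypothetical.

```python
# Hypothetical sketch of the env_prefix convention: each concrete config's
# prefix is joined to the upper-cased field name, with library defaults as
# fallback. The real resolution is done by pydantic-settings.
POOL_DEFAULTS = {"pool_size": 5, "max_overflow": 3, "timeout": 30, "recycle": 3600}

def load_pool_fields(prefix: str, environ: dict[str, str]) -> dict[str, int]:
    """Resolve pool tuning fields from PREFIX_FIELDNAME variables."""
    return {
        field: int(environ.get(f"{prefix}{field.upper()}", default))
        for field, default in POOL_DEFAULTS.items()
    }

print(load_pool_fields("DUCKDB_", {"DUCKDB_POOL_SIZE": "8"}))
# {'pool_size': 8, 'max_overflow': 3, 'timeout': 30, 'recycle': 3600}
```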

pool_size class-attribute instance-attribute

pool_size: int = 5

Number of connections to keep open in the pool. Default: 5.

max_overflow class-attribute instance-attribute

max_overflow: int = 3

Connections allowed above pool_size when pool is exhausted. Default: 3.

timeout class-attribute instance-attribute

timeout: int = 30

Seconds to wait for a connection before raising TimeoutError. Default: 30.

recycle class-attribute instance-attribute

recycle: int = 3600

Seconds before a connection is closed and replaced. Default: 3600.

to_adbc_kwargs abstractmethod

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Subclasses must override this method to provide backend-specific serialization.

Source code in src/adbc_poolhouse/_base_config.py
@abstractmethod
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Subclasses must override this method to provide backend-specific
    serialization.
    """
    ...

WarehouseConfig

Bases: Protocol

Structural type for warehouse config objects.

Any class with these attributes and methods can be passed to create_pool or managed_pool. The built-in config classes all satisfy this protocol through BaseWarehouseConfig.

Third-party authors: inherit from BaseWarehouseConfig for pool-tuning defaults and _resolve_driver_path, or implement the full protocol from scratch.
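A from-scratch implementation only needs the four pool fields plus to_adbc_kwargs. The sketch below is hypothetical (the "SQLite" backend and its kwarg key are illustrative, not a shipped config):

```python
from dataclasses import dataclass

# Hypothetical third-party config satisfying the WarehouseConfig protocol
# structurally, without inheriting from BaseWarehouseConfig.
@dataclass
class SQLiteConfig:
    path: str
    pool_size: int = 5
    max_overflow: int = 3
    timeout: int = 30
    recycle: int = 3600

    def to_adbc_kwargs(self) -> dict[str, str]:
        # Only backend-specific connection kwargs belong here;
        # the pool tuning fields are read directly by the pool.
        return {"uri": f"file:{self.path}"}

cfg = SQLiteConfig(path="/tmp/demo.db", pool_size=2)
print(cfg.to_adbc_kwargs())  # {'uri': 'file:/tmp/demo.db'}
```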

pool_size instance-attribute

pool_size: int

Number of connections to keep open in the pool.

max_overflow instance-attribute

max_overflow: int

Connections allowed above pool_size when the pool is exhausted.

timeout instance-attribute

timeout: int

Seconds to wait for a connection before raising TimeoutError.

recycle instance-attribute

recycle: int

Seconds before a connection is closed and replaced.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Source code in src/adbc_poolhouse/_base_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """Convert config to ADBC driver connection kwargs."""
    ...

BigQueryConfig

Bases: BaseWarehouseConfig

BigQuery warehouse configuration.

Supports SDK default auth (ADC), JSON credential file, JSON credential string, and user authentication flows.

Pool tuning fields are inherited and loaded from BIGQUERY_* env vars.

auth_type class-attribute instance-attribute

auth_type: str | None = None

Auth method: 'bigquery' (SDK default/ADC), 'json_credential_file', 'json_credential_string', 'user_authentication'. Env: BIGQUERY_AUTH_TYPE.

auth_credentials class-attribute instance-attribute

auth_credentials: SecretStr | None = None

JSON credentials file path or encoded credential string, depending on auth_type. Env: BIGQUERY_AUTH_CREDENTIALS.

auth_client_id class-attribute instance-attribute

auth_client_id: str | None = None

OAuth client ID for user_authentication flow. Env: BIGQUERY_AUTH_CLIENT_ID.

auth_client_secret class-attribute instance-attribute

auth_client_secret: SecretStr | None = None

OAuth client secret for user_authentication flow. Env: BIGQUERY_AUTH_CLIENT_SECRET.

auth_refresh_token class-attribute instance-attribute

auth_refresh_token: SecretStr | None = None

OAuth refresh token for user_authentication flow. Env: BIGQUERY_AUTH_REFRESH_TOKEN.

project_id class-attribute instance-attribute

project_id: str | None = None

GCP project ID. Env: BIGQUERY_PROJECT_ID.

dataset_id class-attribute instance-attribute

dataset_id: str | None = None

Default dataset. Env: BIGQUERY_DATASET_ID.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

All keys use the adbc.bigquery.sql.* prefix verified from the adbc_driver_bigquery.DatabaseOptions enum. Only non-None fields are included.

Returns:

Type Description
dict[str, str]

Dict of ADBC driver kwargs. Empty when no fields are set.

Source code in src/adbc_poolhouse/_bigquery_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    All keys use the ``adbc.bigquery.sql.*`` prefix verified from the
    ``adbc_driver_bigquery.DatabaseOptions`` enum. Only non-None fields
    are included.

    Returns:
        Dict of ADBC driver kwargs. Empty when no fields are set.
    """
    kwargs: dict[str, str] = {}
    if self.auth_type is not None:
        kwargs["adbc.bigquery.sql.auth_type"] = self.auth_type
    if self.auth_credentials is not None:
        kwargs["adbc.bigquery.sql.auth_credentials"] = self.auth_credentials.get_secret_value()
    if self.auth_client_id is not None:
        kwargs["adbc.bigquery.sql.auth.client_id"] = self.auth_client_id
    if self.auth_client_secret is not None:
        kwargs["adbc.bigquery.sql.auth.client_secret"] = (
            self.auth_client_secret.get_secret_value()
        )
    if self.auth_refresh_token is not None:
        kwargs["adbc.bigquery.sql.auth.refresh_token"] = (
            self.auth_refresh_token.get_secret_value()
        )
    if self.project_id is not None:
        kwargs["adbc.bigquery.sql.project_id"] = self.project_id
    if self.dataset_id is not None:
        kwargs["adbc.bigquery.sql.dataset_id"] = self.dataset_id
    return kwargs

ClickHouseConfig

Bases: BaseWarehouseConfig

ClickHouse warehouse configuration.

Uses the Columnar ADBC ClickHouse driver (Foundry-distributed, not on PyPI). Install via the ADBC Driver Foundry:

dbc install --pre clickhouse

The --pre flag is required — only alpha releases are available (v0.1.0-alpha.1).

Supports two connection modes:

  • URI mode: set uri with the full ClickHouse connection string.
  • Decomposed mode: set host and username together. password, database, and port are optional. port defaults to 8123 (HTTP interface).

At least one mode must be fully specified — construction raises ConfigurationError if neither is provided.

Note: The field name is username, not user. The Columnar ClickHouse driver uses username as the kwarg key. Passing user causes a silent auth failure.

Pool tuning fields are inherited and loaded from CLICKHOUSE_* env vars.

Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.

uri class-attribute instance-attribute

uri: SecretStr | None = None

Full ClickHouse connection URI. May contain credentials — stored as SecretStr. Env: CLICKHOUSE_URI.

host class-attribute instance-attribute

host: str | None = None

ClickHouse hostname. Alternative to embedding host in URI. Env: CLICKHOUSE_HOST.

port class-attribute instance-attribute

port: int = 8123

ClickHouse HTTP interface port. Default: 8123. Env: CLICKHOUSE_PORT.

username class-attribute instance-attribute

username: str | None = None

ClickHouse username. Maps to the username driver kwarg (not user). Env: CLICKHOUSE_USERNAME.

password class-attribute instance-attribute

password: SecretStr | None = None

ClickHouse password. Optional. Env: CLICKHOUSE_PASSWORD.

database class-attribute instance-attribute

database: str | None = None

ClickHouse database name. Optional. Env: CLICKHOUSE_DATABASE.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Supports two modes:

  • URI mode (uri set): returns {uri: ...} with the secret value extracted.
  • Decomposed mode (host + username set): returns individual kwargs with port as a string. password and database are omitted when None.

Returns:

Type Description
dict[str, str]

Dict of ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_clickhouse_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Supports two modes:

    - URI mode (``uri`` set): returns ``{uri: ...}`` with the secret
      value extracted.
    - Decomposed mode (``host`` + ``username`` set): returns individual
      kwargs with ``port`` as a string. ``password`` and ``database``
      are omitted when ``None``.

    Returns:
        Dict of ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    if self.uri is not None:
        return {"uri": self.uri.get_secret_value()}

    # Decomposed mode -- model_validator guarantees host and username are set
    assert self.host is not None
    assert self.username is not None

    result: dict[str, str] = {
        "username": self.username,
        "host": self.host,
        "port": str(self.port),
    }
    if self.password is not None:
        result["password"] = self.password.get_secret_value()  # pragma: allowlist secret
    if self.database is not None:
        result["database"] = self.database
    return result

check_connection_spec

check_connection_spec() -> Self

Raise ConfigurationError if neither uri nor minimum decomposed fields are set.

Source code in src/adbc_poolhouse/_clickhouse_config.py
@model_validator(mode="after")
def check_connection_spec(self) -> Self:
    """Raise ConfigurationError if neither uri nor minimum decomposed fields are set."""
    has_uri = self.uri is not None
    has_decomposed = self.host is not None and self.username is not None
    if not has_uri and not has_decomposed:
        raise ConfigurationError(
            "ClickHouseConfig requires either 'uri' or at minimum "
            "'host' and 'username'. Got none of these."
        )
    return self

DatabricksConfig

Bases: BaseWarehouseConfig

Databricks warehouse configuration.

Uses the Columnar ADBC Databricks driver (Foundry-distributed, not on PyPI). Install via the ADBC Driver Foundry.

Supports PAT (personal access token) and OAuth (U2M and M2M) auth. Supports two connection modes:

  • URI mode: set uri with the full DSN string.
  • Decomposed mode: set host, http_path, and token together.

At least one mode must be fully specified — construction raises ConfigurationError if neither is provided.

Pool tuning fields are inherited and loaded from DATABRICKS_* env vars.

Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.

uri class-attribute instance-attribute

uri: SecretStr | None = None

Full connection URI in the form databricks://token:{token}@{host}:443{http_path}. May contain credentials — stored as SecretStr. Env: DATABRICKS_URI.

host class-attribute instance-attribute

host: str | None = None

Databricks workspace hostname (e.g. 'adb-xxx.azuredatabricks.net'). Alternative to embedding host in URI. Env: DATABRICKS_HOST.

http_path class-attribute instance-attribute

http_path: str | None = None

SQL warehouse HTTP path (e.g. '/sql/1.0/warehouses/abc123'). Env: DATABRICKS_HTTP_PATH.

token class-attribute instance-attribute

token: SecretStr | None = None

Personal access token for PAT auth. Env: DATABRICKS_TOKEN.

auth_type class-attribute instance-attribute

auth_type: str | None = None

OAuth auth type: 'OAuthU2M' (browser-based) or 'OAuthM2M' (service principal). Omit for PAT auth. Env: DATABRICKS_AUTH_TYPE.

client_id class-attribute instance-attribute

client_id: str | None = None

OAuth M2M service principal client ID. Env: DATABRICKS_CLIENT_ID.

client_secret class-attribute instance-attribute

client_secret: SecretStr | None = None

OAuth M2M service principal client secret. Env: DATABRICKS_CLIENT_SECRET.

catalog class-attribute instance-attribute

catalog: str | None = None

Default Unity Catalog. Env: DATABRICKS_CATALOG.

schema_ class-attribute instance-attribute

schema_: str | None = Field(
    default=None, validation_alias="schema", alias="schema"
)

Default schema. Python attribute is schema_ to avoid Pydantic conflicts. Env: DATABRICKS_SCHEMA.

check_connection_spec

check_connection_spec() -> Self

Raise ConfigurationError if neither uri nor all minimum decomposed fields are set.

Source code in src/adbc_poolhouse/_databricks_config.py
@model_validator(mode="after")
def check_connection_spec(self) -> Self:
    """Raise ConfigurationError if neither uri nor all minimum decomposed fields are set."""
    has_uri = self.uri is not None
    has_decomposed = (
        self.host is not None and self.http_path is not None and self.token is not None
    )
    if not has_uri and not has_decomposed:
        raise ConfigurationError(
            "DatabricksConfig requires either 'uri' or all three of "
            "'host', 'http_path', and 'token'. Got none of these."
        )
    return self

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert Databricks config fields to ADBC driver kwargs.

Supports two modes:

  • URI mode (uri set): extracts SecretStr value and returns {"uri": ...}.
  • Decomposed mode: builds databricks://token:{encoded}@{host}:443{http_path} from host, http_path, and token. Token is URL-encoded via urllib.parse.quote with safe="".

Returns:

Type Description
dict[str, str]

ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_databricks_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert Databricks config fields to ADBC driver kwargs.

    Supports two modes:

    - **URI mode** (``uri`` set): extracts ``SecretStr`` value and returns
      ``{"uri": ...}``.
    - **Decomposed mode**: builds ``databricks://token:{encoded}@{host}:443{http_path}``
      from ``host``, ``http_path``, and ``token``. Token is URL-encoded via
      `urllib.parse.quote` with ``safe=""``.

    Returns:
        ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    if self.uri is not None:
        return {"uri": self.uri.get_secret_value()}

    # Decomposed mode -- model_validator guarantees all three are set.
    assert self.host is not None
    assert self.http_path is not None
    assert self.token is not None

    encoded_token = quote(self.token.get_secret_value(), safe="")
    uri = f"databricks://token:{encoded_token}@{self.host}:443{self.http_path}"
    return {"uri": uri}
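The URL-encoding step matters when a token contains URI-reserved characters. A standalone sketch of the construction above (host, path, and token values are placeholders):

```python
from urllib.parse import quote

# Sketch of decomposed-mode URI construction, mirroring the source above.
def databricks_uri(host: str, http_path: str, token: str) -> str:
    # quote(..., safe="") also encodes "/", so the token cannot
    # break the URI structure.
    return f"databricks://token:{quote(token, safe='')}@{host}:443{http_path}"

uri = databricks_uri("adb-123.azuredatabricks.net", "/sql/1.0/warehouses/abc", "dapi/+x")
print(uri)
# databricks://token:dapi%2F%2Bx@adb-123.azuredatabricks.net:443/sql/1.0/warehouses/abc
```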

DuckDBConfig

Bases: BaseWarehouseConfig

DuckDB warehouse configuration.

Covers all DuckDB ADBC connection parameters. Pool tuning fields (pool_size, max_overflow, timeout, recycle) are inherited from BaseWarehouseConfig and loaded from DUCKDB_* environment variables.

Example
DuckDBConfig(database="/data/warehouse.db", pool_size=5)
DuckDBConfig()  # in-memory, pool_size=1 enforced

database class-attribute instance-attribute

database: str = ':memory:'

File path or ':memory:'. Env: DUCKDB_DATABASE.

pool_size class-attribute instance-attribute

pool_size: int = 1

Number of connections in the pool. Default 1 for in-memory DuckDB.

In-memory DuckDB databases are isolated per connection — each pool connection gets a different empty DB. Use pool_size=1 for ':memory:', or set database to a file path if you need pool_size > 1. Setting pool_size > 1 with database=':memory:' raises ValidationError. Env: DUCKDB_POOL_SIZE.
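The constraint itself is easy to state in isolation. This is a standalone stand-in for the documented validator, not the library's code (the real check raises via pydantic's ValidationError):

```python
# Sketch of the documented constraint: in-memory DuckDB is isolated per
# connection, so a shared pool of size > 1 would hand out different
# empty databases and must be rejected.
def check_duckdb_pool(database: str, pool_size: int) -> None:
    if database == ":memory:" and pool_size > 1:
        raise ValueError(
            "In-memory DuckDB is isolated per connection; "
            "use pool_size=1 or a file path."
        )

check_duckdb_pool("/data/warehouse.db", 5)  # ok
try:
    check_duckdb_pool(":memory:", 2)
except ValueError as exc:
    print(exc)
```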

read_only class-attribute instance-attribute

read_only: bool = False

Open the database in read-only mode. Env: DUCKDB_READ_ONLY.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Returns:

Type Description
dict[str, str]

Dict with 'path' key (always) and 'access_mode' set to 'READ_ONLY' when read_only is True.

Source code in src/adbc_poolhouse/_duckdb_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Returns:
        Dict with ``'path'`` key (always) and ``'access_mode'`` set to
        ``'READ_ONLY'`` when ``read_only`` is True.
    """
    result: dict[str, str] = {"path": self.database}
    if self.read_only:
        result["access_mode"] = "READ_ONLY"
    return result

ConfigurationError

Bases: PoolhouseError, ValueError

Raised when a config model contains invalid field values.

Inherits from both PoolhouseError (library hierarchy) and ValueError (pydantic model_validator compatibility). When raised inside a pydantic @model_validator, pydantic wraps it in ValidationError (which itself inherits from ValueError), satisfying 'raises ValueError' test expectations.

Example
DuckDBConfig(database=":memory:", pool_size=2)
# raises pydantic.ValidationError (wraps ConfigurationError)

PoolhouseError

Bases: Exception

Base exception for all adbc-poolhouse errors.

All library-specific exceptions inherit from this class. Consumers can use except PoolhouseError to catch any library error.
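The dual inheritance described above can be demonstrated with stand-in classes (the real ones live in adbc_poolhouse; these minimal versions only show why both bases matter):

```python
# Stand-ins mirroring the documented hierarchy.
class PoolhouseError(Exception): ...
class ConfigurationError(PoolhouseError, ValueError): ...

# Catchable as a library error...
try:
    raise ConfigurationError("bad config")
except PoolhouseError:
    print("caught as library error")

# ...and as a plain ValueError, which is what pydantic validators expect.
try:
    raise ConfigurationError("bad config")
except ValueError:
    print("caught as ValueError too")
```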

FlightSQLConfig

Bases: BaseWarehouseConfig

FlightSQL warehouse configuration.

Connects to any Apache Arrow Flight SQL server (e.g. Dremio, InfluxDB, DuckDB server mode, custom Flight SQL implementations).

Pool tuning fields are inherited and loaded from FLIGHTSQL_* env vars.

uri class-attribute instance-attribute

uri: str | None = None

gRPC endpoint URI. Env: FLIGHTSQL_URI. Format: grpc://host:port (plaintext) or grpc+tls://host:port (TLS).

username class-attribute instance-attribute

username: str | None = None

Username for HTTP-style basic auth. Env: FLIGHTSQL_USERNAME.

password class-attribute instance-attribute

password: SecretStr | None = None

Password for HTTP-style basic auth. Env: FLIGHTSQL_PASSWORD.

authorization_header class-attribute instance-attribute

authorization_header: SecretStr | None = None

Custom authorization header value (overrides username/password if set). Env: FLIGHTSQL_AUTHORIZATION_HEADER.

mtls_cert_chain class-attribute instance-attribute

mtls_cert_chain: str | None = None

mTLS certificate chain (PEM). Env: FLIGHTSQL_MTLS_CERT_CHAIN.

mtls_private_key class-attribute instance-attribute

mtls_private_key: SecretStr | None = None

mTLS private key (PEM). Env: FLIGHTSQL_MTLS_PRIVATE_KEY.

tls_root_certs class-attribute instance-attribute

tls_root_certs: str | None = None

Root CA certificate(s) in PEM format. Env: FLIGHTSQL_TLS_ROOT_CERTS.

tls_skip_verify class-attribute instance-attribute

tls_skip_verify: bool = False

Disable TLS certificate verification. Env: FLIGHTSQL_TLS_SKIP_VERIFY.

tls_override_hostname class-attribute instance-attribute

tls_override_hostname: str | None = None

Override the TLS hostname for SNI. Env: FLIGHTSQL_TLS_OVERRIDE_HOSTNAME.

connect_timeout class-attribute instance-attribute

connect_timeout: float | None = None

Connection timeout in seconds. Env: FLIGHTSQL_CONNECT_TIMEOUT.

query_timeout class-attribute instance-attribute

query_timeout: float | None = None

Query execution timeout in seconds. Env: FLIGHTSQL_QUERY_TIMEOUT.

fetch_timeout class-attribute instance-attribute

fetch_timeout: float | None = None

Result fetch timeout in seconds. Env: FLIGHTSQL_FETCH_TIMEOUT.

update_timeout class-attribute instance-attribute

update_timeout: float | None = None

DML update timeout in seconds. Env: FLIGHTSQL_UPDATE_TIMEOUT.

authority class-attribute instance-attribute

authority: str | None = None

Override gRPC authority header. Env: FLIGHTSQL_AUTHORITY.

max_msg_size class-attribute instance-attribute

max_msg_size: int | None = None

Maximum gRPC message size in bytes (driver default: 16 MiB). Env: FLIGHTSQL_MAX_MSG_SIZE.

with_cookie_middleware class-attribute instance-attribute

with_cookie_middleware: bool = False

Enable gRPC cookie middleware (required by some servers for session management). Env: FLIGHTSQL_WITH_COOKIE_MIDDLEWARE.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Maps FlightSQL config fields to their adbc.flight.sql.* key equivalents. Boolean defaults (tls_skip_verify, with_cookie_middleware) are always included as 'true'/ 'false' strings. Optional fields are omitted when None.

Returns:

Type Description
dict[str, str]

Dict of ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_flightsql_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Maps FlightSQL config fields to their ``adbc.flight.sql.*`` key
    equivalents. Boolean defaults (``tls_skip_verify``,
    ``with_cookie_middleware``) are always included as ``'true'``/
    ``'false'`` strings. Optional fields are omitted when ``None``.

    Returns:
        Dict of ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    kwargs: dict[str, str] = {}

    # Connection endpoint
    if self.uri is not None:
        kwargs["uri"] = self.uri

    # Authentication
    if self.username is not None:
        kwargs["username"] = self.username
    if self.password is not None:
        kwargs["password"] = self.password.get_secret_value()  # pragma: allowlist secret
    if self.authorization_header is not None:
        kwargs["adbc.flight.sql.authorization_header"] = (
            self.authorization_header.get_secret_value()
        )

    # mTLS
    if self.mtls_cert_chain is not None:
        kwargs["adbc.flight.sql.client_option.mtls_cert_chain"] = self.mtls_cert_chain
    if self.mtls_private_key is not None:
        kwargs["adbc.flight.sql.client_option.mtls_private_key"] = (
            self.mtls_private_key.get_secret_value()
        )

    # TLS
    if self.tls_root_certs is not None:
        kwargs["adbc.flight.sql.client_option.tls_root_certs"] = self.tls_root_certs
    kwargs["adbc.flight.sql.client_option.tls_skip_verify"] = str(self.tls_skip_verify).lower()
    if self.tls_override_hostname is not None:
        kwargs["adbc.flight.sql.client_option.tls_override_hostname"] = (
            self.tls_override_hostname
        )

    # Timeouts
    if self.connect_timeout is not None:
        kwargs["adbc.flight.sql.rpc.timeout_seconds.connect"] = str(self.connect_timeout)
    if self.query_timeout is not None:
        kwargs["adbc.flight.sql.rpc.timeout_seconds.query"] = str(self.query_timeout)
    if self.fetch_timeout is not None:
        kwargs["adbc.flight.sql.rpc.timeout_seconds.fetch"] = str(self.fetch_timeout)
    if self.update_timeout is not None:
        kwargs["adbc.flight.sql.rpc.timeout_seconds.update"] = str(self.update_timeout)

    # gRPC options
    if self.authority is not None:
        kwargs["adbc.flight.sql.client_option.authority"] = self.authority
    if self.max_msg_size is not None:
        kwargs["adbc.flight.sql.client_option.with_max_msg_size"] = str(self.max_msg_size)
    kwargs["adbc.flight.sql.rpc.with_cookie_middleware"] = str(
        self.with_cookie_middleware
    ).lower()

    return kwargs
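The always-included boolean handling is worth isolating, since it differs from every other field. A standalone sketch of just that part of the source above:

```python
# Sketch of the documented boolean handling: these two flags are always
# serialized as lowercase 'true'/'false' strings, whereas the optional
# fields are simply omitted when None.
def flightsql_flags(tls_skip_verify: bool = False,
                    with_cookie_middleware: bool = False) -> dict[str, str]:
    return {
        "adbc.flight.sql.client_option.tls_skip_verify": str(tls_skip_verify).lower(),
        "adbc.flight.sql.rpc.with_cookie_middleware": str(with_cookie_middleware).lower(),
    }

print(flightsql_flags(tls_skip_verify=True))
# {'adbc.flight.sql.client_option.tls_skip_verify': 'true',
#  'adbc.flight.sql.rpc.with_cookie_middleware': 'false'}
```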

MSSQLConfig

Bases: BaseWarehouseConfig

Microsoft SQL Server / Azure SQL / Azure Fabric / Synapse Analytics configuration.

Uses the Columnar ADBC MSSQL driver (Foundry-distributed, not on PyPI). One class covers all Microsoft SQL variants via optional variant-specific fields:

  • SQL Server: use host + port + instance (or URI).
  • Azure SQL: use host + port, optionally fedauth for Entra ID / Azure AD auth.
  • Azure Fabric / Synapse Analytics: use fedauth for managed identity or service principal authentication.

Pool tuning fields are inherited and loaded from MSSQL_* env vars.

Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.

uri class-attribute instance-attribute

uri: str | None = None

Connection URI. Format: mssql://user:pass@host[:port][/instance][?params] Also accepts the sqlserver:// scheme. Env: MSSQL_URI.

host class-attribute instance-attribute

host: str | None = None

Hostname or IP address. Alternative to URI-based connection. Env: MSSQL_HOST.

port class-attribute instance-attribute

port: int | None = None

Port number. Default: 1433. Env: MSSQL_PORT.

instance class-attribute instance-attribute

instance: str | None = None

SQL Server named instance (e.g. 'SQLExpress'). Env: MSSQL_INSTANCE.

user class-attribute instance-attribute

user: str | None = None

SQL auth username. Env: MSSQL_USER.

password class-attribute instance-attribute

password: SecretStr | None = None

SQL auth password. Env: MSSQL_PASSWORD.

database class-attribute instance-attribute

database: str | None = None

Target database name. Env: MSSQL_DATABASE.

trust_server_certificate class-attribute instance-attribute

trust_server_certificate: bool = False

Accept self-signed TLS certificates. Enable for local development. Env: MSSQL_TRUST_SERVER_CERTIFICATE.

connection_timeout class-attribute instance-attribute

connection_timeout: int | None = None

Connection timeout in seconds. Env: MSSQL_CONNECTION_TIMEOUT.

fedauth class-attribute instance-attribute

fedauth: str | None = None

Federated authentication method for Entra ID / Azure AD. Used for Azure SQL, Azure Fabric, and Synapse Analytics. Values: 'ActiveDirectoryPassword', 'ActiveDirectoryMsi', 'ActiveDirectoryServicePrincipal', 'ActiveDirectoryInteractive'. Env: MSSQL_FEDAUTH.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Supports two modes:

  • URI mode (uri set): returns {uri: ...}.
  • Decomposed mode: maps individual fields to their ADBC key equivalents. trust_server_certificate is always included as a 'true'/'false' string.

Returns:

Type Description
dict[str, str]

Dict of ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_mssql_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Supports two modes:

    - URI mode (``uri`` set): returns ``{uri: ...}``.
    - Decomposed mode: maps individual fields to their ADBC key
      equivalents. ``trust_server_certificate`` is always included
      as a ``'true'``/``'false'`` string.

    Returns:
        Dict of ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    kwargs: dict[str, str] = {}

    # URI-first: if uri is set, use it as the primary connection spec
    if self.uri is not None:
        kwargs["uri"] = self.uri
        return kwargs

    # Decomposed fields (include only if not None)
    if self.host is not None:
        kwargs["host"] = self.host
    if self.port is not None:
        kwargs["port"] = str(self.port)
    if self.instance is not None:
        kwargs["instance"] = self.instance
    if self.user is not None:
        kwargs["username"] = self.user
    if self.password is not None:
        kwargs["password"] = self.password.get_secret_value()  # pragma: allowlist secret
    if self.database is not None:
        kwargs["database"] = self.database

    # Boolean flag (always include)
    kwargs["trustServerCertificate"] = str(self.trust_server_certificate).lower()

    if self.connection_timeout is not None:
        kwargs["connectionTimeout"] = str(self.connection_timeout)
    if self.fedauth is not None:
        kwargs["fedauth"] = self.fedauth

    return kwargs

MySQLConfig

Bases: BaseWarehouseConfig

MySQL warehouse configuration.

Uses the Columnar ADBC MySQL driver (Foundry-distributed, not on PyPI). Install via the ADBC Driver Foundry (see DEVELOP.md for setup instructions).

Supports two connection modes:

  • URI mode: set uri with the full MySQL connection string.
  • Decomposed mode: set host, user, and database together. password is optional — MySQL supports passwordless connections. port defaults to 3306.

At least one mode must be fully specified — construction raises ConfigurationError if neither is provided.

Pool tuning fields are inherited and loaded from MYSQL_* env vars.

Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.

uri class-attribute instance-attribute

uri: SecretStr | None = None

Full MySQL connection URI. May contain credentials — stored as SecretStr. Env: MYSQL_URI.

host class-attribute instance-attribute

host: str | None = None

MySQL hostname. Alternative to embedding host in URI. Env: MYSQL_HOST.

port class-attribute instance-attribute

port: int = 3306

MySQL port. Default: 3306. Env: MYSQL_PORT.

user class-attribute instance-attribute

user: str | None = None

MySQL username. Env: MYSQL_USER.

password class-attribute instance-attribute

password: SecretStr | None = None

MySQL password. Optional — MySQL supports passwordless connections. Env: MYSQL_PASSWORD.

database class-attribute instance-attribute

database: str | None = None

MySQL database name. Env: MYSQL_DATABASE.

check_connection_spec

check_connection_spec() -> Self

Raise ConfigurationError if neither uri nor all minimum decomposed fields are set.

Source code in src/adbc_poolhouse/_mysql_config.py
@model_validator(mode="after")
def check_connection_spec(self) -> Self:
    """Raise ConfigurationError if neither uri nor all minimum decomposed fields are set."""
    has_uri = self.uri is not None
    has_decomposed = (
        self.host is not None and self.user is not None and self.database is not None
    )
    if not has_uri and not has_decomposed:
        raise ConfigurationError(
            "MySQLConfig requires either 'uri' or all of 'host', 'user', "
            "and 'database'. Got none of these."
        )
    return self

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert MySQL config fields to ADBC driver kwargs.

Supports two modes:

  • URI mode (uri set): extracts SecretStr value and returns {"uri": ...}.
  • Decomposed mode: builds a Go DSN from user, password, host, port, and database. Password is URL-encoded via urllib.parse.quote with safe="".

Returns:

Type Description
dict[str, str]

ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_mysql_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert MySQL config fields to ADBC driver kwargs.

    Supports two modes:

    - **URI mode** (``uri`` set): extracts ``SecretStr`` value and returns
      ``{"uri": ...}``.
    - **Decomposed mode**: builds a Go DSN from ``user``, ``password``,
      ``host``, ``port``, and ``database``. Password is URL-encoded via
      `urllib.parse.quote` with ``safe=""``.

    Returns:
        ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    if self.uri is not None:
        return {"uri": self.uri.get_secret_value()}

    # Decomposed mode -- model_validator guarantees host, user, database.
    assert self.host is not None
    assert self.user is not None
    assert self.database is not None

    if self.password is not None:
        encoded_pass = quote(self.password.get_secret_value(), safe="")
        uri = f"{self.user}:{encoded_pass}@tcp({self.host}:{self.port})/{self.database}"
    else:
        uri = f"{self.user}@tcp({self.host}:{self.port})/{self.database}"

    return {"uri": uri}

PostgreSQLConfig

Bases: BaseWarehouseConfig

PostgreSQL warehouse configuration.

The PostgreSQL ADBC driver wraps libpq. Specify the connection either as a full URI or via individual fields. If neither is provided, libpq falls back to its own environment variables (PGHOST, PGUSER, etc.).

Pool tuning fields are inherited and loaded from POSTGRESQL_* env vars.

Example
PostgreSQLConfig(uri="postgresql://me:s3cret@host/mydb")
PostgreSQLConfig(host="db.example.com", user="me", database="mydb")

uri class-attribute instance-attribute

uri: str | None = None

libpq connection URI. Takes precedence over individual fields. Format: postgresql://[user[:password]@][host][:port][/dbname][?params] Env: POSTGRESQL_URI.

host class-attribute instance-attribute

host: str | None = None

Database hostname or IP address. Env: POSTGRESQL_HOST.

port class-attribute instance-attribute

port: int | None = None

Database port. Defaults to 5432 when omitted. Env: POSTGRESQL_PORT.

user class-attribute instance-attribute

user: str | None = None

Database username. Env: POSTGRESQL_USER.

password class-attribute instance-attribute

password: SecretStr | None = None

Database password. Env: POSTGRESQL_PASSWORD.

database class-attribute instance-attribute

database: str | None = None

Database name. Env: POSTGRESQL_DATABASE.

sslmode class-attribute instance-attribute

sslmode: str | None = None

SSL mode. Accepted values: disable, allow, prefer, require, verify-ca, verify-full. Env: POSTGRESQL_SSLMODE.

use_copy class-attribute instance-attribute

use_copy: bool = True

Use PostgreSQL COPY protocol for bulk query execution (driver default: True). Disable if COPY triggers permission errors. Env: POSTGRESQL_USE_COPY.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert PostgreSQL config fields to ADBC driver kwargs.

Supports three modes:

  • URI mode (uri set): passed directly as {"uri": ...}.
  • Decomposed mode: builds a libpq URI from host, port, user, password, database, and sslmode. Password is URL-encoded via urllib.parse.quote with safe="".
  • Empty mode: returns {} so libpq resolves from env vars.

Returns:

Type Description
dict[str, str]

ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_postgresql_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert PostgreSQL config fields to ADBC driver kwargs.

    Supports three modes:

    - **URI mode** (``uri`` set): passed directly as ``{"uri": ...}``.
    - **Decomposed mode**: builds a libpq URI from ``host``, ``port``,
      ``user``, ``password``, ``database``, and ``sslmode``. Password is
      URL-encoded via `urllib.parse.quote` with ``safe=""``.
    - **Empty mode**: returns ``{}`` so libpq resolves from env vars.

    Returns:
        ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    if self.uri is not None:
        return {"uri": self.uri}

    # Decomposed mode -- build URI only if at least one field is set.
    has_fields = any([self.host, self.user, self.password, self.database, self.sslmode])
    if not has_fields:
        return {}

    uri = "postgresql://"

    if self.user is not None:
        uri += quote(self.user, safe="")
        if self.password is not None:
            uri += ":" + quote(self.password.get_secret_value(), safe="")
        uri += "@"

    if self.host is not None:
        uri += self.host

    if self.port is not None:
        uri += f":{self.port}"

    if self.database is not None:
        uri += f"/{self.database}"

    if self.sslmode is not None:
        uri += f"?sslmode={self.sslmode}"

    return {"uri": uri}
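
The piecewise URI assembly above can be sketched standalone (hypothetical field values; only the fully-specified case is shown, but each segment is optional exactly as in the code):

```python
from urllib.parse import quote

# Hypothetical decomposed-mode fields.
user, password = "me", "s3cret!"
host, port, database, sslmode = "db.example.com", 5432, "mydb", "require"

uri = "postgresql://"
uri += quote(user, safe="") + ":" + quote(password, safe="") + "@"
uri += f"{host}:{port}/{database}?sslmode={sslmode}"
print(uri)  # postgresql://me:s3cret%21@db.example.com:5432/mydb?sslmode=require
```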

RedshiftConfig

Bases: BaseWarehouseConfig

Redshift warehouse configuration.

Uses the Columnar ADBC Redshift driver (Foundry-distributed, not on PyPI). Supports provisioned clusters (standard and IAM auth) and Redshift Serverless.

Pool tuning fields are inherited and loaded from REDSHIFT_* env vars.

Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.

uri class-attribute instance-attribute

uri: str | None = None

Connection URI. Format: redshift://[user:password@]host[:port]/dbname[?params]. Use redshift:///dbname for automatic endpoint discovery. Env: REDSHIFT_URI.

host class-attribute instance-attribute

host: str | None = None

Redshift cluster hostname. Alternative to URI. Env: REDSHIFT_HOST.

port class-attribute instance-attribute

port: int | None = None

Port number. Default: 5439. Env: REDSHIFT_PORT.

user class-attribute instance-attribute

user: str | None = None

Database username. Env: REDSHIFT_USER.

password class-attribute instance-attribute

password: SecretStr | None = None

Database password. Env: REDSHIFT_PASSWORD.

database class-attribute instance-attribute

database: str | None = None

Target database name. Env: REDSHIFT_DATABASE.

cluster_type class-attribute instance-attribute

cluster_type: str | None = None

Cluster variant: 'redshift' (standard), 'redshift-iam', or 'redshift-serverless'. Env: REDSHIFT_CLUSTER_TYPE.

cluster_identifier class-attribute instance-attribute

cluster_identifier: str | None = None

Provisioned cluster identifier (required for IAM auth). Env: REDSHIFT_CLUSTER_IDENTIFIER.

workgroup_name class-attribute instance-attribute

workgroup_name: str | None = None

Serverless workgroup name. Env: REDSHIFT_WORKGROUP_NAME.

aws_region class-attribute instance-attribute

aws_region: str | None = None

AWS region (e.g. 'us-west-2'). Env: REDSHIFT_AWS_REGION.

aws_access_key_id class-attribute instance-attribute

aws_access_key_id: str | None = None

AWS IAM access key ID. Env: REDSHIFT_AWS_ACCESS_KEY_ID.

aws_secret_access_key class-attribute instance-attribute

aws_secret_access_key: SecretStr | None = None

AWS IAM secret access key. Env: REDSHIFT_AWS_SECRET_ACCESS_KEY.

sslmode class-attribute instance-attribute

sslmode: str | None = None

SSL mode (e.g. 'require', 'verify-full'). Env: REDSHIFT_SSLMODE.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert Redshift config fields to ADBC driver kwargs.

Supports two connection modes:

  • URI mode (uri set): passed directly as {"uri": ...}.
  • Decomposed mode: builds a redshift:// URI from host, port, user, password, database, and sslmode. Password is URL-encoded via urllib.parse.quote with safe="".

IAM and cluster fields (cluster_type, cluster_identifier, workgroup_name, aws_region, aws_access_key_id, aws_secret_access_key) are always translated as separate driver kwargs when set, regardless of connection mode.

Returns:

Type Description
dict[str, str]

ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_redshift_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert Redshift config fields to ADBC driver kwargs.

    Supports two connection modes:

    - **URI mode** (``uri`` set): passed directly as ``{"uri": ...}``.
    - **Decomposed mode**: builds a ``redshift://`` URI from ``host``,
      ``port``, ``user``, ``password``, ``database``, and ``sslmode``.
      Password is URL-encoded via `urllib.parse.quote` with
      ``safe=""``.

    IAM and cluster fields (``cluster_type``, ``cluster_identifier``,
    ``workgroup_name``, ``aws_region``, ``aws_access_key_id``,
    ``aws_secret_access_key``) are always translated as separate driver
    kwargs when set, regardless of connection mode.

    Returns:
        ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    kwargs: dict[str, str] = {}

    # URI: explicit passthrough or build from individual fields
    if self.uri is not None:
        kwargs["uri"] = self.uri
    elif any([self.host, self.user, self.password, self.database, self.sslmode]):
        kwargs["uri"] = self._build_uri()

    # IAM/cluster params
    if self.cluster_type is not None:
        kwargs["redshift.cluster_type"] = self.cluster_type
    if self.cluster_identifier is not None:
        kwargs["redshift.cluster_identifier"] = self.cluster_identifier
    if self.workgroup_name is not None:
        kwargs["redshift.workgroup_name"] = self.workgroup_name
    if self.aws_region is not None:
        kwargs["aws_region"] = self.aws_region
    if self.aws_access_key_id is not None:
        kwargs["aws_access_key_id"] = self.aws_access_key_id
    if self.aws_secret_access_key is not None:
        kwargs["aws_secret_access_key"] = self.aws_secret_access_key.get_secret_value()

    return kwargs
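
For illustration, a hypothetical Redshift Serverless setup would produce kwargs along these lines. The key names come from the mapping above; every value is invented:

```python
# Hypothetical serverless config: IAM/cluster fields become separate kwargs
# alongside the connection URI, regardless of connection mode.
kwargs = {
    "uri": "redshift://analytics.example.com:5439/sales",
    "redshift.cluster_type": "redshift-serverless",
    "redshift.workgroup_name": "analytics-wg",
    "aws_region": "us-west-2",
}
print(sorted(kwargs))
```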

SnowflakeConfig

Bases: BaseWarehouseConfig

Snowflake warehouse configuration.

Supports all authentication methods provided by adbc-driver-snowflake: password, JWT (private_key_path / private_key_pem), external browser, OAuth, MFA, Okta, PAT, and workload identity federation (WIF).

Pool tuning fields (pool_size, max_overflow, timeout, recycle) are inherited and loaded from SNOWFLAKE_* environment variables.

Example
SnowflakeConfig(account="myorg-myaccount", user="me", password="...")
SnowflakeConfig(account="myorg", user="me", private_key_path=Path("/keys/rsa.p8"))

account instance-attribute

account: str

Snowflake account identifier (e.g. 'myorg-myaccount'). Env: SNOWFLAKE_ACCOUNT.

user class-attribute instance-attribute

user: str | None = None

Username. Required for most auth methods. Env: SNOWFLAKE_USER.

password class-attribute instance-attribute

password: SecretStr | None = None

Password for basic auth. Env: SNOWFLAKE_PASSWORD.

auth_type class-attribute instance-attribute

auth_type: str | None = None

Auth method: auth_jwt, auth_ext_browser, auth_oauth, auth_mfa, auth_okta, auth_pat, auth_wif. Env: SNOWFLAKE_AUTH_TYPE.

private_key_path class-attribute instance-attribute

private_key_path: Path | None = None

File path to a PKCS1 or PKCS8 private key file. Mutually exclusive with private_key_pem. Env: SNOWFLAKE_PRIVATE_KEY_PATH.

private_key_pem class-attribute instance-attribute

private_key_pem: SecretStr | None = None

Inline PEM-encoded PKCS8 private key (encrypted or unencrypted). Mutually exclusive with private_key_path. Env: SNOWFLAKE_PRIVATE_KEY_PEM.

private_key_passphrase class-attribute instance-attribute

private_key_passphrase: SecretStr | None = None

Passphrase to decrypt an encrypted PKCS8 key. Env: SNOWFLAKE_PRIVATE_KEY_PASSPHRASE.

jwt_expire_timeout class-attribute instance-attribute

jwt_expire_timeout: str | None = None

JWT expiry duration (e.g. '300ms', '1m30s'). Env: SNOWFLAKE_JWT_EXPIRE_TIMEOUT.

oauth_token class-attribute instance-attribute

oauth_token: SecretStr | None = None

Bearer token for auth_oauth. Env: SNOWFLAKE_OAUTH_TOKEN.

okta_url class-attribute instance-attribute

okta_url: str | None = None

Okta server URL required for auth_okta. Env: SNOWFLAKE_OKTA_URL.

identity_provider class-attribute instance-attribute

identity_provider: str | None = None

Identity provider for auth_wif. Env: SNOWFLAKE_IDENTITY_PROVIDER.

database class-attribute instance-attribute

database: str | None = None

Default database. Env: SNOWFLAKE_DATABASE.

schema_ class-attribute instance-attribute

schema_: str | None = Field(
    default=None, validation_alias="schema", alias="schema"
)

Default schema. Python attribute is schema_ to avoid Pydantic field-name conflicts. Env: SNOWFLAKE_SCHEMA.

warehouse class-attribute instance-attribute

warehouse: str | None = None

Snowflake virtual warehouse. Env: SNOWFLAKE_WAREHOUSE.

role class-attribute instance-attribute

role: str | None = None

Snowflake role. Env: SNOWFLAKE_ROLE.

region class-attribute instance-attribute

region: str | None = None

Snowflake region (if not embedded in account). Env: SNOWFLAKE_REGION.

host class-attribute instance-attribute

host: str | None = None

Explicit hostname (alternative to account-derived URI). Env: SNOWFLAKE_HOST.

port class-attribute instance-attribute

port: int | None = None

Connection port. Env: SNOWFLAKE_PORT.

protocol class-attribute instance-attribute

protocol: str | None = None

Protocol: 'http' or 'https'. Env: SNOWFLAKE_PROTOCOL.

login_timeout class-attribute instance-attribute

login_timeout: str | None = None

Login retry timeout duration string. Env: SNOWFLAKE_LOGIN_TIMEOUT.

request_timeout class-attribute instance-attribute

request_timeout: str | None = None

Request retry timeout duration string. Env: SNOWFLAKE_REQUEST_TIMEOUT.

client_timeout class-attribute instance-attribute

client_timeout: str | None = None

Network roundtrip timeout duration string. Env: SNOWFLAKE_CLIENT_TIMEOUT.

tls_skip_verify class-attribute instance-attribute

tls_skip_verify: bool = False

Disable TLS certificate verification. Env: SNOWFLAKE_TLS_SKIP_VERIFY.

ocsp_fail_open_mode class-attribute instance-attribute

ocsp_fail_open_mode: bool = True

OCSP fail-open mode (True = allow connection on OCSP errors). Env: SNOWFLAKE_OCSP_FAIL_OPEN_MODE.

keep_session_alive class-attribute instance-attribute

keep_session_alive: bool = False

Prevent session expiry during long operations. Env: SNOWFLAKE_KEEP_SESSION_ALIVE.

app_name class-attribute instance-attribute

app_name: str | None = None

Application identifier sent to Snowflake. Env: SNOWFLAKE_APP_NAME.

disable_telemetry class-attribute instance-attribute

disable_telemetry: bool = False

Disable Snowflake usage telemetry. Env: SNOWFLAKE_DISABLE_TELEMETRY.

cache_mfa_token class-attribute instance-attribute

cache_mfa_token: bool = False

Cache MFA token for subsequent connections. Env: SNOWFLAKE_CACHE_MFA_TOKEN.

store_temp_creds class-attribute instance-attribute

store_temp_creds: bool = False

Cache ID token for SSO. Env: SNOWFLAKE_STORE_TEMP_CREDS.

check_private_key_exclusion

check_private_key_exclusion() -> Self

Raise ValidationError if both private_key_path and private_key_pem are set.

Source code in src/adbc_poolhouse/_snowflake_config.py
@model_validator(mode="after")
def check_private_key_exclusion(self) -> Self:
    """Raise ValidationError if both private_key_path and private_key_pem are set."""
    if self.private_key_path is not None and self.private_key_pem is not None:
        raise ValueError(
            "Provide only one of private_key_path (path to a PKCS1/PKCS8 "
            "private key file) or private_key_pem (inline PEM-encoded key "
            "content), not both. Use private_key_path for a key file, or "
            "private_key_pem for inline PEM content."
        )
    return self

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Returns a dict[str, str] suitable for passing as db_kwargs to adbc_driver_manager.dbapi.connect(). All values are strings; None fields are omitted. Boolean fields are always included as 'true'/'false' strings.

Key names follow adbc_driver_snowflake DatabaseOptions and AuthType enums. 'username' and 'password' are plain string keys (not prefixed with 'adbc.snowflake.sql.*').

Source code in src/adbc_poolhouse/_snowflake_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Returns a dict[str, str] suitable for passing as ``db_kwargs`` to
    ``adbc_driver_manager.dbapi.connect()``. All values are strings;
    None fields are omitted. Boolean fields are always included as
    ``'true'``/``'false'`` strings.

    Key names follow adbc_driver_snowflake ``DatabaseOptions`` and
    ``AuthType`` enums. ``'username'`` and ``'password'`` are plain
    string keys (not prefixed with ``'adbc.snowflake.sql.*'``).
    """
    kwargs: dict[str, str] = {}

    # --- Identity (always include) ---
    kwargs["adbc.snowflake.sql.account"] = self.account

    # --- Auth (include only if not None) ---
    if self.user is not None:
        kwargs["username"] = self.user
    if self.password is not None:
        kwargs["password"] = self.password.get_secret_value()  # pragma: allowlist secret
    if self.auth_type is not None:
        kwargs["adbc.snowflake.sql.auth_type"] = self.auth_type

    # --- JWT / private key (include only if not None) ---
    if self.private_key_path is not None:
        kwargs["adbc.snowflake.sql.client_option.jwt_private_key"] = str(self.private_key_path)
    if self.private_key_pem is not None:
        kwargs["adbc.snowflake.sql.client_option.jwt_private_key_pkcs8_value"] = (
            self.private_key_pem.get_secret_value()  # pragma: allowlist secret
        )
    if self.private_key_passphrase is not None:
        kwargs["adbc.snowflake.sql.client_option.jwt_private_key_pkcs8_password"] = (
            self.private_key_passphrase.get_secret_value()  # pragma: allowlist secret
        )
    if self.jwt_expire_timeout is not None:
        kwargs["adbc.snowflake.sql.client_option.jwt_expire_timeout"] = self.jwt_expire_timeout

    # --- OAuth / Okta / WIF (include only if not None) ---
    if self.oauth_token is not None:
        kwargs["adbc.snowflake.sql.client_option.auth_token"] = (
            self.oauth_token.get_secret_value()  # pragma: allowlist secret
        )
    if self.okta_url is not None:
        kwargs["adbc.snowflake.sql.client_option.okta_url"] = self.okta_url
    if self.identity_provider is not None:
        kwargs["adbc.snowflake.sql.client_option.identity_provider"] = self.identity_provider

    # --- Session / scope (include only if not None) ---
    if self.database is not None:
        kwargs["adbc.snowflake.sql.db"] = self.database
    if self.schema_ is not None:
        kwargs["adbc.snowflake.sql.schema"] = self.schema_
    if self.warehouse is not None:
        kwargs["adbc.snowflake.sql.warehouse"] = self.warehouse
    if self.role is not None:
        kwargs["adbc.snowflake.sql.role"] = self.role
    if self.region is not None:
        kwargs["adbc.snowflake.sql.region"] = self.region

    # --- Connection (include only if not None) ---
    if self.host is not None:
        kwargs["adbc.snowflake.sql.uri.host"] = self.host
    if self.port is not None:
        kwargs["adbc.snowflake.sql.uri.port"] = str(self.port)
    if self.protocol is not None:
        kwargs["adbc.snowflake.sql.uri.protocol"] = self.protocol

    # --- Timeouts (include only if not None) ---
    if self.login_timeout is not None:
        kwargs["adbc.snowflake.sql.client_option.login_timeout"] = self.login_timeout
    if self.request_timeout is not None:
        kwargs["adbc.snowflake.sql.client_option.request_timeout"] = self.request_timeout
    if self.client_timeout is not None:
        kwargs["adbc.snowflake.sql.client_option.client_timeout"] = self.client_timeout

    # --- Boolean flags (always include) ---
    kwargs["adbc.snowflake.sql.client_option.tls_skip_verify"] = str(
        self.tls_skip_verify
    ).lower()
    kwargs["adbc.snowflake.sql.client_option.ocsp_fail_open_mode"] = str(
        self.ocsp_fail_open_mode
    ).lower()
    kwargs["adbc.snowflake.sql.client_option.keep_session_alive"] = str(
        self.keep_session_alive
    ).lower()
    kwargs["adbc.snowflake.sql.client_option.disable_telemetry"] = str(
        self.disable_telemetry
    ).lower()
    kwargs["adbc.snowflake.sql.client_option.cache_mfa_token"] = str(
        self.cache_mfa_token
    ).lower()
    kwargs["adbc.snowflake.sql.client_option.store_temp_creds"] = str(
        self.store_temp_creds
    ).lower()

    # --- Misc (include only if not None) ---
    if self.app_name is not None:
        kwargs["adbc.snowflake.sql.client_option.app_name"] = self.app_name

    return kwargs
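
The always-included boolean flags reduce to a simple pattern, sketched here standalone (flag names taken from the code above; the values are arbitrary):

```python
# Booleans are always emitted as lowercase 'true'/'false' strings,
# never omitted, under the client_option key prefix.
flags = {"tls_skip_verify": False, "keep_session_alive": True}
kwargs = {
    f"adbc.snowflake.sql.client_option.{name}": str(value).lower()
    for name, value in flags.items()
}
print(kwargs)
```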

SQLiteConfig

Bases: BaseWarehouseConfig

SQLite warehouse configuration.

Covers SQLite ADBC connection parameters. Pool tuning fields (pool_size, max_overflow, timeout, recycle) are inherited from BaseWarehouseConfig and loaded from SQLITE_* environment variables.

Unlike DuckDB, an SQLite in-memory database is shared across all connections in the pool. This means pool_size > 1 with database=':memory:' is almost always unintended (connection state races across a single shared DB), so it is rejected by a validator.

Example
SQLiteConfig(database="/data/warehouse.db", pool_size=5)
SQLiteConfig()  # in-memory, pool_size=1 enforced

database class-attribute instance-attribute

database: str = ':memory:'

File path or ':memory:'. Env: SQLITE_DATABASE.

pool_size class-attribute instance-attribute

pool_size: int = 1

Number of connections in the pool. Default 1 for in-memory SQLite.

SQLite in-memory databases are shared across all connections in the pool — unlike DuckDB, where each connection gets its own isolated empty DB. Use pool_size=1 for ':memory:', or set database to a file path if you need pool_size > 1. Setting pool_size > 1 with database=':memory:' raises ValidationError. Env: SQLITE_POOL_SIZE.
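
The sharing hazard can be reproduced with the stdlib sqlite3 module using a shared-cache in-memory database — an analogy for the pooled-driver behavior, not the ADBC driver itself:

```python
import sqlite3

# Two "pool members" attached to the same shared in-memory database.
a = sqlite3.connect("file::memory:?cache=shared", uri=True)
b = sqlite3.connect("file::memory:?cache=shared", uri=True)

a.execute("CREATE TABLE t (x INTEGER)")
a.execute("INSERT INTO t VALUES (1)")
a.commit()

# Connection b sees a's committed table: state is shared, not isolated.
print(b.execute("SELECT x FROM t").fetchone())  # (1,)
a.close()
b.close()
```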

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Returns:

Type Description
dict[str, str]

Dict with a single 'uri' key set to the database path (or ':memory:').

Source code in src/adbc_poolhouse/_sqlite_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Returns:
        Dict with a single ``'uri'`` key set to the database path
        (or ``':memory:'``).
    """
    return {"uri": self.database}

TrinoConfig

Bases: BaseWarehouseConfig

Trino warehouse configuration.

Uses the Columnar ADBC Trino driver (Foundry-distributed, not on PyPI). Supports URI-based or decomposed field connection specification.

Pool tuning fields are inherited and loaded from TRINO_* env vars.

Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.

uri class-attribute instance-attribute

uri: str | None = None

Connection URI. Format: trino://[user[:password]@]host[:port][/catalog[/schema]][?params]. Env: TRINO_URI.

host class-attribute instance-attribute

host: str | None = None

Trino coordinator hostname. Alternative to URI. Env: TRINO_HOST.

port class-attribute instance-attribute

port: int | None = None

Trino coordinator port. Defaults: 8080 (HTTP), 8443 (HTTPS). Env: TRINO_PORT.

user class-attribute instance-attribute

user: str | None = None

Username. Env: TRINO_USER.

password class-attribute instance-attribute

password: SecretStr | None = None

Password (HTTPS connections only). Env: TRINO_PASSWORD.

catalog class-attribute instance-attribute

catalog: str | None = None

Default catalog. Env: TRINO_CATALOG.

schema_ class-attribute instance-attribute

schema_: str | None = Field(
    default=None, validation_alias="schema", alias="schema"
)

Default schema. Python attribute is schema_ to avoid Pydantic conflicts. Env: TRINO_SCHEMA.

ssl class-attribute instance-attribute

ssl: bool = True

Use HTTPS. Disable for local development clusters. Env: TRINO_SSL.

ssl_verify class-attribute instance-attribute

ssl_verify: bool = True

Verify SSL certificate. Env: TRINO_SSL_VERIFY.

source class-attribute instance-attribute

source: str | None = None

Application identifier sent to Trino coordinator. Env: TRINO_SOURCE.

to_adbc_kwargs

to_adbc_kwargs() -> dict[str, str]

Convert config to ADBC driver connection kwargs.

Supports two modes:

  • URI mode (uri set): returns {"uri": ...}.
  • Decomposed mode: maps individual fields to their ADBC key equivalents. Boolean defaults (ssl, ssl_verify) are always included as 'true'/'false' strings.

Returns:

Type Description
dict[str, str]

Dict of ADBC driver kwargs for adbc_driver_manager.dbapi.connect().

Source code in src/adbc_poolhouse/_trino_config.py
def to_adbc_kwargs(self) -> dict[str, str]:
    """
    Convert config to ADBC driver connection kwargs.

    Supports two modes:

    - URI mode (``uri`` set): returns ``{uri: ...}``.
    - Decomposed mode: maps individual fields to their ADBC key
      equivalents. Boolean defaults (``ssl``, ``ssl_verify``) are
      always included as ``'true'``/``'false'`` strings.

    Returns:
        Dict of ADBC driver kwargs for ``adbc_driver_manager.dbapi.connect()``.
    """
    kwargs: dict[str, str] = {}

    # URI-first: if uri is set, use it as the primary connection spec
    if self.uri is not None:
        kwargs["uri"] = self.uri
        return kwargs

    # Decomposed fields (include only if not None)
    if self.host is not None:
        kwargs["host"] = self.host
    if self.port is not None:
        kwargs["port"] = str(self.port)
    if self.user is not None:
        kwargs["username"] = self.user
    if self.password is not None:
        kwargs["password"] = self.password.get_secret_value()  # pragma: allowlist secret
    if self.catalog is not None:
        kwargs["catalog"] = self.catalog
    if self.schema_ is not None:
        kwargs["schema"] = self.schema_

    # SSL fields (bool -> 'true'/'false' strings, always included)
    kwargs["ssl"] = str(self.ssl).lower()
    kwargs["ssl_verify"] = str(self.ssl_verify).lower()

    if self.source is not None:
        kwargs["source"] = self.source

    return kwargs
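
A minimal sketch of the decomposed-mode mapping (hypothetical values; note the user-to-username key rename and the always-included lowercase booleans):

```python
# Hypothetical decomposed Trino settings and the kwargs they map to.
host, port, user, catalog = "trino.internal", 8443, "alice", "hive"
kwargs = {
    "host": host,
    "port": str(port),   # ints are serialized as strings
    "username": user,    # config field 'user' maps to driver key 'username'
    "catalog": catalog,
    "ssl": "true",       # booleans always included as 'true'/'false'
    "ssl_verify": "true",
}
print(kwargs["port"], kwargs["username"])  # 8443 alice
```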

close_pool

close_pool(pool: QueuePool) -> None

Dispose a pool and close its underlying ADBC source connection.

Replaces the two-step pattern pool.dispose() followed by pool._adbc_source.close(). Always call this instead of calling pool.dispose() directly to avoid leaving the ADBC source connection open.

Parameters:

Name Type Description Default
pool QueuePool

A pool returned by create_pool.

required
Example
from adbc_poolhouse import DuckDBConfig, create_pool, close_pool

pool = create_pool(DuckDBConfig(database="/tmp/wh.db"))
close_pool(pool)
Source code in src/adbc_poolhouse/_pool_factory.py
def close_pool(pool: sqlalchemy.pool.QueuePool) -> None:
    """
    Dispose a pool and close its underlying ADBC source connection.

    Replaces the two-step pattern ``pool.dispose()`` followed by
    ``pool._adbc_source.close()``. Always call this instead of calling
    ``pool.dispose()`` directly to avoid leaving the ADBC source connection open.

    Args:
        pool: A pool returned by `create_pool`.

    Example:
        ```python
        from adbc_poolhouse import DuckDBConfig, create_pool, close_pool

        pool = create_pool(DuckDBConfig(database="/tmp/wh.db"))
        close_pool(pool)
        ```
    """
    pool.dispose()
    pool._adbc_source.close()  # type: ignore[attr-defined]

create_pool

create_pool(
    config: WarehouseConfig,
    *,
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> sqlalchemy.pool.QueuePool
create_pool(
    *,
    driver_path: str,
    db_kwargs: dict[str, str],
    entrypoint: str | None = None,
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> sqlalchemy.pool.QueuePool
create_pool(
    *,
    dbapi_module: str,
    db_kwargs: dict[str, str],
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> sqlalchemy.pool.QueuePool

Create a SQLAlchemy QueuePool backed by an ADBC driver.

Three call patterns are supported:

pool = create_pool(DuckDBConfig(...))           # from a config object
pool = create_pool(driver_path="...", ...)       # native ADBC driver
pool = create_pool(dbapi_module="...", ...)      # Python dbapi module

The config path extracts driver information from the config object's methods. The two raw paths accept driver arguments directly, bypassing config objects entirely.

Parameters:

Name Type Description Default
config WarehouseConfig | None

A warehouse config model instance (e.g. DuckDBConfig). Mutually exclusive with driver_path and dbapi_module.

None
driver_path str | None

Path to a native ADBC driver shared library, or a short driver name for manifest-based resolution. Requires db_kwargs. Mutually exclusive with config and dbapi_module.

None
db_kwargs dict[str, str] | None

ADBC connection keyword arguments as dict[str, str]. Required when using driver_path or dbapi_module.

None
entrypoint str | None

ADBC entry-point symbol. Only used with driver_path (e.g. "duckdb_adbc_init" for DuckDB). Default: None.

None
dbapi_module str | None

Dotted module name for a Python package implementing the ADBC dbapi interface (e.g. "adbc_driver_snowflake.dbapi" or a custom "my_driver.dbapi"). Requires db_kwargs. Mutually exclusive with config and driver_path.

None
pool_size int

Number of connections to keep in the pool. Default: 5.

5
max_overflow int

Extra connections allowed above pool_size. Default: 3.

3
timeout int

Seconds to wait for a connection before raising. Default: 30.

30
recycle int

Seconds before a connection is recycled. Default: 3600.

3600
pre_ping bool

Whether to ping connections before checkout. Default: False. Pre-ping does not function on a standalone QueuePool without a SQLAlchemy dialect; recycle is the preferred health mechanism.

False

Returns:

Type Description
QueuePool

A configured sqlalchemy.pool.QueuePool ready for use.

Raises:

Type Description
TypeError

If none of config, driver_path, or dbapi_module is provided, or if both driver_path and dbapi_module are provided.

ImportError

If the required ADBC driver is not installed.

Example

Config path:

from adbc_poolhouse import create_pool, close_pool
from adbc_poolhouse import DuckDBConfig

pool = create_pool(DuckDBConfig(database="/tmp/my.db"))
close_pool(pool)

Raw native driver path:

pool = create_pool(
    driver_path="/path/to/libduckdb.dylib",
    db_kwargs={"path": "/tmp/my.db"},
    entrypoint="duckdb_adbc_init",
)
close_pool(pool)
Source code in src/adbc_poolhouse/_pool_factory.py
def create_pool(
    config: WarehouseConfig | None = None,
    *,
    driver_path: str | None = None,
    db_kwargs: dict[str, str] | None = None,
    entrypoint: str | None = None,
    dbapi_module: str | None = None,
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> sqlalchemy.pool.QueuePool:
    """
    Create a SQLAlchemy QueuePool backed by an ADBC driver.

    Three call patterns are supported:

        pool = create_pool(DuckDBConfig(...))           # from a config object
        pool = create_pool(driver_path="...", ...)       # native ADBC driver
        pool = create_pool(dbapi_module="...", ...)      # Python dbapi module

    The config path extracts driver information from the config object's
    methods. The two raw paths accept driver arguments directly, bypassing
    config objects entirely.

    Args:
        config: A warehouse config model instance (e.g. ``DuckDBConfig``).
            Mutually exclusive with ``driver_path`` and ``dbapi_module``.
        driver_path: Path to a native ADBC driver shared library, or a
            short driver name for manifest-based resolution. Requires
            ``db_kwargs``. Mutually exclusive with ``config`` and
            ``dbapi_module``.
        db_kwargs: ADBC connection keyword arguments as ``dict[str, str]``.
            Required when using ``driver_path`` or ``dbapi_module``.
        entrypoint: ADBC entry-point symbol. Only used with ``driver_path``
            (e.g. ``"duckdb_adbc_init"`` for DuckDB). Default: ``None``.
        dbapi_module: Dotted module name for a Python package implementing
            the ADBC dbapi interface (e.g. ``"adbc_driver_snowflake.dbapi"``
            or a custom ``"my_driver.dbapi"``). Requires ``db_kwargs``.
            Mutually exclusive with ``config`` and ``driver_path``.
        pool_size: Number of connections to keep in the pool. Default: 5.
        max_overflow: Extra connections allowed above pool_size. Default: 3.
        timeout: Seconds to wait for a connection before raising. Default: 30.
        recycle: Seconds before a connection is recycled. Default: 3600.
        pre_ping: Whether to ping connections before checkout. Default: False.
            Pre-ping does not function on a standalone QueuePool without a
            SQLAlchemy dialect; recycle is the preferred health mechanism.

    Returns:
        A configured ``sqlalchemy.pool.QueuePool`` ready for use.

    Raises:
        TypeError: If none of ``config``, ``driver_path``, or ``dbapi_module``
            is provided, or if both ``driver_path`` and ``dbapi_module`` are
            provided.
        ImportError: If the required ADBC driver is not installed.

    Example:
        Config path:

        ```python
        from adbc_poolhouse import create_pool, close_pool
        from adbc_poolhouse import DuckDBConfig

        pool = create_pool(DuckDBConfig(database="/tmp/my.db"))
        close_pool(pool)
        ```

        Raw native driver path:

        ```python
        pool = create_pool(
            driver_path="/path/to/libduckdb.dylib",
            db_kwargs={"path": "/tmp/my.db"},
            entrypoint="duckdb_adbc_init",
        )
        close_pool(pool)
        ```
    """
    return _create_pool_impl(
        config,
        driver_path,
        db_kwargs,
        entrypoint,
        dbapi_module,
        pool_size,
        max_overflow,
        timeout,
        recycle,
        pre_ping,
    )
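The `recycle` behavior documented above (a connection older than `recycle` seconds is closed and replaced) can be sketched with stdlib-only code. `TinyConn` and `checkout` are illustrative names, not adbc_poolhouse internals — the library delegates this to `sqlalchemy.pool.QueuePool`:

```python
import time

class TinyConn:
    """Stand-in connection that records its creation time."""
    def __init__(self):
        self.created = time.monotonic()
        self.closed = False

    def close(self):
        self.closed = True

def checkout(conn, recycle=3600, now=None):
    # If the connection is older than `recycle` seconds, close it
    # and hand back a fresh replacement; otherwise reuse it.
    now = time.monotonic() if now is None else now
    if now - conn.created > recycle:
        conn.close()
        return TinyConn()
    return conn

conn = TinyConn()
fresh = checkout(conn, recycle=3600, now=conn.created + 10)
assert fresh is conn                       # within the window: reused
stale = checkout(conn, recycle=3600, now=conn.created + 4000)
assert stale is not conn and conn.closed   # past the window: replaced
```

This is why `recycle` works as a health mechanism even where `pre_ping` does not: staleness is decided purely from connection age at checkout time.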

managed_pool

managed_pool(
    config: WarehouseConfig,
    *,
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> contextlib.AbstractContextManager[
    sqlalchemy.pool.QueuePool
]
managed_pool(
    *,
    driver_path: str,
    db_kwargs: dict[str, str],
    entrypoint: str | None = None,
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> contextlib.AbstractContextManager[
    sqlalchemy.pool.QueuePool
]
managed_pool(
    *,
    dbapi_module: str,
    db_kwargs: dict[str, str],
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> contextlib.AbstractContextManager[
    sqlalchemy.pool.QueuePool
]

Context manager that creates a pool and closes it on exit.

The pool is created when the with block is entered and disposed (via close_pool) when the block exits, whether normally or by exception.

Three call patterns are supported:

```python
with managed_pool(DuckDBConfig(...)) as pool: ...          # config
with managed_pool(driver_path="...", ...) as pool: ...     # native
with managed_pool(dbapi_module="...", ...) as pool: ...    # dbapi
```
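The cleanup guarantee above (the pool is closed whether the block exits normally or by exception) follows from the standard `contextlib.contextmanager` try/finally pattern. A stdlib-only sketch of that guarantee, with a stand-in resource rather than a real pool:

```python
import contextlib

closed = []

@contextlib.contextmanager
def managed_resource():
    resource = object()
    try:
        yield resource
    finally:
        # Runs on normal exit AND when the with-body raises.
        closed.append(resource)

try:
    with managed_resource():
        raise RuntimeError("boom")
except RuntimeError:
    pass

assert len(closed) == 1  # cleanup ran despite the exception
```

The same mechanism in `managed_pool` ensures `close_pool` is always reached, so callers never leak driver connections on error paths.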

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `WarehouseConfig \| None` | A warehouse config model instance (e.g. `DuckDBConfig`). Mutually exclusive with `driver_path` and `dbapi_module`. | `None` |
| `driver_path` | `str \| None` | Path to a native ADBC driver shared library, or a short driver name for manifest-based resolution. Requires `db_kwargs`. Mutually exclusive with `config` and `dbapi_module`. | `None` |
| `db_kwargs` | `dict[str, str] \| None` | ADBC connection keyword arguments as `dict[str, str]`. Required when using `driver_path` or `dbapi_module`. | `None` |
| `entrypoint` | `str \| None` | ADBC entry-point symbol. Only used with `driver_path` (e.g. `"duckdb_adbc_init"` for DuckDB). | `None` |
| `dbapi_module` | `str \| None` | Dotted module name for a Python package implementing the ADBC dbapi interface (e.g. `"adbc_driver_snowflake.dbapi"` or a custom `"my_driver.dbapi"`). Requires `db_kwargs`. Mutually exclusive with `config` and `driver_path`. | `None` |
| `pool_size` | `int` | Number of connections to keep in the pool. | `5` |
| `max_overflow` | `int` | Extra connections allowed above `pool_size`. | `3` |
| `timeout` | `int` | Seconds to wait for a connection before raising. | `30` |
| `recycle` | `int` | Seconds before a connection is recycled. | `3600` |
| `pre_ping` | `bool` | Whether to ping connections before checkout. | `False` |
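The interaction of `pool_size`, `max_overflow`, and `timeout` can be sketched with the stdlib: total concurrent checkouts are capped at `pool_size + max_overflow`, and a checkout that cannot get a slot within `timeout` seconds raises. `TinyPool` is illustrative only — adbc_poolhouse delegates the real behavior to `sqlalchemy.pool.QueuePool`:

```python
import threading

class TinyPool:
    """Sketch of pool-exhaustion semantics, not adbc_poolhouse internals."""
    def __init__(self, pool_size=5, max_overflow=3, timeout=30):
        # pool_size + max_overflow caps total concurrent checkouts.
        self._sem = threading.BoundedSemaphore(pool_size + max_overflow)
        self._timeout = timeout

    def checkout(self):
        # Wait up to `timeout` seconds for a free slot, then give up.
        if not self._sem.acquire(timeout=self._timeout):
            raise TimeoutError("pool exhausted")

    def checkin(self):
        self._sem.release()

pool = TinyPool(pool_size=1, max_overflow=1, timeout=0.1)
pool.checkout()      # pooled connection
pool.checkout()      # overflow connection
try:
    pool.checkout()  # exceeds pool_size + max_overflow
except TimeoutError:
    pass             # raised after ~0.1 s of waiting
```

Checking a connection back in (`checkin`) frees a slot, so a blocked `checkout` can succeed if another caller returns a connection before `timeout` elapses.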

Yields:

| Type | Description |
| --- | --- |
| `QueuePool` | A configured `sqlalchemy.pool.QueuePool`. The pool is automatically closed when the `with` block exits. |

Raises:

| Type | Description |
| --- | --- |
| `TypeError` | If none of `config`, `driver_path`, or `dbapi_module` is provided, or if both `driver_path` and `dbapi_module` are provided. |
| `ImportError` | If the required ADBC driver is not installed. |

Example

Config path:

```python
from adbc_poolhouse import DuckDBConfig, managed_pool

with managed_pool(DuckDBConfig(database="/tmp/wh.db")) as pool:
    with pool.connect() as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT 1")
```

Raw native driver path:

```python
with managed_pool(
    driver_path="/path/to/libduckdb.dylib",
    db_kwargs={"path": "/tmp/my.db"},
    entrypoint="duckdb_adbc_init",
) as pool:
    with pool.connect() as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT 42")
```
Source code in src/adbc_poolhouse/_pool_factory.py
@contextlib.contextmanager
def managed_pool(
    config: WarehouseConfig | None = None,
    *,
    driver_path: str | None = None,
    db_kwargs: dict[str, str] | None = None,
    entrypoint: str | None = None,
    dbapi_module: str | None = None,
    pool_size: int = 5,
    max_overflow: int = 3,
    timeout: int = 30,
    recycle: int = 3600,
    pre_ping: bool = False,
) -> collections.abc.Iterator[sqlalchemy.pool.QueuePool]:
    """
    Context manager that creates a pool and closes it on exit.

    The pool is created when the ``with`` block is entered and disposed
    (via `close_pool`) when the block exits, whether normally or by
    exception.

    Three call patterns are supported:

        with managed_pool(DuckDBConfig(...)) as pool: ...          # config
        with managed_pool(driver_path="...", ...) as pool: ...     # native
        with managed_pool(dbapi_module="...", ...) as pool: ...    # dbapi

    Args:
        config: A warehouse config model instance (e.g. ``DuckDBConfig``).
            Mutually exclusive with ``driver_path`` and ``dbapi_module``.
        driver_path: Path to a native ADBC driver shared library, or a
            short driver name for manifest-based resolution. Requires
            ``db_kwargs``. Mutually exclusive with ``config`` and
            ``dbapi_module``.
        db_kwargs: ADBC connection keyword arguments as ``dict[str, str]``.
            Required when using ``driver_path`` or ``dbapi_module``.
        entrypoint: ADBC entry-point symbol. Only used with ``driver_path``
            (e.g. ``"duckdb_adbc_init"`` for DuckDB). Default: ``None``.
        dbapi_module: Dotted module name for a Python package implementing
            the ADBC dbapi interface (e.g. ``"adbc_driver_snowflake.dbapi"``
            or a custom ``"my_driver.dbapi"``). Requires ``db_kwargs``.
            Mutually exclusive with ``config`` and ``driver_path``.
        pool_size: Number of connections to keep in the pool. Default: 5.
        max_overflow: Extra connections allowed above pool_size. Default: 3.
        timeout: Seconds to wait for a connection before raising. Default: 30.
        recycle: Seconds before a connection is recycled. Default: 3600.
        pre_ping: Whether to ping connections before checkout. Default: False.

    Yields:
        A configured ``sqlalchemy.pool.QueuePool``. The pool is automatically
        closed when the ``with`` block exits.

    Raises:
        TypeError: If none of ``config``, ``driver_path``, or ``dbapi_module``
            is provided, or if both ``driver_path`` and ``dbapi_module`` are
            provided.
        ImportError: If the required ADBC driver is not installed.

    Example:
        Config path:

        ```python
        from adbc_poolhouse import DuckDBConfig, managed_pool

        with managed_pool(DuckDBConfig(database="/tmp/wh.db")) as pool:
            with pool.connect() as conn:
                cursor = conn.cursor()
                cursor.execute("SELECT 1")
        ```

        Raw native driver path:

        ```python
        with managed_pool(
            driver_path="/path/to/libduckdb.dylib",
            db_kwargs={"path": "/tmp/my.db"},
            entrypoint="duckdb_adbc_init",
        ) as pool:
            with pool.connect() as conn:
                cursor = conn.cursor()
                cursor.execute("SELECT 42")
        ```
    """
    pool = _create_pool_impl(
        config,
        driver_path,
        db_kwargs,
        entrypoint,
        dbapi_module,
        pool_size,
        max_overflow,
        timeout,
        recycle,
        pre_ping,
    )
    try:
        yield pool
    finally:
        close_pool(pool)