adbc_poolhouse¶
Connection pooling for ADBC drivers from typed warehouse configs.
BaseWarehouseConfig
¶
Bases: BaseSettings, ABC
Base class for all warehouse config models.
Provides pool tuning fields with library defaults. Not intended to be instantiated directly — use a concrete subclass (e.g. DuckDBConfig).
Pool tuning fields are inherited by all concrete configs, and each concrete config's env_prefix applies to these fields automatically. For example, DUCKDB_POOL_SIZE populates DuckDBConfig.pool_size.
pool_size
class-attribute
instance-attribute
¶
Number of connections to keep open in the pool. Default: 5.
max_overflow
class-attribute
instance-attribute
¶
Connections allowed above pool_size when pool is exhausted. Default: 3.
timeout
class-attribute
instance-attribute
¶
Seconds to wait for a connection before raising TimeoutError. Default: 30.
recycle
class-attribute
instance-attribute
¶
Seconds before a connection is closed and replaced. Default: 3600.
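Taken together, these fields bound the pool's behavior. A quick arithmetic sketch using the library defaults above:

```python
# Pool capacity arithmetic with the library defaults documented above.
pool_size = 5      # steady-state connections kept open
max_overflow = 3   # extra connections allowed when the pool is exhausted
timeout = 30       # seconds a checkout waits before TimeoutError

# Hard cap on simultaneously checked-out connections:
max_connections = pool_size + max_overflow
assert max_connections == 8
# A ninth concurrent checkout waits up to `timeout` seconds, then raises.
```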
to_adbc_kwargs
abstractmethod
¶
Convert config to ADBC driver connection kwargs.
Subclasses must override this method to provide backend-specific serialization.
WarehouseConfig
¶
Bases: Protocol
Structural type for warehouse config objects.
Any class with these attributes and methods can be passed to
create_pool or managed_pool. The built-in config
classes all satisfy this protocol through
BaseWarehouseConfig.
Third-party authors: inherit from BaseWarehouseConfig
for pool-tuning defaults and _resolve_driver_path, or
implement the full protocol from scratch.
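For orientation, a from-scratch implementation might look like the sketch below. It covers only the members documented on this page (the pool tuning attributes and `to_adbc_kwargs`); the actual `Protocol` may require additional members, so treat this as illustrative:

```python
# Hypothetical config class satisfying the documented surface of the
# WarehouseConfig protocol: pool tuning attributes plus to_adbc_kwargs().
# The real Protocol may require more members; check its definition.
class MyWarehouseConfig:
    pool_size = 5
    max_overflow = 3
    timeout = 30
    recycle = 3600

    def __init__(self, uri: str) -> None:
        self.uri = uri

    def to_adbc_kwargs(self) -> dict[str, str]:
        # Backend-specific serialization to ADBC connection kwargs.
        return {"uri": self.uri}


cfg = MyWarehouseConfig("grpc://localhost:31337")
assert cfg.to_adbc_kwargs() == {"uri": "grpc://localhost:31337"}
```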
BigQueryConfig
¶
Bases: BaseWarehouseConfig
BigQuery warehouse configuration.
Supports SDK default auth (ADC), JSON credential file, JSON credential string, and user authentication flows.
Pool tuning fields are inherited and loaded from BIGQUERY_* env vars.
auth_type
class-attribute
instance-attribute
¶
Auth method: 'bigquery' (SDK default/ADC), 'json_credential_file', 'json_credential_string', 'user_authentication'. Env: BIGQUERY_AUTH_TYPE.
auth_credentials
class-attribute
instance-attribute
¶
JSON credentials file path or encoded credential string, depending on auth_type. Env: BIGQUERY_AUTH_CREDENTIALS.
auth_client_id
class-attribute
instance-attribute
¶
OAuth client ID for user_authentication flow. Env: BIGQUERY_AUTH_CLIENT_ID.
auth_client_secret
class-attribute
instance-attribute
¶
OAuth client secret for user_authentication flow. Env: BIGQUERY_AUTH_CLIENT_SECRET.
auth_refresh_token
class-attribute
instance-attribute
¶
OAuth refresh token for user_authentication flow. Env: BIGQUERY_AUTH_REFRESH_TOKEN.
project_id
class-attribute
instance-attribute
¶
GCP project ID. Env: BIGQUERY_PROJECT_ID.
dataset_id
class-attribute
instance-attribute
¶
Default dataset. Env: BIGQUERY_DATASET_ID.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
All keys use the adbc.bigquery.sql.* prefix verified from the
adbc_driver_bigquery.DatabaseOptions enum. Only non-None fields
are included.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver kwargs. Empty when no fields are set. |
Source code in src/adbc_poolhouse/_bigquery_config.py
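The non-None filtering can be sketched in plain Python. The key strings below follow the `adbc.bigquery.sql.*` scheme described above but are illustrative here; the canonical names come from the `adbc_driver_bigquery.DatabaseOptions` enum:

```python
# Sketch of the "only non-None fields" rule. Key names are illustrative
# of the adbc.bigquery.sql.* scheme; canonical values live in
# adbc_driver_bigquery.DatabaseOptions.
fields = {
    "adbc.bigquery.sql.project_id": "my-project",
    "adbc.bigquery.sql.dataset_id": None,       # unset -> omitted
    "adbc.bigquery.sql.auth_type": "bigquery",  # SDK default / ADC
}
kwargs = {k: v for k, v in fields.items() if v is not None}
assert kwargs == {
    "adbc.bigquery.sql.project_id": "my-project",
    "adbc.bigquery.sql.auth_type": "bigquery",
}
```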
ClickHouseConfig
¶
Bases: BaseWarehouseConfig
ClickHouse warehouse configuration.
Uses the Columnar ADBC ClickHouse driver (Foundry-distributed, not on PyPI). Install via the ADBC Driver Foundry:
`dbc install --pre clickhouse`
The --pre flag is required — only alpha releases are available
(v0.1.0-alpha.1).
Supports two connection modes:

- URI mode: set `uri` with the full ClickHouse connection string.
- Decomposed mode: set `host` and `username` together. `password`, `database`, and `port` are optional; `port` defaults to 8123 (HTTP interface).
At least one mode must be fully specified — construction raises
ConfigurationError if neither is provided.
Note: The field name is username, not user. The Columnar
ClickHouse driver uses username as the kwarg key. Passing user
causes a silent auth failure.
Pool tuning fields are inherited and loaded from CLICKHOUSE_* env vars.
Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.
uri
class-attribute
instance-attribute
¶
Full ClickHouse connection URI. May contain credentials — stored as SecretStr. Env: CLICKHOUSE_URI.
host
class-attribute
instance-attribute
¶
ClickHouse hostname. Alternative to embedding host in URI. Env: CLICKHOUSE_HOST.
port
class-attribute
instance-attribute
¶
ClickHouse HTTP interface port. Default: 8123. Env: CLICKHOUSE_PORT.
username
class-attribute
instance-attribute
¶
ClickHouse username. Maps to the username driver kwarg (not user).
Env: CLICKHOUSE_USERNAME.
password
class-attribute
instance-attribute
¶
ClickHouse password. Optional. Env: CLICKHOUSE_PASSWORD.
database
class-attribute
instance-attribute
¶
ClickHouse database name. Optional. Env: CLICKHOUSE_DATABASE.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Supports two modes:

- URI mode (`uri` set): returns `{uri: ...}` with the secret value extracted.
- Decomposed mode (`host` + `username` set): returns individual kwargs with `port` as a string. `password` and `database` are omitted when `None`.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_clickhouse_config.py
check_connection_spec
¶
Raise ConfigurationError if neither uri nor minimum decomposed fields are set.
Source code in src/adbc_poolhouse/_clickhouse_config.py
DatabricksConfig
¶
Bases: BaseWarehouseConfig
Databricks warehouse configuration.
Uses the Columnar ADBC Databricks driver (Foundry-distributed, not on PyPI). Install via the ADBC Driver Foundry.
Supports PAT (personal access token) and OAuth (U2M and M2M) auth. Supports two connection modes:

- URI mode: set `uri` with the full DSN string.
- Decomposed mode: set `host`, `http_path`, and `token` together.
At least one mode must be fully specified — construction raises
ConfigurationError if neither is provided.
Pool tuning fields are inherited and loaded from DATABRICKS_* env vars.
Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.
uri
class-attribute
instance-attribute
¶
Full connection URI in the form databricks://token:{pat}@{host}:443{http_path}. Env: DATABRICKS_URI.
host
class-attribute
instance-attribute
¶
Databricks workspace hostname (e.g. 'adb-xxx.azuredatabricks.net'). Alternative to embedding host in URI. Env: DATABRICKS_HOST.
http_path
class-attribute
instance-attribute
¶
SQL warehouse HTTP path (e.g. '/sql/1.0/warehouses/abc123'). Env: DATABRICKS_HTTP_PATH.
token
class-attribute
instance-attribute
¶
Personal access token for PAT auth. Env: DATABRICKS_TOKEN.
auth_type
class-attribute
instance-attribute
¶
OAuth auth type: 'OAuthU2M' (browser-based) or 'OAuthM2M' (service principal). Omit for PAT auth. Env: DATABRICKS_AUTH_TYPE.
client_id
class-attribute
instance-attribute
¶
OAuth M2M service principal client ID. Env: DATABRICKS_CLIENT_ID.
client_secret
class-attribute
instance-attribute
¶
OAuth M2M service principal client secret. Env: DATABRICKS_CLIENT_SECRET.
catalog
class-attribute
instance-attribute
¶
Default Unity Catalog. Env: DATABRICKS_CATALOG.
schema_
class-attribute
instance-attribute
¶
Default schema. Python attribute is schema_ to avoid Pydantic conflicts. Env: DATABRICKS_SCHEMA.
check_connection_spec
¶
Raise ConfigurationError if neither uri nor all minimum decomposed fields are set.
Source code in src/adbc_poolhouse/_databricks_config.py
to_adbc_kwargs
¶
Convert Databricks config fields to ADBC driver kwargs.
Supports two modes:

- URI mode (`uri` set): extracts the `SecretStr` value and returns `{"uri": ...}`.
- Decomposed mode: builds `databricks://token:{encoded}@{host}:443{http_path}` from `host`, `http_path`, and `token`. The token is URL-encoded via `urllib.parse.quote` with `safe=""`.
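A plain-Python sketch of the decomposed-mode DSN build (hostname, path, and token values are hypothetical):

```python
from urllib.parse import quote

# Sketch of the decomposed-mode DSN construction described above:
# databricks://token:{encoded}@{host}:443{http_path}, with the token
# URL-encoded via urllib.parse.quote(..., safe="").
host = "adb-123.azuredatabricks.net"          # hypothetical workspace
http_path = "/sql/1.0/warehouses/abc123"      # hypothetical warehouse path
token = "dapi/with+special=chars"             # hypothetical PAT

dsn = f"databricks://token:{quote(token, safe='')}@{host}:443{http_path}"
assert dsn == (
    "databricks://token:dapi%2Fwith%2Bspecial%3Dchars"
    "@adb-123.azuredatabricks.net:443/sql/1.0/warehouses/abc123"
)
```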
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_databricks_config.py
DuckDBConfig
¶
Bases: BaseWarehouseConfig
DuckDB warehouse configuration.
Covers all DuckDB ADBC connection parameters. Pool tuning fields (pool_size, max_overflow, timeout, recycle) are inherited from BaseWarehouseConfig and loaded from DUCKDB_* environment variables.
Example
database
class-attribute
instance-attribute
¶
File path or ':memory:'. Env: DUCKDB_DATABASE.
pool_size
class-attribute
instance-attribute
¶
Number of connections in the pool. Default 1 for in-memory DuckDB.
In-memory DuckDB databases are isolated per connection — each pool connection gets a different empty DB. Use pool_size=1 for ':memory:', or set database to a file path if you need pool_size > 1. Setting pool_size > 1 with database=':memory:' raises ValidationError. Env: DUCKDB_POOL_SIZE.
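The guard can be pictured as a plain function. The real check is a pydantic validator on the config model; this sketch only mirrors its logic:

```python
# Pure-Python sketch of the in-memory pool_size guard described above.
def check_pool_size(database: str, pool_size: int) -> None:
    # Each pooled connection to an in-memory DuckDB gets its own
    # isolated empty database, so a pool of them is almost certainly
    # a mistake.
    if database == ":memory:" and pool_size > 1:
        raise ValueError(
            "in-memory DuckDB is isolated per connection; "
            "use pool_size=1 or a file path"
        )

check_pool_size(":memory:", 1)        # ok
check_pool_size("/tmp/wh.duckdb", 8)  # ok: file-backed DB is shared
try:
    check_pool_size(":memory:", 4)
    raised = False
except ValueError:
    raised = True
assert raised
```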
read_only
class-attribute
instance-attribute
¶
Open the database in read-only mode. Env: DUCKDB_READ_ONLY.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver connection kwargs. |
Source code in src/adbc_poolhouse/_duckdb_config.py
ConfigurationError
¶
Bases: PoolhouseError, ValueError
Raised when a config model contains invalid field values.
Inherits from both PoolhouseError (library hierarchy) and ValueError (pydantic model_validator compatibility). When raised inside a pydantic @model_validator, pydantic wraps it in ValidationError, which itself inherits from ValueError, satisfying 'raises ValueError' test expectations.
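The dual-inheritance pattern can be reproduced in isolation:

```python
# Sketch of the dual-inheritance pattern described above.
class PoolhouseError(Exception):
    """Library-wide base exception."""

class ConfigurationError(PoolhouseError, ValueError):
    """Catchable as either a library error or a ValueError."""

err = ConfigurationError("bad field")
assert isinstance(err, PoolhouseError)  # library hierarchy
assert isinstance(err, ValueError)      # pydantic compatibility

# Consumers can catch the whole library hierarchy in one clause:
try:
    raise ConfigurationError("bad field")
except PoolhouseError:
    pass
```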
PoolhouseError
¶
Bases: Exception
Base exception for all adbc-poolhouse errors.
All library-specific exceptions inherit from this class.
Consumers can use except PoolhouseError to catch any library error.
FlightSQLConfig
¶
Bases: BaseWarehouseConfig
FlightSQL warehouse configuration.
Connects to any Apache Arrow Flight SQL server (e.g. Dremio, InfluxDB, DuckDB server mode, custom Flight SQL implementations).
Pool tuning fields are inherited and loaded from FLIGHTSQL_* env vars.
uri
class-attribute
instance-attribute
¶
gRPC endpoint URI. Env: FLIGHTSQL_URI. Format: grpc://host:port (plaintext) or grpc+tls://host:port (TLS).
username
class-attribute
instance-attribute
¶
Username for HTTP-style basic auth. Env: FLIGHTSQL_USERNAME.
password
class-attribute
instance-attribute
¶
Password for HTTP-style basic auth. Env: FLIGHTSQL_PASSWORD.
authorization_header
class-attribute
instance-attribute
¶
Custom authorization header value (overrides username/password if set). Env: FLIGHTSQL_AUTHORIZATION_HEADER.
mtls_cert_chain
class-attribute
instance-attribute
¶
mTLS certificate chain (PEM). Env: FLIGHTSQL_MTLS_CERT_CHAIN.
mtls_private_key
class-attribute
instance-attribute
¶
mTLS private key (PEM). Env: FLIGHTSQL_MTLS_PRIVATE_KEY.
tls_root_certs
class-attribute
instance-attribute
¶
Root CA certificate(s) in PEM format. Env: FLIGHTSQL_TLS_ROOT_CERTS.
tls_skip_verify
class-attribute
instance-attribute
¶
Disable TLS certificate verification. Env: FLIGHTSQL_TLS_SKIP_VERIFY.
tls_override_hostname
class-attribute
instance-attribute
¶
Override the TLS hostname for SNI. Env: FLIGHTSQL_TLS_OVERRIDE_HOSTNAME.
connect_timeout
class-attribute
instance-attribute
¶
Connection timeout in seconds. Env: FLIGHTSQL_CONNECT_TIMEOUT.
query_timeout
class-attribute
instance-attribute
¶
Query execution timeout in seconds. Env: FLIGHTSQL_QUERY_TIMEOUT.
fetch_timeout
class-attribute
instance-attribute
¶
Result fetch timeout in seconds. Env: FLIGHTSQL_FETCH_TIMEOUT.
update_timeout
class-attribute
instance-attribute
¶
DML update timeout in seconds. Env: FLIGHTSQL_UPDATE_TIMEOUT.
authority
class-attribute
instance-attribute
¶
Override gRPC authority header. Env: FLIGHTSQL_AUTHORITY.
max_msg_size
class-attribute
instance-attribute
¶
Maximum gRPC message size in bytes (driver default: 16 MiB). Env: FLIGHTSQL_MAX_MSG_SIZE.
with_cookie_middleware
class-attribute
instance-attribute
¶
Enable gRPC cookie middleware (required by some servers for session management). Env: FLIGHTSQL_WITH_COOKIE_MIDDLEWARE.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Maps FlightSQL config fields to their adbc.flight.sql.* key
equivalents. Boolean defaults (tls_skip_verify,
with_cookie_middleware) are always included as 'true'/
'false' strings. Optional fields are omitted when None.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_flightsql_config.py
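A sketch of the serialization rules above (the key names follow the `adbc.flight.sql.*` scheme but are illustrative here):

```python
# Booleans always included as 'true'/'false' strings; None fields
# omitted. Key names are illustrative of the adbc.flight.sql.* scheme.
tls_skip_verify = False
with_cookie_middleware = True
authorization_header = None  # unset, so omitted

kwargs = {
    "adbc.flight.sql.client_option.tls_skip_verify": (
        "true" if tls_skip_verify else "false"
    ),
    "adbc.flight.sql.rpc.with_cookie_middleware": (
        "true" if with_cookie_middleware else "false"
    ),
}
if authorization_header is not None:
    kwargs["adbc.flight.sql.authorization_header"] = authorization_header

assert kwargs == {
    "adbc.flight.sql.client_option.tls_skip_verify": "false",
    "adbc.flight.sql.rpc.with_cookie_middleware": "true",
}
```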
MSSQLConfig
¶
Bases: BaseWarehouseConfig
Microsoft SQL Server / Azure SQL / Azure Fabric / Synapse Analytics configuration.
Uses the Columnar ADBC MSSQL driver (Foundry-distributed, not on PyPI). One class covers all Microsoft SQL variants via optional variant-specific fields:

- SQL Server: use `host` + `port` + `instance` (or URI)
- Azure SQL: use `host` + `port`, optionally `fedauth` for Entra ID / Azure AD auth
- Azure Fabric / Synapse Analytics: use `fedauth` for managed identity or service principal authentication
Pool tuning fields are inherited and loaded from MSSQL_* env vars.
Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.
uri
class-attribute
instance-attribute
¶
Connection URI. Format: mssql://user:pass@host[:port][/instance][?params]. Also accepts the sqlserver:// scheme. Env: MSSQL_URI.
host
class-attribute
instance-attribute
¶
Hostname or IP address. Alternative to URI-based connection. Env: MSSQL_HOST.
port
class-attribute
instance-attribute
¶
Port number. Default: 1433. Env: MSSQL_PORT.
instance
class-attribute
instance-attribute
¶
SQL Server named instance (e.g. 'SQLExpress'). Env: MSSQL_INSTANCE.
user
class-attribute
instance-attribute
¶
SQL auth username. Env: MSSQL_USER.
password
class-attribute
instance-attribute
¶
SQL auth password. Env: MSSQL_PASSWORD.
database
class-attribute
instance-attribute
¶
Target database name. Env: MSSQL_DATABASE.
trust_server_certificate
class-attribute
instance-attribute
¶
Accept self-signed TLS certificates. Enable for local development. Env: MSSQL_TRUST_SERVER_CERTIFICATE.
connection_timeout
class-attribute
instance-attribute
¶
Connection timeout in seconds. Env: MSSQL_CONNECTION_TIMEOUT.
fedauth
class-attribute
instance-attribute
¶
Federated authentication method for Entra ID / Azure AD. Used for Azure SQL, Azure Fabric, and Synapse Analytics. Values: 'ActiveDirectoryPassword', 'ActiveDirectoryMsi', 'ActiveDirectoryServicePrincipal', 'ActiveDirectoryInteractive'. Env: MSSQL_FEDAUTH.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Supports two modes:

- URI mode (`uri` set): returns `{uri: ...}`.
- Decomposed mode: maps individual fields to their ADBC key equivalents. `trust_server_certificate` is always included as a `'true'`/`'false'` string.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_mssql_config.py
MySQLConfig
¶
Bases: BaseWarehouseConfig
MySQL warehouse configuration.
Uses the Columnar ADBC MySQL driver (Foundry-distributed, not on PyPI). Install via the ADBC Driver Foundry (see DEVELOP.md for setup instructions).
Supports two connection modes:

- URI mode: set `uri` with the full MySQL connection string.
- Decomposed mode: set `host`, `user`, and `database` together. `password` is optional (MySQL supports passwordless connections); `port` defaults to 3306.
At least one mode must be fully specified — construction raises
ConfigurationError if neither is provided.
Pool tuning fields are inherited and loaded from MYSQL_* env vars.
Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.
uri
class-attribute
instance-attribute
¶
Full MySQL connection URI. May contain credentials — stored as SecretStr. Env: MYSQL_URI.
host
class-attribute
instance-attribute
¶
MySQL hostname. Alternative to embedding host in URI. Env: MYSQL_HOST.
port
class-attribute
instance-attribute
¶
MySQL port. Default: 3306. Env: MYSQL_PORT.
password
class-attribute
instance-attribute
¶
MySQL password. Optional — MySQL supports passwordless connections. Env: MYSQL_PASSWORD.
database
class-attribute
instance-attribute
¶
MySQL database name. Env: MYSQL_DATABASE.
check_connection_spec
¶
Raise ConfigurationError if neither uri nor all minimum decomposed fields are set.
Source code in src/adbc_poolhouse/_mysql_config.py
to_adbc_kwargs
¶
Convert MySQL config fields to ADBC driver kwargs.
Supports two modes:

- URI mode (`uri` set): extracts the `SecretStr` value and returns `{"uri": ...}`.
- Decomposed mode: builds a Go DSN from `user`, `password`, `host`, `port`, and `database`. The password is URL-encoded via `urllib.parse.quote` with `safe=""`.
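A plain-Python sketch of the decomposed-mode build. The docstring only promises "a Go DSN"; the `user:pass@tcp(host:port)/db` shape below is the conventional go-sql-driver/mysql format and is an assumption here:

```python
from urllib.parse import quote

# Sketch of the decomposed-mode Go DSN build described above. The
# user:pass@tcp(host:port)/db shape is the conventional
# go-sql-driver/mysql format and is an assumption, not confirmed
# by this page. Field values are hypothetical.
user, password = "app", "p@ss/word"
host, port, database = "db.internal", 3306, "analytics"

dsn = f"{user}:{quote(password, safe='')}@tcp({host}:{port})/{database}"
assert dsn == "app:p%40ss%2Fword@tcp(db.internal:3306)/analytics"
```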
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_mysql_config.py
PostgreSQLConfig
¶
Bases: BaseWarehouseConfig
PostgreSQL warehouse configuration.
The PostgreSQL ADBC driver wraps libpq. Specify the connection either as
a full URI or via individual fields. If neither is provided, libpq falls
back to its own environment variables (PGHOST, PGUSER, etc.).
Pool tuning fields are inherited and loaded from POSTGRESQL_* env vars.
Example
uri
class-attribute
instance-attribute
¶
libpq connection URI. Takes precedence over individual fields.
Format: postgresql://[user[:password]@][host][:port][/dbname][?params]
Env: POSTGRESQL_URI.
host
class-attribute
instance-attribute
¶
Database hostname or IP address. Env: POSTGRESQL_HOST.
port
class-attribute
instance-attribute
¶
Database port. Defaults to 5432 when omitted. Env: POSTGRESQL_PORT.
user
class-attribute
instance-attribute
¶
Database username. Env: POSTGRESQL_USER.
password
class-attribute
instance-attribute
¶
Database password. Env: POSTGRESQL_PASSWORD.
database
class-attribute
instance-attribute
¶
Database name. Env: POSTGRESQL_DATABASE.
sslmode
class-attribute
instance-attribute
¶
SSL mode. Accepted values: disable, allow, prefer,
require, verify-ca, verify-full. Env: POSTGRESQL_SSLMODE.
use_copy
class-attribute
instance-attribute
¶
Use PostgreSQL COPY protocol for bulk query execution (driver default: True). Disable if COPY triggers permission errors. Env: POSTGRESQL_USE_COPY.
to_adbc_kwargs
¶
Convert PostgreSQL config fields to ADBC driver kwargs.
Supports three modes:

- URI mode (`uri` set): passed directly as `{"uri": ...}`.
- Decomposed mode: builds a libpq URI from `host`, `port`, `user`, `password`, `database`, and `sslmode`. The password is URL-encoded via `urllib.parse.quote` with `safe=""`.
- Empty mode: returns `{}` so libpq resolves the connection from its own env vars.
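Decomposed mode can be sketched with the stdlib alone (field values are hypothetical):

```python
from urllib.parse import quote

# Sketch of the decomposed-mode libpq URI assembly described above,
# with the password URL-encoded via urllib.parse.quote(..., safe="").
user, password = "app", "s3cret:!"
host, port, database = "pg.internal", 5432, "warehouse"
sslmode = "require"

uri = (
    f"postgresql://{user}:{quote(password, safe='')}"
    f"@{host}:{port}/{database}?sslmode={sslmode}"
)
assert uri == "postgresql://app:s3cret%3A%21@pg.internal:5432/warehouse?sslmode=require"
```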
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_postgresql_config.py
RedshiftConfig
¶
Bases: BaseWarehouseConfig
Redshift warehouse configuration.
Uses the Columnar ADBC Redshift driver (Foundry-distributed, not on PyPI). Supports provisioned clusters (standard and IAM auth) and Redshift Serverless.
Pool tuning fields are inherited and loaded from REDSHIFT_* env vars.
Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.
uri
class-attribute
instance-attribute
¶
Connection URI: redshift://[user:password@]host[:port]/dbname[?params]. Use redshift:///dbname for automatic endpoint discovery. Env: REDSHIFT_URI.
host
class-attribute
instance-attribute
¶
Redshift cluster hostname. Alternative to URI. Env: REDSHIFT_HOST.
port
class-attribute
instance-attribute
¶
Port number. Default: 5439. Env: REDSHIFT_PORT.
user
class-attribute
instance-attribute
¶
Database username. Env: REDSHIFT_USER.
password
class-attribute
instance-attribute
¶
Database password. Env: REDSHIFT_PASSWORD.
database
class-attribute
instance-attribute
¶
Target database name. Env: REDSHIFT_DATABASE.
cluster_type
class-attribute
instance-attribute
¶
Cluster variant: 'redshift' (standard), 'redshift-iam', or 'redshift-serverless'. Env: REDSHIFT_CLUSTER_TYPE.
cluster_identifier
class-attribute
instance-attribute
¶
Provisioned cluster identifier (required for IAM auth). Env: REDSHIFT_CLUSTER_IDENTIFIER.
workgroup_name
class-attribute
instance-attribute
¶
Serverless workgroup name. Env: REDSHIFT_WORKGROUP_NAME.
aws_region
class-attribute
instance-attribute
¶
AWS region (e.g. 'us-west-2'). Env: REDSHIFT_AWS_REGION.
aws_access_key_id
class-attribute
instance-attribute
¶
AWS IAM access key ID. Env: REDSHIFT_AWS_ACCESS_KEY_ID.
aws_secret_access_key
class-attribute
instance-attribute
¶
AWS IAM secret access key. Env: REDSHIFT_AWS_SECRET_ACCESS_KEY.
sslmode
class-attribute
instance-attribute
¶
SSL mode (e.g. 'require', 'verify-full'). Env: REDSHIFT_SSLMODE.
to_adbc_kwargs
¶
Convert Redshift config fields to ADBC driver kwargs.
Supports two connection modes:

- URI mode (`uri` set): passed directly as `{"uri": ...}`.
- Decomposed mode: builds a `redshift://` URI from `host`, `port`, `user`, `password`, `database`, and `sslmode`. The password is URL-encoded via `urllib.parse.quote` with `safe=""`.
IAM and cluster fields (cluster_type, cluster_identifier,
workgroup_name, aws_region, aws_access_key_id,
aws_secret_access_key) are always translated as separate driver
kwargs when set, regardless of connection mode.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_redshift_config.py
SnowflakeConfig
¶
Bases: BaseWarehouseConfig
Snowflake warehouse configuration.
Supports all authentication methods provided by adbc-driver-snowflake: password, JWT (private_key_path / private_key_pem), external browser, OAuth, MFA, Okta, PAT, and workload identity federation (WIF).
Pool tuning fields (pool_size, max_overflow, timeout, recycle) are inherited and loaded from SNOWFLAKE_* environment variables.
Example
account
instance-attribute
¶
Snowflake account identifier (e.g. 'myorg-myaccount'). Env: SNOWFLAKE_ACCOUNT.
user
class-attribute
instance-attribute
¶
Username. Required for most auth methods. Env: SNOWFLAKE_USER.
password
class-attribute
instance-attribute
¶
Password for basic auth. Env: SNOWFLAKE_PASSWORD.
auth_type
class-attribute
instance-attribute
¶
Auth method: auth_jwt, auth_ext_browser, auth_oauth, auth_mfa, auth_okta, auth_pat, auth_wif. Env: SNOWFLAKE_AUTH_TYPE.
private_key_path
class-attribute
instance-attribute
¶
File path to a PKCS1 or PKCS8 private key file. Mutually exclusive with private_key_pem. Env: SNOWFLAKE_PRIVATE_KEY_PATH.
private_key_pem
class-attribute
instance-attribute
¶
Inline PEM-encoded PKCS8 private key (encrypted or unencrypted). Mutually exclusive with private_key_path. Env: SNOWFLAKE_PRIVATE_KEY_PEM.
private_key_passphrase
class-attribute
instance-attribute
¶
Passphrase to decrypt an encrypted PKCS8 key. Env: SNOWFLAKE_PRIVATE_KEY_PASSPHRASE.
jwt_expire_timeout
class-attribute
instance-attribute
¶
JWT expiry duration (e.g. '300ms', '1m30s'). Env: SNOWFLAKE_JWT_EXPIRE_TIMEOUT.
oauth_token
class-attribute
instance-attribute
¶
Bearer token for auth_oauth. Env: SNOWFLAKE_OAUTH_TOKEN.
okta_url
class-attribute
instance-attribute
¶
Okta server URL required for auth_okta. Env: SNOWFLAKE_OKTA_URL.
identity_provider
class-attribute
instance-attribute
¶
Identity provider for auth_wif. Env: SNOWFLAKE_IDENTITY_PROVIDER.
database
class-attribute
instance-attribute
¶
Default database. Env: SNOWFLAKE_DATABASE.
schema_
class-attribute
instance-attribute
¶
Default schema. Python attribute is schema_ to avoid Pydantic conflicts. Env: SNOWFLAKE_SCHEMA.
warehouse
class-attribute
instance-attribute
¶
Snowflake virtual warehouse. Env: SNOWFLAKE_WAREHOUSE.
role
class-attribute
instance-attribute
¶
Snowflake role. Env: SNOWFLAKE_ROLE.
region
class-attribute
instance-attribute
¶
Snowflake region (if not embedded in account). Env: SNOWFLAKE_REGION.
host
class-attribute
instance-attribute
¶
Explicit hostname (alternative to account-derived URI). Env: SNOWFLAKE_HOST.
port
class-attribute
instance-attribute
¶
Connection port. Env: SNOWFLAKE_PORT.
protocol
class-attribute
instance-attribute
¶
Protocol: 'http' or 'https'. Env: SNOWFLAKE_PROTOCOL.
login_timeout
class-attribute
instance-attribute
¶
Login retry timeout duration string. Env: SNOWFLAKE_LOGIN_TIMEOUT.
request_timeout
class-attribute
instance-attribute
¶
Request retry timeout duration string. Env: SNOWFLAKE_REQUEST_TIMEOUT.
client_timeout
class-attribute
instance-attribute
¶
Network roundtrip timeout duration string. Env: SNOWFLAKE_CLIENT_TIMEOUT.
tls_skip_verify
class-attribute
instance-attribute
¶
Disable TLS certificate verification. Env: SNOWFLAKE_TLS_SKIP_VERIFY.
ocsp_fail_open_mode
class-attribute
instance-attribute
¶
OCSP fail-open mode (True = allow connection on OCSP errors). Env: SNOWFLAKE_OCSP_FAIL_OPEN_MODE.
keep_session_alive
class-attribute
instance-attribute
¶
Prevent session expiry during long operations. Env: SNOWFLAKE_KEEP_SESSION_ALIVE.
app_name
class-attribute
instance-attribute
¶
Application identifier sent to Snowflake. Env: SNOWFLAKE_APP_NAME.
disable_telemetry
class-attribute
instance-attribute
¶
Disable Snowflake usage telemetry. Env: SNOWFLAKE_DISABLE_TELEMETRY.
cache_mfa_token
class-attribute
instance-attribute
¶
Cache MFA token for subsequent connections. Env: SNOWFLAKE_CACHE_MFA_TOKEN.
store_temp_creds
class-attribute
instance-attribute
¶
Cache ID token for SSO. Env: SNOWFLAKE_STORE_TEMP_CREDS.
check_private_key_exclusion
¶
Raise ValidationError if both private_key_path and private_key_pem are set.
Source code in src/adbc_poolhouse/_snowflake_config.py
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Returns a dict[str, str] suitable for passing as db_kwargs to
adbc_driver_manager.dbapi.connect(). All values are strings;
None fields are omitted. Boolean fields are always included as
'true'/'false' strings.
Key names follow adbc_driver_snowflake DatabaseOptions and
AuthType enums. 'username' and 'password' are plain
string keys (not prefixed with 'adbc.snowflake.sql.*').
Source code in src/adbc_poolhouse/_snowflake_config.py
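A sketch of the key-naming rules above (the `adbc.snowflake.sql.*` names are illustrative here; canonical values live in the driver's `DatabaseOptions` and `AuthType` enums):

```python
# 'username' and 'password' are plain keys; other options use
# adbc.snowflake.sql.* names (illustrative, see DatabaseOptions).
# None fields omitted; booleans always serialized as 'true'/'false'.
account, user, password = "myorg-myaccount", "alice", "hunter2"
tls_skip_verify = False
role = None  # unset, so omitted

kwargs = {
    "adbc.snowflake.sql.account": account,
    "username": user,
    "password": password,
    "adbc.snowflake.sql.client_option.tls_skip_verify": (
        "true" if tls_skip_verify else "false"
    ),
}
if role is not None:
    kwargs["adbc.snowflake.sql.role"] = role

assert kwargs["username"] == "alice"
assert "adbc.snowflake.sql.role" not in kwargs
```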
SQLiteConfig
¶
Bases: BaseWarehouseConfig
SQLite warehouse configuration.
Covers SQLite ADBC connection parameters. Pool tuning fields (pool_size, max_overflow, timeout, recycle) are inherited from BaseWarehouseConfig and loaded from SQLITE_* environment variables.
Unlike DuckDB, an SQLite in-memory database is shared across all connections in the pool. This means pool_size > 1 with database=':memory:' is almost always unintended (connection state races across a single shared DB), so it is rejected by a validator.
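The shared-in-memory hazard can be illustrated with the stdlib `sqlite3` module's shared-cache URIs. This is an analogy for the pooling behavior described above, not the ADBC driver itself:

```python
import sqlite3

# Two separate connections to a shared-cache in-memory database:
# both see the same data, mirroring how pooled connections to a
# shared ':memory:' database would race on one another's state.
c1 = sqlite3.connect("file::memory:?cache=shared", uri=True)
c2 = sqlite3.connect("file::memory:?cache=shared", uri=True)

c1.execute("CREATE TABLE t (x INTEGER)")
c1.execute("INSERT INTO t VALUES (42)")
c1.commit()

# The second connection observes the first connection's write.
rows = c2.execute("SELECT x FROM t").fetchall()
assert rows == [(42,)]

c1.close()
c2.close()
```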
Example
database
class-attribute
instance-attribute
¶
File path or ':memory:'. Env: SQLITE_DATABASE.
pool_size
class-attribute
instance-attribute
¶
Number of connections in the pool. Default 1 for in-memory SQLite.
SQLite in-memory databases are shared across all connections in the pool — unlike DuckDB, where each connection gets its own isolated empty DB. Use pool_size=1 for ':memory:', or set database to a file path if you need pool_size > 1. Setting pool_size > 1 with database=':memory:' raises ValidationError. Env: SQLITE_POOL_SIZE.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver connection kwargs. |
Source code in src/adbc_poolhouse/_sqlite_config.py
TrinoConfig
¶
Bases: BaseWarehouseConfig
Trino warehouse configuration.
Uses the Columnar ADBC Trino driver (Foundry-distributed, not on PyPI). Supports URI-based or decomposed field connection specification.
Pool tuning fields are inherited and loaded from TRINO_* env vars.
Note: This driver is distributed via the ADBC Driver Foundry, not PyPI. See the installation guide for Foundry setup instructions.
uri
class-attribute
instance-attribute
¶
Connection URI. Format: trino://[user[:password]@]host[:port][/catalog[/schema]][?params]
Env: TRINO_URI.
host
class-attribute
instance-attribute
¶
Trino coordinator hostname. Alternative to URI. Env: TRINO_HOST.
port
class-attribute
instance-attribute
¶
Trino coordinator port. Defaults: 8080 (HTTP), 8443 (HTTPS). Env: TRINO_PORT.
password
class-attribute
instance-attribute
¶
Password (HTTPS connections only). Env: TRINO_PASSWORD.
catalog
class-attribute
instance-attribute
¶
Default catalog. Env: TRINO_CATALOG.
schema_
class-attribute
instance-attribute
¶
Default schema. Python attribute is schema_ to avoid Pydantic conflicts. Env: TRINO_SCHEMA.
ssl
class-attribute
instance-attribute
¶
Use HTTPS. Disable for local development clusters. Env: TRINO_SSL.
ssl_verify
class-attribute
instance-attribute
¶
Verify SSL certificate. Env: TRINO_SSL_VERIFY.
source
class-attribute
instance-attribute
¶
Application identifier sent to Trino coordinator. Env: TRINO_SOURCE.
to_adbc_kwargs
¶
Convert config to ADBC driver connection kwargs.
Supports two modes:

- URI mode (`uri` set): returns `{uri: ...}`.
- Decomposed mode: maps individual fields to their ADBC key equivalents. Boolean defaults (`ssl`, `ssl_verify`) are always included as `'true'`/`'false'` strings.
Returns:

| Type | Description |
|---|---|
| `dict[str, str]` | Dict of ADBC driver kwargs. |
Source code in src/adbc_poolhouse/_trino_config.py
close_pool
¶
Dispose a pool and close its underlying ADBC source connection.
Replaces the two-step pattern pool.dispose() followed by
pool._adbc_source.close(). Always call this instead of calling
pool.dispose() directly to avoid leaving the ADBC source connection open.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pool` | `QueuePool` | A pool returned by `create_pool`. | required |
Example
Source code in src/adbc_poolhouse/_pool_factory.py
create_pool
¶
    create_pool(
        config: WarehouseConfig | None = None,
        *,
        driver_path: str | None = None,
        db_kwargs: dict[str, str] | None = None,
        entrypoint: str | None = None,
        dbapi_module: str | None = None,
        pool_size: int = 5,
        max_overflow: int = 3,
        timeout: int = 30,
        recycle: int = 3600,
        pre_ping: bool = False,
    ) -> sqlalchemy.pool.QueuePool
Create a SQLAlchemy QueuePool backed by an ADBC driver.
Three call patterns are supported:
pool = create_pool(DuckDBConfig(...)) # from a config object
pool = create_pool(driver_path="...", ...) # native ADBC driver
pool = create_pool(dbapi_module="...", ...) # Python dbapi module
The config path extracts driver information from the config object's methods. The two raw paths accept driver arguments directly, bypassing config objects entirely.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `WarehouseConfig \| None` | A warehouse config model instance (e.g. `DuckDBConfig`). | `None` |
| `driver_path` | `str \| None` | Path to a native ADBC driver shared library, or a short driver name for manifest-based resolution. | `None` |
| `db_kwargs` | `dict[str, str] \| None` | ADBC connection keyword arguments as string key/value pairs. | `None` |
| `entrypoint` | `str \| None` | ADBC entry-point symbol. Only used with `driver_path`. | `None` |
| `dbapi_module` | `str \| None` | Dotted module name for a Python package implementing the ADBC dbapi interface. | `None` |
| `pool_size` | `int` | Number of connections to keep in the pool. Default: 5. | `5` |
| `max_overflow` | `int` | Extra connections allowed above pool_size. Default: 3. | `3` |
| `timeout` | `int` | Seconds to wait for a connection before raising. Default: 30. | `30` |
| `recycle` | `int` | Seconds before a connection is recycled. Default: 3600. | `3600` |
| `pre_ping` | `bool` | Whether to ping connections before checkout. Default: False. Pre-ping does not function on a standalone QueuePool without a SQLAlchemy dialect; recycle is the preferred health mechanism. | `False` |
Returns:

| Type | Description |
|---|---|
| `QueuePool` | A configured `sqlalchemy.pool.QueuePool`. |

Raises:

| Type | Description |
|---|---|
| `TypeError` | If none of `config`, `driver_path`, or `dbapi_module` is provided. |
| `ImportError` | If the required ADBC driver is not installed. |
Example

Config path:

    from adbc_poolhouse import create_pool, close_pool
    from adbc_poolhouse import DuckDBConfig

    pool = create_pool(DuckDBConfig(database="/tmp/my.db"))
    close_pool(pool)

Raw native driver path:
Source code in src/adbc_poolhouse/_pool_factory.py
managed_pool
¶
    managed_pool(
        config: WarehouseConfig | None = None,
        *,
        driver_path: str | None = None,
        db_kwargs: dict[str, str] | None = None,
        entrypoint: str | None = None,
        dbapi_module: str | None = None,
        pool_size: int = 5,
        max_overflow: int = 3,
        timeout: int = 30,
        recycle: int = 3600,
        pre_ping: bool = False,
    ) -> contextlib.AbstractContextManager[sqlalchemy.pool.QueuePool]
Context manager that creates a pool and closes it on exit.
The pool is created when the with block is entered and disposed
(via close_pool) when the block exits, whether normally or by
exception.
Three call patterns are supported:
with managed_pool(DuckDBConfig(...)) as pool: ... # config
with managed_pool(driver_path="...", ...) as pool: ... # native
with managed_pool(dbapi_module="...", ...) as pool: ... # dbapi
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `WarehouseConfig \| None` | A warehouse config model instance (e.g. `DuckDBConfig`). | `None` |
| `driver_path` | `str \| None` | Path to a native ADBC driver shared library, or a short driver name for manifest-based resolution. | `None` |
| `db_kwargs` | `dict[str, str] \| None` | ADBC connection keyword arguments as string key/value pairs. | `None` |
| `entrypoint` | `str \| None` | ADBC entry-point symbol. Only used with `driver_path`. | `None` |
| `dbapi_module` | `str \| None` | Dotted module name for a Python package implementing the ADBC dbapi interface. | `None` |
| `pool_size` | `int` | Number of connections to keep in the pool. Default: 5. | `5` |
| `max_overflow` | `int` | Extra connections allowed above pool_size. Default: 3. | `3` |
| `timeout` | `int` | Seconds to wait for a connection before raising. Default: 30. | `30` |
| `recycle` | `int` | Seconds before a connection is recycled. Default: 3600. | `3600` |
| `pre_ping` | `bool` | Whether to ping connections before checkout. Default: False. | `False` |
Yields:

| Type | Description |
|---|---|
| `QueuePool` | A configured `sqlalchemy.pool.QueuePool`, closed when the `with` block exits. |

Raises:

| Type | Description |
|---|---|
| `TypeError` | If none of `config`, `driver_path`, or `dbapi_module` is provided. |
| `ImportError` | If the required ADBC driver is not installed. |
Example
Config path:

    from adbc_poolhouse import DuckDBConfig, managed_pool

    with managed_pool(DuckDBConfig(database="/tmp/wh.db")) as pool:
        with pool.connect() as conn:
            cursor = conn.cursor()
            cursor.execute("SELECT 1")

Raw native driver path:
Source code in src/adbc_poolhouse/_pool_factory.py