Configuration Reference
Complete reference for Helm values configuration.
Global Settings
```yaml
global:
  # Environment: development, staging, production
  environment: production
  # Image registry prefix (for private registries)
  imageRegistry: "your-registry.example.com"
  # Image pull secrets for private registries
  imagePullSecrets:
    - name: regcred
  # RDS SSL configuration (for AWS RDS)
  rdsSSL:
    enabled: true
```
Proxy & Network
See the dedicated Network & Proxy Configuration page for full details.
```yaml
global:
  proxy:
    httpsProxy: "http://proxy.corp.example.com:3128"
    httpProxy: ""
    noProxy: "*.internal.corp.com"
  networkPolicies:
    enabled: false
    allowedExternalCIDRs: []
    allowDNS: true
```
Connector API Base URL Overrides
```yaml
connectors:
  slack:
    apiBaseUrl: "" # Override Slack API base URL
  teams:
    graphApiBaseUrl: "" # Override Microsoft Graph API base URL
  jira:
    apiBaseUrl: "" # Override Jira/Atlassian API base URL
  github:
    apiBaseUrl: "" # Override GitHub API base URL (or GitHub Enterprise)
  linear:
    apiBaseUrl: "" # Override Linear API base URL
```
Database
```yaml
database:
  # Name of the Kubernetes secret containing database credentials
  secretName: align-database
  # The secret should contain:
  #   url: postgresql://user:pass@host:port/dbname
  #   host: hostname (for health checks)
  #   port: 5432
  #   username: database user
  #   password: database password
```
Connection Pooling
Align's historical scan feature (Discover) processes items in parallel, which can put pressure on database connections. We provide several options depending on your infrastructure.
Option 1: PgBouncer Sidecar (Recommended for Self-Hosted)
The Helm chart includes an optional PgBouncer sidecar that provides enterprise-grade connection pooling on any cloud provider (AWS, GCP, Azure, on-premises):
- Cost: Free (runs as a sidecar container)
- Benefits: Connection multiplexing (1000 app connections → 50 DB connections), works everywhere
- Configuration: Enable in your values override
```yaml
gateway:
  pgbouncer:
    enabled: true
    # Pool mode: transaction (recommended), session, or statement
    poolMode: "transaction"
    # Maximum connections from the gateway to PgBouncer
    maxClientConnections: 1000
    # Connections PgBouncer maintains to PostgreSQL
    defaultPoolSize: 50
    minPoolSize: 10
    reservePoolSize: 5
  extraEnv:
    # With PgBouncer, you can use higher concurrency
    - name: SQS_IMPORT_WORKER_CONCURRENCY
      value: "30"
```
When enabled, the gateway automatically routes database connections through the PgBouncer sidecar running on localhost:6432.
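Conceptually, the rewrite looks like this: the credentials and database name stay the same, and only the host and port change to the sidecar. This is an illustration of the routing (the URL is made up; Align's gateway performs this internally):

```python
from urllib.parse import urlparse

# Hypothetical direct connection string
db_url = "postgresql://align:s3cret@db.internal:5432/align"
parsed = urlparse(db_url)

# Route through the PgBouncer sidecar on localhost:6432
pooled = parsed._replace(netloc=f"{parsed.username}:{parsed.password}@localhost:6432")
print(pooled.geturl())
# -> postgresql://align:s3cret@localhost:6432/align
```

PgBouncer then multiplexes these local connections onto its smaller pool of real connections to PostgreSQL.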
Option 2: RDS Proxy (AWS)
If you're running on AWS, RDS Proxy is a managed alternative:
- Cost: ~$22/month
- Benefits: Connection multiplexing, automatic failover, IAM authentication
- Configuration: Point `DATABASE_URL` at the proxy endpoint instead of RDS directly
Setup steps:

1. Create the RDS Proxy in AWS (via Console, CLI, or Terraform):
   - Target your PostgreSQL RDS instance
   - Create an IAM role with Secrets Manager access
   - Configure the security group to allow traffic from your EKS cluster
2. Update your database secret to use the proxy endpoint:

   ```bash
   kubectl create secret generic align-database \
     --namespace align \
     --from-literal=url="postgresql://user:pass@my-proxy.proxy-xxx.us-east-1.rds.amazonaws.com:5432/align?sslmode=require" \
     --from-literal=host="my-proxy.proxy-xxx.us-east-1.rds.amazonaws.com" \
     --from-literal=port="5432" \
     --from-literal=username="user" \
     --from-literal=password="pass" \
     --from-literal=database="align"
   ```

3. Helm values: PgBouncer is disabled by default, so no changes are needed:

   ```yaml
   gateway:
     pgbouncer:
       enabled: false # Default; no sidecar needed with RDS Proxy
     extraEnv:
       # With RDS Proxy, you can use higher concurrency
       - name: SQS_IMPORT_WORKER_CONCURRENCY
         value: "30"
       # App pool is just for local buffering
       - name: PG_POOL_MAX
         value: "10"
   ```
RDS Proxy is transparent to the application: you simply point `DATABASE_URL` at the proxy endpoint instead of the direct RDS endpoint.
Option 3: Application-Level Pooling (Fallback)
If neither PgBouncer nor RDS Proxy is in place, Align falls back to application-level pooling, with automatic retry and exponential backoff when the connection pool is exhausted:
```yaml
gateway:
  pgbouncer:
    enabled: false
  extraEnv:
    # Keep workers <= pool_size * num_replicas / 2
    # Example: 2 replicas, pool=15 → workers=15 max
    - name: SQS_IMPORT_WORKER_CONCURRENCY
      value: "15"
    # Size based on your PostgreSQL max_connections
    - name: PG_POOL_MAX
      value: "15"
```
Tuning guidelines (without pooling):

- Check your PostgreSQL `max_connections` (typically 100-200)
- Reserve ~20 connections for admin/migrations
- Divide the remaining connections by the number of services (gateway + brain)
- Set `PG_POOL_MAX` to that value per service
- Set `SQS_IMPORT_WORKER_CONCURRENCY` to `PG_POOL_MAX * num_replicas / 2`
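As a worked example of these guidelines, here is the arithmetic for a hypothetical cluster (the numbers are illustrative, not recommendations):

```python
# Sizing arithmetic from the tuning guidelines above, for a hypothetical
# 100-connection PostgreSQL serving a gateway and a brain service.
max_connections = 100   # PostgreSQL max_connections
admin_reserve = 20      # kept free for admin/migrations
num_services = 2        # gateway + brain
num_replicas = 2        # replicas per service

# PG_POOL_MAX per service: remaining connections split across services
pg_pool_max = (max_connections - admin_reserve) // num_services

# SQS_IMPORT_WORKER_CONCURRENCY = PG_POOL_MAX * num_replicas / 2
worker_concurrency = pg_pool_max * num_replicas // 2

print(pg_pool_max, worker_concurrency)  # -> 40 40
```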
Gateway (API Server)
```yaml
gateway:
  enabled: true
  replicaCount: 2
  # Frontend URL for decision links in Slack/Teams
  frontendUrl: "https://app.yourdomain.com"
  image:
    repository: align/gateway
    tag: latest
    pullPolicy: IfNotPresent
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "500m"
  service:
    type: ClusterIP
    port: 8080
  # Autoscaling
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
  # Pod Disruption Budget
  pdb:
    enabled: true
    minAvailable: 1
  # Ingress
  ingress:
    enabled: true
    className: "nginx" # or "traefik"
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: api.yourdomain.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: api-tls
        hosts:
          - api.yourdomain.com
  # Redis for job state and real-time progress (required for multi-pod deployments)
  # For single-pod deployments, leave empty to use in-memory storage
  # Works with any Redis-compatible service: ElastiCache, Memorystore, Azure Cache, Dragonfly, KeyDB
  redis:
    # Direct URL (e.g., redis://host:6379 or rediss://user:pass@host:6379)
    url: ""
    # Or use a Kubernetes secret:
    secretName: ""
    secretKey: "url"
  # Message queues for distributed job processing (optional)
  # Leave empty for an in-memory queue (works for single-pod and most deployments)
  # For multi-pod, high-volume deployments, configure SQS (AWS) queue URLs
  # The queue backend is pluggable - see the QueueBackend<T> interface for adding new providers
  sqs:
    # Queue for Discover historical scans
    importJobQueueUrl: ""
    # Queue for bulk approval with cross-linkage
    bulkApprovalQueueUrl: ""
    # Queue for bulk delete operations
    bulkDeleteQueueUrl: ""
  # PgBouncer sidecar for connection pooling (recommended for self-hosted)
  # See the "Connection Pooling" section above for details
  pgbouncer:
    enabled: false
    image:
      repository: bitnami/pgbouncer
      tag: "1.23.1"
    poolMode: "transaction"
    maxClientConnections: 1000
    defaultPoolSize: 50
    minPoolSize: 10
    reservePoolSize: 5
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "200m"
  # OIDC Authentication (for self-hosted SSO)
  # Allows you to use your own identity provider (Azure AD, Okta, Auth0, Keycloak)
  oidc:
    enabled: true
    scopes: "openid email profile"
    # For single-tenant deployments, set this to your tenant ID:
    defaultTenantId: "your-tenant-uuid"
  # Additional environment variables
  extraEnv:
    - name: LOG_LEVEL
      value: "info"
```
Discover (Historical Scan) Tuning
The Discover feature scans your connected platforms for historical decisions. It uses platform-specific batch sizes and confidence thresholds optimized for each connector:
| Platform | Batch Size | Min Confidence | Notes |
|---|---|---|---|
| Slack | 12 | 0.20 | Larger batches - thread items are shorter |
| Teams | 12 | 0.20 | Similar to Slack - chat-based items |
| GitHub | 8 | 0.30 | Medium batches - PRs/issues with comments |
| Jira | 8 | 0.30 | Medium batches - structured ticket content |
| Confluence | 8 | 0.30 | Medium batches - document-style content |
These defaults balance detection quality against scan speed. You can override them with environment variables:
```yaml
gateway:
  extraEnv:
    # Override the minimum confidence threshold for all platforms (default: platform-specific)
    # Items below this threshold are filtered out. Lower values catch more
    # borderline decisions for human review. Range: 0.0-1.0
    - name: IMPORT_MIN_CONFIDENCE
      value: "0.2"
    # Worker concurrency - number of parallel batch processors (default: 12 in-memory, 20 SQS)
    # Higher = faster scans but more database connections consumed.
    # Must be < PG_POOL_MAX to leave headroom for API requests.
    - name: SQS_IMPORT_WORKER_CONCURRENCY
      value: "20"
    # Application connection pool size per gateway replica (default: 15)
    # Total DB connections = PG_POOL_MAX × number of gateway replicas
    # Ensure this fits within your PostgreSQL max_connections
    - name: PG_POOL_MAX
      value: "25"
    # Database statement timeout in milliseconds (default: 120000 = 2 min)
    # Increase for large tenant scans that process thousands of items
    - name: PG_STATEMENT_TIMEOUT
      value: "300000"

    # --- Parallelism Settings ---
    # These control concurrent API calls to connectors during scanning.
    # Higher values = faster scans but more API rate limit pressure.

    # Parallel Slack channels to scan simultaneously (default: 10)
    - name: IMPORT_PARALLEL_CHANNELS
      value: "10"
    # Parallel thread fetches for Slack/Teams (default: 20)
    - name: IMPORT_PARALLEL_THREADS
      value: "20"
    # Parallel comment fetches for Jira/GitHub (default: 20)
    - name: IMPORT_PARALLEL_COMMENTS
      value: "20"
    # Parallel meeting fetches for Teams (default: 10)
    - name: IMPORT_PARALLEL_MEETINGS
      value: "10"
    # Maximum parallel GitHub repos to scan simultaneously (default: 10)
    # Higher values speed up GitHub scans but increase API rate limit pressure
    - name: IMPORT_PARALLEL_REPOS
      value: "10"
    # Parallel GitHub detail fetches per repo (default: 20)
    # Controls concurrent issue/PR detail API calls within each repo
    - name: IMPORT_PARALLEL_GITHUB_DETAILS
      value: "20"
```
Performance tips:

- Single-pod deployments: use `SQS_IMPORT_WORKER_CONCURRENCY=12` (default) and `PG_POOL_MAX=15`
- Multi-pod with Redis: use `SQS_IMPORT_WORKER_CONCURRENCY=20-30` and `PG_POOL_MAX=25-40`
- With PgBouncer: you can safely increase `SQS_IMPORT_WORKER_CONCURRENCY` to 30+ since PgBouncer handles connection multiplexing
- Large organizations (1000+ items): increase `SQS_IMPORT_WORKER_CONCURRENCY` and add Brain replicas for more parallel LLM analysis
- Slack and Teams benefit most from lower confidence thresholds, as thread-based discussions often contain implicit decisions
- The Brain service dynamically scales its LLM token budget based on batch size, so no tuning is needed on the Brain side
Batch sizes are platform-specific and set automatically (Slack/Teams: 12; GitHub/Jira/Confluence: 8). If `IMPORT_BATCH_SIZE` is set, it overrides the platform-specific defaults for all platforms; in most cases you can safely remove it.
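The override behavior can be sketched as follows (the function and dict names are illustrative, not Align's internals):

```python
import os

# Platform-specific defaults from the table above
PLATFORM_BATCH_SIZES = {
    "slack": 12, "teams": 12,
    "github": 8, "jira": 8, "confluence": 8,
}

def effective_batch_size(platform: str) -> int:
    """IMPORT_BATCH_SIZE, when set, overrides every platform default."""
    override = os.environ.get("IMPORT_BATCH_SIZE")
    if override:
        return int(override)
    return PLATFORM_BATCH_SIZES[platform]
```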
OIDC Authentication
Self-hosted deployments can use their own identity provider for single sign-on (SSO). Align supports any OIDC-compliant provider:
- Azure AD / Entra ID
- Okta
- Auth0
- Keycloak
- Google Workspace
Setup Steps
1. Create an OAuth application in your identity provider:
   - Set the redirect URI to `https://api.yourdomain.com/auth/callback`
   - Enable the `openid`, `email`, and `profile` scopes
   - Note your Client ID and Client Secret
2. Create the OIDC secret:

   ```bash
   kubectl create secret generic align-oidc \
     --namespace align \
     --from-literal=issuer-url="https://login.microsoftonline.com/{tenant-id}/v2.0" \
     --from-literal=client-id="your-client-id" \
     --from-literal=client-secret="your-client-secret"
   ```

3. Enable OIDC in your values:

   ```yaml
   gateway:
     oidc:
       enabled: true
       scopes: "openid email profile"
       defaultTenantId: "your-align-tenant-uuid" # For single-tenant deployments
   ```

4. Deploy with `helm upgrade`
Provider-Specific Configuration
Azure AD / Entra ID
Issuer URL: `https://login.microsoftonline.com/{tenant-id}/v2.0`

- Go to Azure Portal → Azure Active Directory → App registrations
- Create a new registration
- Add a redirect URI: `https://api.yourdomain.com/auth/callback`
- Generate a client secret
- Grant `openid`, `email`, and `profile` permissions
Okta
Issuer URL: `https://{your-domain}.okta.com`

- Create a new OIDC Web Application in Okta Admin
- Set the redirect URI: `https://api.yourdomain.com/auth/callback`
- Note the Client ID and Client Secret
Keycloak
Issuer URL: `https://{host}/realms/{realm}`

- Create a new client in your realm
- Set Access Type to "confidential"
- Add the redirect URI: `https://api.yourdomain.com/auth/callback`
- Enable "Standard Flow"
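Before wiring an issuer URL into the secret, it is worth sanity-checking it against the provider's discovery document, which every OIDC-compliant issuer serves at a well-known path (the `acme` domain below is a placeholder):

```python
def discovery_url(issuer: str) -> str:
    """Discovery document location per the OIDC Discovery spec."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Fetch this URL and confirm it returns JSON whose "issuer" field
# matches the issuer URL you configured exactly.
print(discovery_url("https://acme.okta.com"))
# -> https://acme.okta.com/.well-known/openid-configuration
```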
Brain (AI Service)
```yaml
brain:
  enabled: true
  replicaCount: 2
  image:
    repository: align/brain
    tag: latest
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"
    limits:
      memory: "1Gi"
      cpu: "1000m"
  service:
    type: ClusterIP
    port: 8090
  # Probe configuration (the brain handles long-running LLM calls)
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 30
    failureThreshold: 5
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 30
    timeoutSeconds: 15
    failureThreshold: 6
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 8
    targetCPUUtilizationPercentage: 70
  extraEnv: []
```
UI (Frontend)
```yaml
ui:
  enabled: true
  replicaCount: 2
  image:
    repository: align/ui
    tag: latest
  resources:
    requests:
      memory: "128Mi"
      cpu: "50m"
    limits:
      memory: "256Mi"
      cpu: "200m"
  service:
    type: ClusterIP
    port: 3000
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: app.yourdomain.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: app-tls
        hosts:
          - app.yourdomain.com
  extraEnv: []
```
Connectors
```yaml
connectors:
  # Common settings for all connectors
  common:
    resources:
      requests:
        memory: "128Mi"
        cpu: "50m"
      limits:
        memory: "256Mi"
        cpu: "200m"
  slack:
    enabled: true
    replicaCount: 1
    image:
      repository: align/connector-slack
      tag: latest
    service:
      port: 8081
    extraEnv: []
  teams:
    enabled: true
    replicaCount: 1
    image:
      repository: align/connector-teams
      tag: latest
    service:
      port: 8084
    extraEnv: []
  jira:
    enabled: true
    replicaCount: 1
    alignCommand: "/align" # Command to trigger capture
    image:
      repository: align/connector-jira
      tag: latest
    service:
      port: 8083
    extraEnv: []
  github:
    enabled: true
    replicaCount: 1
    image:
      repository: align/connector-github
      tag: latest
    service:
      port: 8085
    extraEnv: []
  linear:
    enabled: false
    replicaCount: 1
    image:
      repository: align/connector-linear
      tag: latest
    service:
      port: 8082
    extraEnv: []
  # MCP Server for AI Assistants (Claude Desktop, Cursor, VS Code)
  align:
    enabled: false
    replicaCount: 1
    image:
      repository: align/connector-align
      tag: latest
    service:
      port: 8089
    # Expose the MCP server externally for remote AI assistants
    ingress:
      enabled: false
      className: "nginx" # or "traefik"
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      hosts:
        - host: mcp.yourdomain.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: mcp-tls
          hosts:
            - mcp.yourdomain.com
    extraEnv: []
```
When the MCP connector is enabled with ingress, AI assistants such as Cursor can connect directly to `https://mcp.yourdomain.com/tools`. For Claude Desktop, you can also use stdio mode (a local process); see AI Assistants Integration.
Secrets
```yaml
secrets:
  # Create secrets from values (DEV ONLY)
  create: false
  # LLM API keys
  llm:
    openaiApiKey: ""
    anthropicApiKey: ""
    custom:
      baseUrl: "" # OpenAI-compatible endpoint
      model: "" # Model name
      apiKey: "" # Optional API key
    useLocalEmbeddings: true
  # Internal secrets
  internal:
    encryptionKey: ""
    jwtSecret: ""
    cookieSecret: ""
    serviceAuthToken: ""
```
OAuth credentials are not required. Self-hosted Align uses Align's centrally-managed OAuth apps for connector authentication.
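The internal secrets are arbitrary high-entropy strings. One way to generate candidate values, assuming 32 random bytes each is acceptable (check the chart's documentation for any length requirements):

```python
import secrets

# 32 random bytes = 64 hex characters per value; the key names
# mirror the `secrets.internal` values above.
internal = {
    "encryptionKey": secrets.token_hex(32),
    "jwtSecret": secrets.token_hex(32),
    "cookieSecret": secrets.token_hex(32),
    "serviceAuthToken": secrets.token_hex(32),
}
for key, value in internal.items():
    print(f"{key}: {value}")
```

Store the generated values in your secret manager or a Kubernetes secret rather than committing them to a values file.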
External Secrets
Align supports the External Secrets Operator for syncing secrets from your cloud provider's secret manager.
AWS Secrets Manager
```yaml
externalSecrets:
  enabled: true
  secretStore:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  refreshInterval: 1h
  aws:
    secretPath: "align/production"
```
GCP Secret Manager
```yaml
externalSecrets:
  enabled: true
  secretStore:
    name: gcp-secret-manager
    kind: ClusterSecretStore
  refreshInterval: 1h
  gcp:
    projectID: "your-project-id"
```
Azure Key Vault
```yaml
externalSecrets:
  enabled: true
  secretStore:
    name: azure-key-vault
    kind: ClusterSecretStore
  refreshInterval: 1h
  azure:
    vaultUrl: "https://your-vault.vault.azure.net"
```
HashiCorp Vault
```yaml
externalSecrets:
  enabled: true
  secretStore:
    name: hashicorp-vault
    kind: ClusterSecretStore
  refreshInterval: 1h
  vault:
    server: "https://vault.example.com"
    path: "secret/data/align"
```
Telemetry
```yaml
telemetry:
  enabled: true
  samplingRate: "1.0"
  batchSize: 100
  flushIntervalMs: 5000
  # Hourly aggregation CronJob
  aggregation:
    enabled: true
    schedule: "5 * * * *"
  # Daily rollup CronJob
  dailyRollup:
    enabled: true
    schedule: "30 1 * * *"
  # Data retention (days)
  retention:
    rawEventDays: 90
    hourlyMetricDays: 365
    dailyMetricDays: 730
```
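To see what the retention windows imply in practice, this sketch computes the deletion cutoffs for a fixed reference date (purely illustrative arithmetic; the field names match the values above):

```python
from datetime import datetime, timedelta, timezone

retention = {"rawEventDays": 90, "hourlyMetricDays": 365, "dailyMetricDays": 730}
now = datetime(2025, 6, 1, tzinfo=timezone.utc)  # fixed "now" for the example

# Data older than its cutoff becomes eligible for deletion
cutoffs = {name: now - timedelta(days=days) for name, days in retention.items()}
print(cutoffs["rawEventDays"].date())  # -> 2025-03-03
```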
Migrations
```yaml
migrations:
  enabled: true
  image:
    repository: align/migrations
    tag: latest
    pullPolicy: Always
  resources:
    requests:
      memory: "64Mi"
      cpu: "50m"
    limits:
      memory: "256Mi"
      cpu: "500m"
```
Security Context
```yaml
securityContext:
  # Enable a restrictive security context
  # Disable for local dev where containers run as root
  enabled: true
```
Node Scheduling
```yaml
nodeSelector: {}
tolerations: []
affinity: {}
```
Service Account
```yaml
serviceAccount:
  create: true
  name: align
  annotations: {}
```
Cloud-specific IAM
AWS (IRSA)
```yaml
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT:role/align-role
```
GCP (Workload Identity)
```yaml
serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: align@PROJECT.iam.gserviceaccount.com
```
Azure (Workload Identity)
```yaml
serviceAccount:
  annotations:
    azure.workload.identity/client-id: "CLIENT_ID"
  labels:
    azure.workload.identity/use: "true"
```