The Connector is available as a statically linked binary in a multi-architecture distroless Docker image.

Container Image Registries

You can pull Connector images from either of these registries:
  • AWS ECR: 654654333078.dkr.ecr.<region>.amazonaws.com/formalco-prod-connector:<tag>
  • GCP Artifact Registry: us-docker.pkg.dev/formal-public-assets/formalco-prod-connector/formalco-prod-connector:<tag>
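For example, pulling from either registry might look like the following; the `latest` tag and `us-east-1` region are placeholders, so substitute the version you intend to deploy and the region of your ECR registry:

```shell
# GCP Artifact Registry (public) — tag is a placeholder; pin a version in production
docker pull us-docker.pkg.dev/formal-public-assets/formalco-prod-connector/formalco-prod-connector:latest

# AWS ECR — authenticate first, then pull (region and tag are placeholders)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 654654333078.dkr.ecr.us-east-1.amazonaws.com
docker pull 654654333078.dkr.ecr.us-east-1.amazonaws.com/formalco-prod-connector:latest
```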

Infrastructure Requirements

  • Operating System: Linux environment
  • Architecture: AMD64 or ARM64
  • Container Runtime: Docker or compatible container runtime

Network Requirements

The Connector requires:
  • Network access to api.joinformal.com (Formal Control Plane)
  • Outbound access to your protected resources
  • Inbound access from clients on configured listener ports
For customers deploying on AWS, the Connector can connect to the Formal Control Plane using AWS PrivateLink instead of traversing the public internet. This provides enhanced security and network isolation.
Service Details:
  • Service Name: com.amazonaws.vpce.eu-west-1.vpce-svc-01bfea09d5ec08d36
  • Region: eu-west-1
  • Endpoint Type: Interface (services that use NLBs or GWLBs)
Setup Steps:
  1. Create VPC Endpoint Using AWS Console:
    • Navigate to VPC → Endpoints → Create Endpoint
    • Select Other endpoint services
    • Enter service name: com.amazonaws.vpce.eu-west-1.vpce-svc-01bfea09d5ec08d36
    • Click Verify service
    • Select your VPC and subnets where the Connector is deployed
    • Configure security groups to allow outbound HTTPS traffic (port 443) from the Connector
    • Important: Check Enable Private DNS Name
    • If your VPC is in a region other than eu-west-1, enable Cross-region endpoint and specify eu-west-1 as the target region
    • Review and create the endpoint
  2. Configuration Requirements
    • Private DNS: Must be enabled for proper DNS resolution of Control Plane endpoints
    • Cross-region Support: Required if your VPC is in any region other than eu-west-1
    • Security Groups: Must allow outbound HTTPS (port 443) from Connector to VPC endpoint
    • Network ACLs: Ensure subnet ACLs permit traffic to/from the VPC endpoint
  3. Verify Connectivity
    Once the endpoint is in the Available state, test from your Connector instance:
    # Verify DNS resolves to private IP
    nslookup api.joinformal.com
    
    # Test HTTPS connectivity
    curl -v https://api.joinformal.com
    
The VPC endpoint uses Private DNS to automatically redirect api.joinformal.com traffic through the private connection. No configuration changes are needed on the Connector side.
For cross-region deployments (regions other than eu-west-1), you must enable the cross-region endpoint feature and specify eu-west-1 as the target region, or the connection will fail.
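The console steps above can also be sketched with the AWS CLI. The VPC, subnet, and security group IDs below are placeholders for your environment; for cross-region VPCs, enable the cross-region endpoint option as described above:

```shell
# Create an Interface endpoint to the Formal Control Plane service.
# vpc-0abc123, subnet-0abc123, subnet-0def456, and sg-0abc123 are placeholder IDs.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc123 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.eu-west-1.vpce-svc-01bfea09d5ec08d36 \
  --subnet-ids subnet-0abc123 subnet-0def456 \
  --security-group-ids sg-0abc123 \
  --private-dns-enabled

# Check that the endpoint has reached the "available" state
aws ec2 describe-vpc-endpoints \
  --filters Name=service-name,Values=com.amazonaws.vpce.eu-west-1.vpce-svc-01bfea09d5ec08d36 \
  --query 'VpcEndpoints[0].State'
```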

Resource Requirements

The Connector requires adequate resources to apply policies with minimal latency and maintain all necessary context in RAM (Control Plane data, query metadata, response data, log buffer, etc.).
Spec    Minimum    Recommended
CPU     1 core     2 cores per node (2 nodes)
RAM     2 GB       4 GB per node (2 nodes)
Disk    0 GB       10 GB
CPU Throttling Warning: If you don’t allocate entire CPU cores to the Connector and rely on CPU throttling (e.g., CFS quotas on Linux, CPU limits in Kubernetes), you will most likely experience increased latency and timeouts during peak loads.
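To see whether throttling is occurring, you can inspect the cgroup CPU statistics from inside the container. The path below is for cgroup v2; on cgroup v1 hosts the equivalent file is /sys/fs/cgroup/cpu/cpu.stat:

```shell
# nr_throttled counts scheduling periods in which the cgroup hit its CPU quota;
# a value that grows steadily under peak load indicates CPU throttling.
cat /sys/fs/cgroup/cpu.stat
# Fields of interest: nr_periods, nr_throttled, throttled_usec
```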

Production Recommendations

For production deployments, we recommend:
  • High Availability: Run at least 2 nodes behind a load balancer
  • Resource Allocation: 2 CPU cores and 4 GB RAM per node
  • Load Distribution: Distribute traffic across multiple Connector instances
  • Persistent Storage: Mount a persistent volume to /formal/logs for the log spool
CPU, RAM, and disk requirements vary based on usage patterns, query types, traffic volume, and enforced policies. Monitor your Connector with a telemetry collector and scale resources as needed.
The Connector buffers up to 400 MB of logs in memory before spooling to disk. A typical log entry is around 1 KB – at 100 requests per second, 10 GB of disk provides more than a day of outage buffer. The Connector works without disk (spooling is disabled), but logs may be lost during disruptions.
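The sizing claim above can be checked with quick arithmetic, using the figures from the text (about 1 KB per log entry at 100 requests per second):

```shell
# 10 GB of spool = 10 * 1024 * 1024 KB; at ~100 KB/s of log volume
# it takes roughly 104,858 seconds to fill.
seconds=$((10 * 1024 * 1024 / 100))
hours=$((seconds / 3600))
echo "${hours} hours of outage buffer"   # ~29 hours, i.e. more than a day
```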
When deploying multiple instances, Connectors automatically attempt to form a cluster with shared state. This enables Connectors to coordinate rate limiting across all instances. See the Clustering page for details.

AWS ECS Fargate

Deploy as a Fargate service behind a Network Load Balancer with multi-AZ availability.

Kubernetes

Deploy using Formal Helm charts on any Kubernetes cluster (EKS, GKE, AKS, on-premises).

Docker

Run as a standalone container (development/testing only).
See the Quickstart guide for step-by-step deployment instructions.

Environment Variables

The Connector is configured primarily through the Control Plane, but requires the following environment variable:
Variable                        Type      Description
FORMAL_CONTROL_PLANE_API_KEY    String    API token to authenticate with the Control Plane (obtained when creating the Connector)
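For development or testing, a minimal standalone run might look like the following. The image tag, listener port, and API key value are placeholders; the /formal/logs mount is the log-spool path from the production recommendations above:

```shell
# Placeholder tag, port, and key — supply your real API key from the Control Plane
docker run -d \
  -e FORMAL_CONTROL_PLANE_API_KEY="<your-api-key>" \
  -p 5432:5432 \
  -v formal-logs:/formal/logs \
  us-docker.pkg.dev/formal-public-assets/formalco-prod-connector/formalco-prod-connector:latest
```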