# Deployment

This guide covers deploying Fire Arrow Server to production and non-production environments, including Docker, standalone Java, database setup, monitoring, and multi-node considerations.
## Running with Docker (Recommended)

Docker is the recommended way to run Fire Arrow Server in production. The image includes the Java runtime and all dependencies.
### Basic Docker Run

```shell
docker run -d \
  --name fire-arrow-server \
  -p 8080:8080 \
  -e SPRING_DATASOURCE_URL="jdbc:postgresql://db-host:5432/hapi" \
  -e SPRING_DATASOURCE_USERNAME="hapi" \
  -e SPRING_DATASOURCE_PASSWORD="your-secure-password" \
  -e FIRE_ARROW_LICENSE_SOURCE="inline" \
  -e FIRE_ARROW_LICENSE_CONTENT="your-license-string" \
  -e FIRE_ARROW_LICENSE_DEPLOYMENT_ID="my-deployment" \
  -e FIRE_ARROW_LICENSE_RUNTIME_ENVIRONMENT="prod" \
  fire-arrow-server:latest
```
### Docker Compose

For environments where you want Fire Arrow Server and PostgreSQL running together:

```yaml
version: "3.8"
services:
  fire-arrow:
    image: fire-arrow-server:latest
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: "jdbc:postgresql://db:5432/hapi"
      SPRING_DATASOURCE_USERNAME: "hapi"
      SPRING_DATASOURCE_PASSWORD: "hapi"
      FIRE_ARROW_LICENSE_SOURCE: "inline"
      FIRE_ARROW_LICENSE_CONTENT: "${FIRE_ARROW_LICENSE_CONTENT}"
      FIRE_ARROW_LICENSE_DEPLOYMENT_ID: "my-deployment"
      FIRE_ARROW_LICENSE_RUNTIME_ENVIRONMENT: "prod"
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: "hapi"
      POSTGRES_USER: "hapi"
      POSTGRES_PASSWORD: "hapi"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hapi"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  pgdata:
```
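Assuming the file is saved as `docker-compose.yml`, the stack can be brought up and observed with:

```shell
docker compose up -d                # start the database and server in the background
docker compose logs -f fire-arrow   # follow server logs until startup completes
```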
## Spring Boot Standalone

If Docker isn't an option, run Fire Arrow Server as a standalone Java application:

```shell
java -jar fire-arrow-server.jar \
  --spring.config.additional-location=file:/etc/fire-arrow/application.yaml
```

Requirements:

- Java 17 or newer
- PostgreSQL database accessible from the host
- License file or inline license configured
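For long-running standalone deployments, a process supervisor keeps the server up across restarts and reboots. A minimal systemd unit might look like the following sketch (the paths, user name, and heap size are illustrative assumptions, not product defaults):

```ini
# /etc/systemd/system/fire-arrow.service
[Unit]
Description=Fire Arrow Server
After=network.target postgresql.service

[Service]
User=firearrow
ExecStart=/usr/bin/java -Xmx2g -jar /opt/fire-arrow/fire-arrow-server.jar \
  --spring.config.additional-location=file:/etc/fire-arrow/application.yaml
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now fire-arrow`.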
## PostgreSQL Setup

### Minimum Requirements

- PostgreSQL 14 or newer
- A dedicated database and user for Fire Arrow Server
- UTF-8 encoding

### Initial Setup

```sql
CREATE USER hapi WITH PASSWORD 'your-secure-password';
CREATE DATABASE hapi OWNER hapi ENCODING 'UTF8';
```
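Before starting the server, it can be worth verifying that the new role can connect from the application host (assumes the `psql` client is installed there; substitute your own host and password):

```shell
psql "postgresql://hapi:your-secure-password@db-host:5432/hapi" -c "SELECT 1;"
```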
### Production Tuning

For production workloads, adjust these PostgreSQL settings based on your expected data volume (the percentages assume a dedicated database host):

```
shared_buffers = 256MB          # ~25% of available RAM
effective_cache_size = 768MB    # ~75% of available RAM
work_mem = 16MB
maintenance_work_mem = 128MB
max_connections = 100
```

Fire Arrow Server uses connection pooling via HikariCP. Configure the pool size in your application.yaml:

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 5
```

Keep the combined pool size across all server instances below PostgreSQL's `max_connections`, leaving headroom for administrative connections.
## Environment Variable Overrides

Every property in application.yaml can be overridden with an environment variable. Spring Boot maps nested YAML keys to uppercase names, replacing dots and hyphens with underscores and list indices with numeric segments:

| Pattern | Example |
|---|---|
| Dots → underscores | `spring.datasource.url` → `SPRING_DATASOURCE_URL` |
| Hyphens → underscores | `fire-arrow.authentication.enabled` → `FIRE_ARROW_AUTHENTICATION_ENABLED` |
| List indices → numeric suffix | `fire-arrow.authentication.providers[0].name` → `FIRE_ARROW_AUTHENTICATION_PROVIDERS_0_NAME` |

This is the recommended way to inject secrets and environment-specific values in containerized deployments.
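The conversion rule in the table above can be sketched as a small shell helper (illustrative only; not part of Fire Arrow Server):

```shell
# Map a property path to its environment-variable form:
# drop ']', turn '[' into '_', turn '.' and '-' into '_', then uppercase.
to_env_var() {
  printf '%s\n' "$1" \
    | sed -e 's/\]//g' -e 's/\[/_/g' -e 's/[.-]/_/g' \
    | tr '[:lower:]' '[:upper:]'
}

to_env_var "fire-arrow.authentication.providers[0].name"
# FIRE_ARROW_AUTHENTICATION_PROVIDERS_0_NAME
```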
## Configuration Overlay

For per-environment configuration without modifying the base application.yaml, use Spring Boot's configuration overlay:

```shell
java -jar fire-arrow-server.jar \
  --spring.config.additional-location=file:/etc/fire-arrow/overrides.yaml
```

Properties in the overlay file take precedence over the base configuration. This lets you maintain a single base configuration and apply environment-specific overrides (e.g., different database URLs, license keys, feature flags).
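A hypothetical overlay file for a staging environment would contain only the values that differ from the base configuration (the property names below are illustrative, inferred from the environment variables used earlier):

```yaml
# /etc/fire-arrow/overrides.yaml
spring:
  datasource:
    url: "jdbc:postgresql://staging-db:5432/hapi"
fire-arrow:
  license:
    runtime-environment: "staging"
```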
## Health Endpoints

Fire Arrow Server exposes Spring Boot Actuator health endpoints.

### Liveness Probe

```shell
curl http://localhost:8080/actuator/health/liveness
```

Returns 200 OK as long as the JVM is running. Use this for container orchestrator liveness checks.
### Readiness Probe

```shell
curl http://localhost:8080/actuator/health/readiness
```

Returns 200 OK when the server is ready to accept traffic. This includes checks for:

- Database connectivity
- License validity
- FHIR server initialization

Use this for load balancer health checks and Kubernetes readiness probes.
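Outside Kubernetes, the readiness endpoint is also handy in deploy scripts to block until the server accepts traffic. A sketch, assuming the server listens on localhost:8080:

```shell
# Poll readiness for up to 5 minutes before giving up.
for i in $(seq 1 60); do
  if curl -sf http://localhost:8080/actuator/health/readiness >/dev/null; then
    echo "Fire Arrow Server is ready"
    exit 0
  fi
  sleep 5
done
echo "timed out waiting for readiness" >&2
exit 1
```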
### Full Health Details

```shell
curl http://localhost:8080/actuator/health
```

Returns detailed status of all health indicators. In a Kubernetes deployment:

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
```
## Monitoring

### OpenTelemetry

Fire Arrow Server supports distributed tracing and metrics via OpenTelemetry. Attach the OpenTelemetry Java agent to export telemetry data to your observability platform:

```shell
java -javaagent:/path/to/opentelemetry-javaagent.jar \
  -Dotel.service.name=fire-arrow-server \
  -Dotel.exporter.otlp.endpoint=http://otel-collector:4317 \
  -jar fire-arrow-server.jar
```

In Docker, mount the agent jar into the container (unless your image already bundles it) and point `JAVA_TOOL_OPTIONS` at it:

```shell
docker run -d \
  -v /path/to/opentelemetry-javaagent.jar:/opt/otel/opentelemetry-javaagent.jar \
  -e JAVA_TOOL_OPTIONS="-javaagent:/opt/otel/opentelemetry-javaagent.jar" \
  -e OTEL_SERVICE_NAME="fire-arrow-server" \
  -e OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317" \
  fire-arrow-server:latest
```
### Prometheus Metrics

Prometheus metrics are exposed at the Actuator Prometheus endpoint:

```shell
curl http://localhost:8080/actuator/prometheus
```

This includes JVM metrics, HTTP request metrics, database connection pool metrics, and FHIR-specific metrics. Configure your Prometheus instance to scrape this endpoint.
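A minimal scrape job for this endpoint might look like the following (the job name and target address are assumptions for your environment):

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: "fire-arrow-server"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["fire-arrow:8080"]
```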
## Multi-Node Deployment

When running multiple Fire Arrow Server instances behind a load balancer, consider the following.

### Session Affinity

Fire Arrow Server's GraphQL cursors and some internal state benefit from session affinity (sticky sessions). Configure your load balancer to route requests from the same client to the same server instance where possible.

If session affinity isn't available, the server still functions correctly -- stateless cursors are used for pagination, and the database serves as the source of truth.
### Distributed Locking

Some operations (CarePlan materialization, subscription processing) require coordination between nodes. Fire Arrow Server supports distributed locking via JDBC:

```yaml
fire-arrow:
  mutex:
    type: jdbc
```

This uses the shared PostgreSQL database for lock coordination, ensuring that scheduled tasks run on exactly one node.

For single-node deployments, use the default local locking:

```yaml
fire-arrow:
  mutex:
    type: local
```
## Production Best Practices

- Use environment variables for secrets -- never commit passwords, connection strings, or license content to version control
- Enable readiness probes -- ensure your load balancer only routes traffic to healthy instances
- Set up monitoring -- use OpenTelemetry or Prometheus to track request latency, error rates, and resource utilization
- Configure connection pooling -- tune HikariCP's `maximum-pool-size` based on your expected concurrent request volume
- Use JDBC mutex in multi-node deployments -- prevents duplicate CarePlan materialization and subscription processing
- Plan for license renewal -- monitor the license health check and renew before expiration
- Back up PostgreSQL -- Fire Arrow Server stores all FHIR data in PostgreSQL; ensure you have regular backups and a tested recovery procedure
- Use TLS -- terminate TLS at your load balancer or reverse proxy; Fire Arrow Server itself serves HTTP on port 8080
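Since the server itself speaks plain HTTP, TLS termination happens in front of it. A minimal nginx sketch (hostnames and certificate paths are placeholders for your environment):

```nginx
server {
    listen 443 ssl;
    server_name fhir.example.org;

    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `X-Forwarded-*` headers let the server reconstruct the original request scheme and client address behind the proxy.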