# Production

Learn how to self-host Latitude in production mode.
## Starting Latitude with Docker Compose
Latitude can be deployed on a single machine using Docker Compose, which sets up all required services: the web interface, API gateway, background workers, websockets, PostgreSQL, and Redis.
## Prerequisites
- Docker and Docker Compose installed on your system
- A copy of the `.env` configuration file
## Configuration
- First, create your environment configuration by copying the example file:
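Assuming the repository ships an example environment file (the exact filename may vary; `.env.example` is the common convention), the copy looks like:

```shell
cp .env.example .env
```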
- For the project to function properly, you need to create a Docker network named `web`. This network is used for communication between the various services, including Traefik. You can create the network using the following command:
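The network name `web` matches the one the compose services reference:

```shell
docker network create web
```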
Make sure this network exists before running the containers with `docker compose`.
- Configure your `.env` file with your production settings. The following key configurations are available:
- **Traefik Settings:**
  - `TRAEFIK_ACME_EMAIL`: Email address used for Let’s Encrypt ACME registration. Required for issuing and renewing SSL certificates; it is also used to receive expiration and renewal notifications.
  - `TRAEFIK_ADMIN_PASS`: Password for the Traefik dashboard. Passwords must be hashed using MD5, SHA1, or BCrypt. Read more: https://doc.traefik.io/traefik/middlewares/http/basicauth/ Example command to generate a password for the user `admin`:
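A sketch of generating a compatible hash, assuming `openssl` is available (Traefik’s docs also show `htpasswd` from `apache2-utils`); replace `changeme` with your real password:

```shell
# Produce an apr1 (MD5-based) hash accepted by Traefik's basic-auth middleware
openssl passwd -apr1 "changeme"
```

Note that if the resulting hash is placed directly in a docker-compose file, any `$` characters must be escaped as `$$`.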
- **Database Settings:**
  - `POSTGRES_USER` and `POSTGRES_PASSWORD`: Database credentials
  - `DATABASE_URL`: PostgreSQL connection string
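For illustration, a minimal database block in `.env` might look like this (the `db` hostname and database name are placeholders; match them to your compose service):

```
POSTGRES_USER=latitude
POSTGRES_PASSWORD=change-me
DATABASE_URL=postgres://latitude:change-me@db:5432/latitude
```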
- **Redis Settings:**
  - `QUEUE_PORT` and `QUEUE_HOST`: Redis queue configuration
  - `CACHE_PORT` and `CACHE_HOST`: Redis cache configuration
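A sketch of the Redis block, assuming a single Redis service (named `redis` here) backs both the queue and the cache:

```
QUEUE_HOST=redis
QUEUE_PORT=6379
CACHE_HOST=redis
CACHE_PORT=6379
```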
- **Network Settings:**
  - `APP_DOMAIN`: Your domain (e.g., `latitude.so`)
  - `APP_URL`: Full URL to your application
  - `GATEWAY_HOSTNAME`: API gateway hostname
  - `GATEWAY_SSL`: Enable/disable SSL
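For example, with a custom domain (all values illustrative):

```
APP_DOMAIN=latitude.example.com
APP_URL=https://app.latitude.example.com
GATEWAY_HOSTNAME=gateway.latitude.example.com
GATEWAY_SSL=true
```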
- **Email Configuration:**
  - `MAILGUN_EMAIL_DOMAIN`: Email domain for sending emails
  - `FROM_MAILER_EMAIL`: Sender email address
  - `MAILGUN_MAILER_API_KEY`: Mailgun API key (optional)
  - `DISABLE_EMAIL_AUTHENTICATION`: Disable email authentication (optional, default: `false`)
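An illustrative email block (domain, address, and key are placeholders):

```
MAILGUN_EMAIL_DOMAIN=mg.example.com
FROM_MAILER_EMAIL=hello@example.com
MAILGUN_MAILER_API_KEY=key-xxxxxxxxxxxxxxxx
DISABLE_EMAIL_AUTHENTICATION=false
```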
- **Storage Configuration:**
  - `DRIVE_DISK`: Choose between `local` or `s3` for file storage.
  - `local` file storage configuration:
    - Files are stored locally on the host machine using Docker volumes.
    - Default variables used:
  - `s3` AWS S3 storage configuration:
    - With environment variables (for convenience/legacy): you explicitly provide AWS credentials (`AWS_ACCESS_KEY` and `AWS_ACCESS_SECRET`) via the `.env` file. Required variables:
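A hedged sketch of the credential-based S3 block; `AWS_ACCESS_KEY` and `AWS_ACCESS_SECRET` are the variables named above, while the bucket and region variable names are assumptions to verify against the example `.env`:

```
DRIVE_DISK=s3
AWS_ACCESS_KEY=AKIA...
AWS_ACCESS_SECRET=...
# The variable names below are illustrative; confirm them in the example .env
S3_BUCKET_NAME=my-latitude-bucket
AWS_REGION=eu-west-1
```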
  - AWS S3 with IAM roles (recommended): no explicit AWS keys needed. Use IAM roles attached to your AWS services (ECS, EC2, Lambda), and ensure the AWS resource has a proper IAM role with S3 access (`GetObject`, `PutObject`, `DeleteObject`). Only the storage-related environment variables are required; no keys are stored explicitly.

    How to configure an IAM role (example):

    1. Go to AWS IAM → Create Role.
    2. Select the trusted entity type (e.g., AWS Service).
    3. Attach a policy that allows access to the required S3 buckets.
    4. Attach this IAM role to your AWS infrastructure (EC2 instance / ECS task definition / Lambda function).

    The AWS SDK automatically handles credentials from the attached IAM role.
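The policy in step 3 can be sketched as follows; the bucket name is a placeholder, and the actions mirror those listed above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-latitude-bucket/*"
    }
  ]
}
```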
- **Optional Features:**
  - Sentry integration for error tracking
  - PostHog for analytics
## Starting the Services
- Start all services using Docker Compose:
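From the directory containing the compose file:

```shell
docker compose up -d
```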
This will start the following services from public Docker images stored in our GitHub Container Registry:

- API Gateway (accessible via `gateway.latitude.localhost`)
- Background workers
- A migrations daemon that runs on startup and automatically applies database migrations
- PostgreSQL database on port 5432
- Redis on port 6379
- Traefik (reverse proxy) on port 80
- Web application (accessible via `app.latitude.localhost`)
- WebSocket server (accessible via `ws.latitude.localhost`)
## Service URLs
Once running, you can access:
- Main application: http://app.latitude.localhost
- API Gateway: http://gateway.latitude.localhost
- WebSocket server: http://ws.latitude.localhost
- Traefik dashboard: http://localhost:8090
## Monitoring
You can monitor the services using standard Docker commands:
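For example (the `web` service name is an assumption; list the actual names with `docker compose ps`):

```shell
# Show service status
docker compose ps

# Follow logs for all services, or for a single one
docker compose logs -f
docker compose logs -f web
```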
## Important Notes
- The services use Traefik as a reverse proxy, which automatically handles routing and service discovery.
- The database data is persisted using a Docker volume mounted at `./docker/pgdata`.
- If you’re using local file storage, note that it requires additional configuration for multi-container setups; S3 is recommended for production environments.
- Make sure `docker/init-db.sh` has execution permissions, otherwise the database container will not start properly.
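Granting execution permissions is a one-liner from the repository root:

```shell
chmod +x docker/init-db.sh
```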
- For a more robust production environment, make sure to:
  - Set strong passwords in your `.env` file
  - Use appropriate storage configuration (S3 recommended)
  - Set up proper monitoring and logging
  - Use a container orchestrator such as Kubernetes, or a managed service such as AWS ECS or GCP
## Running in localhost
You might want to run the services in localhost for development purposes. To do so, you can use the following command:
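The exact invocation depends on how the repository’s compose files are laid out; one common pattern is a development override file, e.g. (filename assumed):

```shell
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d
```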
This will start the same services as in production mode but with SSL/HTTPS disabled, which allows you to use local TLDs such as `localhost`. Remember to configure your `.env` file accordingly.
## Building Your Own Images
We provide a custom Docker Compose profile for building your own images locally.
To build and run your local images, run the following command:
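Since the profile name isn’t shown here, the following assumes it is called `local`; check the `profiles:` keys in the repository’s `docker-compose.yml` for the actual name:

```shell
docker compose --profile local up --build -d
```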