Starting Latitude with Docker Compose

Latitude can be easily deployed on a single machine using Docker Compose, which sets up all required services: the web interface, API gateway, workers, websockets, database, and Redis.

Prerequisites

  • Docker and Docker Compose installed on your system
  • A copy of the .env configuration file

Configuration

  1. First, create your environment configuration by copying the example file:
cp .env.example .env
  2. For proper functioning of the project, you need to create a Docker network named web. This network is used for communication between the various services, including Traefik.

You can create the network using the following command:

docker network create web

Make sure this network is created before running the containers with docker compose.
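
To confirm the network exists before bringing the stack up, you can list your Docker networks:

docker network ls --filter name=web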

  3. Configure your .env file with your production settings. The following key configurations are available:
  • Traefik Settings:

    • TRAEFIK_ACME_EMAIL: Email address used for Let’s Encrypt ACME registration. Required for issuing and renewing SSL certificates. It is also used to receive expiration and renewal notifications.
    • TRAEFIK_ADMIN_PASS: Credentials for the Traefik dashboard. Passwords must be hashed using MD5, SHA1, or BCrypt. Read more: https://doc.traefik.io/traefik/middlewares/http/basicauth/ Example command to generate credentials for user admin:
      echo $(htpasswd -nb admin password) | sed -e s/\\$/\\$\\$/g
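
      The sed call doubles each $ so Docker Compose does not interpolate the hash as a variable. The output goes into .env; the hash below is illustrative only, and the exact variable format may vary, so check .env.example:

      TRAEFIK_ADMIN_PASS=admin:$$apr1$$xyz12345$$abcdefghijklmnopqrstu/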
      
  • Database Settings:

    • POSTGRES_USER and POSTGRES_PASSWORD: Database credentials
    • DATABASE_URL: PostgreSQL connection string
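
      For example, a minimal sketch (the host db and the credentials are placeholders; match them to your docker-compose.yml):

      POSTGRES_USER=latitude
      POSTGRES_PASSWORD=change-me
      DATABASE_URL=postgres://latitude:change-me@db:5432/latitude
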
  • Redis Settings:

    • QUEUE_PORT and QUEUE_HOST: Redis queue configuration
    • CACHE_PORT and CACHE_HOST: Redis cache configuration
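
      For example (redis is a common service name here; check your docker-compose.yml for the actual one):

      QUEUE_HOST=redis
      QUEUE_PORT=6379
      CACHE_HOST=redis
      CACHE_PORT=6379
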
  • Network Settings:

    • APP_DOMAIN: Your domain (e.g., latitude.so)
    • APP_URL: Full URL to your application
    • GATEWAY_HOSTNAME: API gateway hostname
    • GATEWAY_SSL: Enable/disable SSL
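
      For example (values are illustrative):

      APP_DOMAIN=latitude.so
      APP_URL=https://app.latitude.so
      GATEWAY_HOSTNAME=gateway.latitude.so
      GATEWAY_SSL=true
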
  • Email Configuration:

    • MAILGUN_EMAIL_DOMAIN: Email domain for sending emails
    • FROM_MAILER_EMAIL: Sender email address
    • MAILGUN_MAILER_API_KEY: Mailgun API key (optional)
    • DISABLE_EMAIL_AUTHENTICATION: Disable email authentication (optional, default: false)
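
      For example (values are illustrative):

      MAILGUN_EMAIL_DOMAIN=mail.example.com
      FROM_MAILER_EMAIL=hello@example.com
      MAILGUN_MAILER_API_KEY=your-mailgun-api-key
      DISABLE_EMAIL_AUTHENTICATION=false
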
  • Storage Configuration:

    • DRIVE_DISK: Choose between local or s3 for file storage

      1. local file storage configuration:

      • Files are stored locally on the host machine using Docker volumes.

      • Default variables used:

        # Paths for local storage (optional)
        FILE_STORAGE_ROOT_PATH="/app/storage/files" # e.g /app/storage/files
        FILE_PUBLIC_PATH="files" # e.g files
        FILES_STORAGE_PATH="/app/storage/files" # e.g /app/storage/files
        PUBLIC_FILES_STORAGE_PATH="/app/apps/web/public/files" # e.g /app/apps/web/public/files (IMPORTANT: has to be inside Next.js's public folder)
        
      2. s3 AWS S3 storage configuration:

      • With environment variables (for convenience/legacy): You explicitly provide AWS credentials (AWS_ACCESS_KEY and AWS_ACCESS_SECRET) via the .env file. Required variables:

        AWS_ACCESS_KEY=your-access-key-here
        AWS_ACCESS_SECRET=your-secret-here
        AWS_REGION=us-east-1
        S3_BUCKET=app-files-bucket
        PUBLIC_S3_BUCKET=public-files-bucket
        
      • AWS S3 with IAM Roles (recommended): No explicit AWS keys are needed. Use IAM Roles attached to your AWS services (ECS, EC2, Lambda), and ensure the AWS resource has a proper IAM Role with S3 access (GetObject, PutObject, DeleteObject). Only these environment variables are required (no keys are stored explicitly):

        AWS_REGION=us-east-1
        S3_BUCKET=app-files-bucket
        PUBLIC_S3_BUCKET=public-files-bucket
        

        How to configure IAM Role (example):

        1. Go to AWS IAM → Create Role.
        2. Select trusted entity type (e.g., AWS Service).
        3. Attach policy that allows access to required S3 buckets.
        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
              ],
              "Resource": [
                "arn:aws:s3:::your-bucket-name-here",
                "arn:aws:s3:::your-bucket-name-here/*"
              ]
            }
          ]
        }
        
        4. Attach this IAM role to your AWS infrastructure (EC2 instance / ECS task definition / Lambda function).

        The AWS SDK automatically picks up credentials from the attached IAM role.
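
        As a sketch, attaching the role to a running EC2 instance with the AWS CLI could look like this (the profile name, role name, and instance ID are placeholders):

        # Wrap the role in an instance profile, then associate it with the instance
        aws iam create-instance-profile --instance-profile-name latitude-s3-profile
        aws iam add-role-to-instance-profile --instance-profile-name latitude-s3-profile --role-name latitude-s3-role
        aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=latitude-s3-profile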

  • Optional Features:

    • Sentry integration for error tracking
    • PostHog for analytics
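
    If you enable these, their keys also go in .env. The variable names below are hypothetical placeholders; check .env.example for the exact keys Latitude expects:

      SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0  # hypothetical key name
      POSTHOG_API_KEY=phc_your_project_key                       # hypothetical key name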

Starting the Services

  1. Start all services using Docker Compose:
docker compose up -d

This will start the following services from public Docker images stored in our GitHub Container Registry:

  • API Gateway (accessible via gateway.latitude.localhost)
  • Background workers
  • Migrations daemon that runs on startup and automatically applies database migrations
  • PostgreSQL database on port 5432
  • Redis on port 6379
  • Traefik (reverse proxy) on port 80
  • Web application (accessible via app.latitude.localhost)
  • WebSocket server (accessible via ws.latitude.localhost)

Service URLs

Once running, you can access:

  • Main application: http://app.latitude.localhost
  • API Gateway: http://gateway.latitude.localhost
  • WebSocket server: http://ws.latitude.localhost
  • Traefik dashboard: http://localhost:8090
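
A quick smoke test once the stack is up (most systems resolve *.localhost names to 127.0.0.1 automatically):

curl -I http://app.latitude.localhost
curl -I http://gateway.latitude.localhost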

Monitoring

You can monitor the services using standard Docker commands:

# View service logs
docker compose logs -f [service_name]

# Check service status
docker compose ps

# View resource usage
docker compose top

Important Notes

  1. The services use Traefik as a reverse proxy, which automatically handles routing and service discovery.
  2. The database data is persisted using a Docker volume mounted at ./docker/pgdata.
  3. If you’re using local file storage, note that it requires additional configuration for multi-container setups, and S3 is recommended for production environments.
  4. Make sure docker/init-db.sh has execution permissions; otherwise the database container will not start properly (see the commands after this list).
  5. For a more robust production environment, make sure to:
    • Set strong passwords in your .env file
    • Use appropriate storage configuration (S3 recommended)
    • Set up proper monitoring and logging
    • Use a container orchestrator such as Kubernetes, or a managed platform such as GCP or AWS ECS
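
For note 4, the permission fix is a one-liner:

chmod +x docker/init-db.sh

For note 2, a minimal backup sketch (the service, user, and database names are assumptions; check your docker-compose.yml and .env):

docker compose exec db pg_dump -U latitude latitude > backup.sql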

Running in localhost

You might want to run the services in localhost for development purposes. To do so, you can use the following command:

docker compose -f docker-compose.local.yml up

This will start the same services as in production mode but with SSL/HTTPS disabled, which allows you to use local TLDs such as localhost. Remember to configure your .env file accordingly; see the sketch below.
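
For example, a minimal .env sketch for local development (values are illustrative):

APP_DOMAIN=latitude.localhost
APP_URL=http://app.latitude.localhost
GATEWAY_HOSTNAME=gateway.latitude.localhost
GATEWAY_SSL=false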

Building Your Own Images

We provide a custom Docker Compose profile, local, for building your own images locally.

To build and run your local images, run the following command:

docker compose --profile local up --build