The full Docker reference for self-hosting GlycemicGPT, including how to install Docker.
This is the full reference for running GlycemicGPT with Docker. If you just want the fastest path, see Get Started -- it walks you through the same content as a numbered checklist.
Docker is software that lets you run pre-packaged services on your computer or server without manually installing each one. Instead of installing PostgreSQL, Redis, Python, Node.js, and configuring them all to talk to each other, you run one command (docker compose up) and Docker takes care of the rest.
GlycemicGPT uses Docker because it makes the platform easy to install (one command), easy to update (one command), and identical across macOS, Linux, and Windows. You don't need to know how Docker works internally to use it -- you just need it installed.
GlycemicGPT runs on Windows through WSL2 (Windows Subsystem for Linux). If you've never set up WSL2 before, follow Microsoft's WSL install guide first.
Whichever setup you use, GlycemicGPT runs five services:
web -- The dashboard you visit in your browser. Served on port 3000 locally, or proxied through HTTPS on an always-on deployment.
api -- The backend that handles your data, settings, and account. Served on port 8000 locally, or proxied internally on an always-on deployment.
sidecar -- The AI bridge. When you chat with the AI, your message goes here, then to your AI provider, then back to you. Internal-only.
db -- A PostgreSQL database. This is where your data is stored. Internal-only.
redis -- A short-term memory store the platform uses to keep your sign-in session active and to deliver real-time dashboard updates quickly. Internal-only.
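As a sketch of how these five services fit together in Docker Compose (the image names below are placeholders, not the real ones -- the docker-compose.yml shipped with GlycemicGPT is authoritative):

```yaml
# Illustrative sketch only -- image names and settings are hypothetical;
# the shipped docker-compose.yml is the source of truth.
services:
  web:
    image: ghcr.io/example/glycemicgpt-web      # hypothetical image name
    ports:
      - "3000:3000"       # exposed locally; proxied on always-on deployments
    depends_on: [api]
  api:
    image: ghcr.io/example/glycemicgpt-api      # hypothetical image name
    ports:
      - "8000:8000"
    depends_on: [db, redis]
  sidecar:
    image: ghcr.io/example/glycemicgpt-sidecar  # internal-only: no ports published
  db:
    image: postgres:16    # internal-only: reachable by service name on the Docker network
  redis:
    image: redis:7        # internal-only: sessions and real-time updates
```

Note that db, redis, and sidecar publish no ports at all: other services reach them by service name (db, redis) inside the Compose network, and nothing outside the host can connect to them directly.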
Most of GlycemicGPT's behavior is controlled by environment variables in the .env file. The defaults work for trying it on a local computer; you'll change a few when deploying anywhere that's reachable from outside your machine.
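For example, a public deployment typically adjusts a handful of values. CORS_ORIGINS is the variable this document refers to elsewhere; the other names below are illustrative and may differ in the shipped .env.example:

```shell
# Illustrative .env values -- check the shipped .env.example for the exact
# variable names your version uses.
CORS_ORIGINS=https://glycemicgpt.yourdomain.com   # must include your full public URL
# POSTGRES_PASSWORD=change-me-to-something-strong # hypothetical name
# SECRET_KEY=a-long-random-string                 # hypothetical name
```

Restart the affected services after editing .env so the new values are picked up.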
When the API container starts, it automatically runs any pending database schema migrations (we use Alembic). For most upgrades you don't need to do anything beyond the two commands above. A few things to be aware of:
Take a backup before major upgrades. See Backups below. If a migration fails partway through, the easiest recovery is restoring the dump.
Migrations that fail will keep the API container in a restart loop -- you'll see the same migration error repeating in docker compose logs api. Read the error, fix it (usually a manual SQL fix), and the container will succeed on the next restart.
Released breaking-change migrations are called out in the release notes on the GitHub Releases page. For non-breaking releases the upgrade is push-button; for breaking ones, read the notes first.
If the schema change between versions is too disruptive, the safest path is: take a pg_dump, docker compose down -v (which drops the volume), docker compose up -d on the new version (which creates a fresh schema), then restore your dump selectively. This is rarely needed -- documenting it for completeness.
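The steps above can be sketched as follows (filenames are illustrative; note that down -v permanently deletes the database volume, so verify the dump before running it):

```shell
# 1. Dump the current database (adjust user/db names to match your setup)
docker compose exec -T db pg_dump -U glycemicgpt glycemicgpt > pre-upgrade.sql

# 2. Stop everything and drop the volumes -- this DELETES the database
docker compose down -v

# 3. Start the new version; it creates a fresh, empty schema
docker compose up -d

# 4. Restore selectively -- e.g. trim pre-upgrade.sql down to the tables
#    you want to keep, then pipe it back in:
cat pre-upgrade.sql | docker compose exec -T db psql -U glycemicgpt glycemicgpt
```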
GlycemicGPT does not ship an automated backup service in the default Docker setup. (The Kubernetes deployment does -- a daily pg_dump CronJob to a PVC; see Install with Kubernetes.) For Docker users, you take backups manually or via a host-side cron job.
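A one-shot manual backup is a pg_dump run through the db container; for example (the backups/ path and filename pattern are illustrative):

```shell
mkdir -p backups
docker compose exec -T db pg_dump -U glycemicgpt glycemicgpt \
  > backups/glycemicgpt-backup-$(date +%Y%m%d).sql
```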
That gives you a complete SQL dump of every table -- glucose, pump events, AI chat history, accounts, settings. Move the file off the host (S3, another machine, an external drive) so a host failure doesn't lose your only backup.
```shell
# Stop the app services so nothing writes while we restore
docker compose stop api web sidecar
# Pipe the dump back in
cat glycemicgpt-backup-20260429.sql | docker compose exec -T db psql -U glycemicgpt glycemicgpt
# Restart
docker compose start api web sidecar
```
Adjust the path. Make sure backups/ exists. Test the restore at least once before relying on it -- backups that have never been restored are not real backups.
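A host-side cron job for daily backups might look like this (the install path is illustrative; note that % must be escaped as \% inside a crontab entry):

```shell
# crontab entry: run daily at 03:00, writing a dated dump into backups/
0 3 * * * cd /opt/glycemicgpt && docker compose exec -T db pg_dump -U glycemicgpt glycemicgpt > backups/glycemicgpt-backup-$(date +\%Y\%m\%d).sql
```

Pair this with a second job (or a sync tool) that copies the dumps off the host, per the advice above.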
This is a backup -- a SQL dump suitable for restoring to another GlycemicGPT instance. If you want exports in CSV or other portable formats for use outside GlycemicGPT, see Exporting your data.
Both give you a public URL the mobile app can reach from anywhere. The Cloudflare Tunnel path works equally well on a computer at home and on a cloud VPS, and is often the simpler and more secure option. The Caddy + Let's Encrypt path is what you want if you specifically don't want Cloudflare in your data path -- you handle TLS yourself with Let's Encrypt directly.
Run GlycemicGPT and reach it publicly through a Cloudflare-managed tunnel. You do not need a public IP from your ISP, port forwarding on your router, or TLS certificates to renew. Your server makes one outbound connection to Cloudflare; all inbound traffic comes through that.
This works equally well for:
A computer at home (desktop, NAS, mini-PC, Raspberry Pi -- anything running 24/7)
A cloud VPS where you don't want to expose any inbound ports
The standard "VPS + reverse proxy + Let's Encrypt" pattern requires inbound ports 80 and 443 to be open to the entire internet. Even with a reverse proxy in front, you've put TLS termination, HTTP parsing, and your application surface directly on the public internet -- which means:
Every script kiddie scanning the internet can probe your server
Any 0-day in your reverse proxy or web stack is reachable from anywhere
Your VPS provider's firewall is the only thing between you and the world's traffic
With Cloudflare Tunnel, your server has zero inbound ports open. Cloudflare is in front of you doing TLS termination, DDoS protection, and (with Cloudflare Access if you set it up) authentication at the edge -- requests only reach your server through the tunnel after Cloudflare has already decided to forward them.
The tradeoff: Cloudflare is in your data path. They see encrypted HTTPS traffic. Per their terms they don't inspect Tunnel traffic for normal use, but if Cloudflare-as-a-third-party is in your threat model, the VPS + Caddy path keeps Cloudflare out of the picture.
Subdomain: glycemicgpt (or whatever you want -- this becomes part of your URL)
Domain: Your domain on Cloudflare
Type: HTTP
URL: web:3000
web:3000 is the GlycemicGPT web service inside the Docker network. The cloudflared container runs in the same Docker Compose stack and can reach it by service name.
Click Save tunnel.
If you want both the dashboard AND direct API access (e.g., for the mobile app to skip the web proxy), add a second public hostname pointing at api:8000 -- e.g., subdomain api for the API. For most users, one hostname pointing at web:3000 is enough; the web service proxies API requests internally.
Still in the deploy/examples/cloudflare-tunnel/ directory:
docker compose up -d
This pulls the prebuilt GlycemicGPT images and starts all five GlycemicGPT services plus the cloudflared connector. The tunnel registers itself with Cloudflare automatically once it starts.
You should see all six services healthy. Watch the cloudflared logs to confirm the tunnel is connected:
docker compose logs -f cloudflared
Look for a line like Connection registered. When you see that, your tunnel is live. Press Ctrl+C to stop watching the logs (the services keep running).
Visit https://glycemicgpt.yourdomain.com (the public hostname from step 4). You should see the GlycemicGPT login page over HTTPS, served through Cloudflare.
The first request might take a couple seconds while Cloudflare establishes the connection. Subsequent requests are fast.
Nothing. No inbound ports are open on your server. Your server makes a single outbound HTTPS connection to Cloudflare; inbound traffic for your domain comes through that connection.
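You can confirm this from the server itself: with the tunnel setup, no GlycemicGPT service should be listening on a public interface (only the internal Docker network):

```shell
# Show listening TCP sockets; ports 3000/8000 should not appear bound
# to 0.0.0.0 or your public IP in the tunnel setup
sudo ss -tlnp
```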
Cloudflare can't reach your tunnel. Your server may be offline, or the cloudflared container isn't running.
Domain shows error 502 / 521 ("Web server is down"):
Tunnel is connected but the web service isn't responding. Check docker compose ps -- if web shows as unhealthy, look at its logs: docker compose logs web
Mobile app can't connect:
Verify CORS_ORIGINS in .env includes your full Cloudflare URL with https://
Restart the API service after changing CORS: docker compose restart api
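The two checks above, combined:

```shell
# Confirm the full public URL (including https://) appears in CORS_ORIGINS
grep CORS_ORIGINS .env
# Apply any change you made
docker compose restart api
```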
The path for users who don't run hardware at home (or prefer not to). You rent a small cloud server, point a domain at it, and Caddy provisions HTTPS automatically via Let's Encrypt.
This pulls the prebuilt GlycemicGPT images from GitHub Container Registry, starts all five services, and triggers Caddy to request a Let's Encrypt certificate.
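The Caddy side of this boils down to a reverse-proxy rule for the web service -- roughly like the sketch below (the domain is a placeholder; the example Caddyfile shipped with GlycemicGPT is authoritative):

```shell
# Illustrative Caddyfile -- Caddy obtains and renews the Let's Encrypt
# certificate for this domain automatically
glycemicgpt.yourdomain.com {
    reverse_proxy web:3000
}
```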
Docker Compose is the right choice for most users. Use Kubernetes only if you already run a homelab or production Kubernetes cluster and prefer to deploy GlycemicGPT alongside your other workloads. See Install with Kubernetes.
A one-click managed deploy (Railway, Fly.io) is on the roadmap -- see ROADMAP.md §Phase 4.