Start here
Where should you deploy?
Everyone's situation is different. Maybe you're just exploring, maybe you need a production server running 24/7. Find the question below that sounds like you, and we'll point you to the right path.
Side by side
Compare at a glance
Here's the full picture in one table. Scan the column that matters most to you, whether that's cost, effort, or uptime.
| | Local Install | Docker | Cloud VPS | One-Click PaaS | Raspberry Pi |
|---|---|---|---|---|---|
| Monthly cost | $0/mo | $0/mo (local) or VPS cost | $4-12/mo depending on provider | Platform pricing (Railway, Northflank, Render, Fly.io) | $45-80 one-time |
| Setup time | Minimal | Low | Medium | Minimal | Medium |
| RAM needed | 1 GB+ | 2 GB+ | 2 GB recommended (1 GB minimum with swap) | Managed (platform-dependent) | 2 GB+ (4 GB recommended) |
| Public URL | No | No | Yes | Yes | No |
| Uptime | When your machine is on | Follows host uptime | 24/7 | 24/7 (paid tier) | 24/7 (as long as it has power) |
| Best for | Developers | Teams deploying to VPS | Operators who need 24/7 uptime | Non-technical users | Tinkerers |
All five paths
The details
Below is everything you need to know about each deployment method before you commit: the summary, who it's for, what you need, and a link to the full guide.
Local Install
Your machine, zero cost, 5 minutes
The fastest way to start. One curl command installs OpenClaw, the onboarding wizard configures your model provider and channels, and the Gateway runs as a background daemon. Config lives at ~/.openclaw/ and persists across restarts. No containers, no cloud accounts needed.
This is where we tell everyone to start. Even if you plan to run on a VPS eventually, do a local install first to learn the config and commands. You can always migrate later.
Docker
Isolated, reproducible, sandbox-ready
Run the entire Gateway in a container, or keep the Gateway on the host and use Docker only for sandboxed tool execution. The docker-setup.sh script handles image builds, volume mounts, and networking.
We run Docker on every production VPS we manage. Rollbacks are painless, the host stays clean, and you get sandboxed tool execution for free. If you are going to deploy to a server, Docker is the way.
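As a sketch of what the containerized setup looks like, here is a minimal Compose file. The image name, container paths, and volume layout are assumptions for illustration; the docker-setup.sh script is the supported path and handles these details for you.

```yaml
services:
  openclaw:
    image: openclaw/gateway:latest      # hypothetical image name
    ports:
      - "127.0.0.1:18789:18789"         # bind to localhost only, never 0.0.0.0
    volumes:
      - ~/.openclaw:/root/.openclaw     # persist config and state on the host
    restart: unless-stopped
```

Binding the published port to 127.0.0.1 keeps the Gateway off the public internet, matching the localhost-only guidance in the VPS section below.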
Cloud VPS
Always-on, SSH access, full control
The production path. Provision a VPS from Hetzner ($4/mo), DigitalOcean ($6/mo), Vultr ($6/mo), or any provider, install Docker, and deploy. Gateway binds to localhost only. Access remotely via SSH tunnel or Tailscale.
This is how we run every client deployment at The Operator Vault. Hetzner at $4/month is the best value. DigitalOcean and Vultr are solid alternatives if you prefer their UI.
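Since the Gateway binds to localhost only, you reach it over an SSH tunnel. A reusable way to set that up is an ~/.ssh/config entry like the sketch below; the host alias, address, and user are placeholders for your own VPS.

```
# ~/.ssh/config entry (HostName and User are placeholders)
Host openclaw-vps
    HostName 203.0.113.10
    User deploy
    LocalForward 18789 localhost:18789
```

Then `ssh -N openclaw-vps` forwards the port, and the Gateway is reachable at http://localhost:18789 on your laptop.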
One-Click PaaS
Zero CLI, browser-only setup
Click a deploy button, set a password, and configure through a browser-based setup wizard. No CLI knowledge required.
Of the options, Railway is the smoothest in our experience. Northflank is a close second. Render works, but its free tier has no persistent disk and spins down after 15 minutes of inactivity.
Raspberry Pi
Buy once, run forever, zero monthly cost
Install Node.js directly on ARM64 hardware, run the standard OpenClaw installer, and set up a systemd daemon. Use a USB SSD instead of the SD card for a significant performance boost.
Several community members run their agents this way and love it. Just don't try to run local LLMs on the Pi. Use cloud APIs instead. Break-even vs a $5/month VPS happens around month 10.
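The break-even figure is easy to sanity-check, assuming a $50 Pi kit (the low end of the $45-80 range) against a $5/month VPS:

```shell
# Months until a one-time Pi purchase beats a monthly VPS bill.
PI_COST=50       # one-time hardware cost, low end of the range
VPS_MONTHLY=5    # typical budget VPS price
BREAK_EVEN=$(( (PI_COST + VPS_MONTHLY - 1) / VPS_MONTHLY ))   # round up
echo "break-even at month ${BREAK_EVEN}"   # prints: break-even at month 10
```

At the $80 end of the hardware range, break-even slides out to around month 16.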
Real costs
What you will actually pay
Let's be honest about what this actually costs. OpenClaw itself is free, but you still need somewhere to run it and an AI model to power it. Here's what real monthly costs look like across every option.
API costs depend on your model choice and usage volume. Claude Opus and GPT-4 cost more per token than Sonnet or GPT-4o-mini. Running local models via Ollama eliminates API costs entirely but requires a GPU or high-RAM machine.
For most operators, the sweet spot is a $4-6/month VPS plus Anthropic API at around $10-15/month in real usage. That gives you 24/7 uptime with the best model quality. If budget is tight, start local with Ollama and upgrade when you outgrow it.
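Adding up the sweet-spot numbers above (a $4-6 VPS plus $10-15 of API usage) gives the total monthly range:

```shell
# Low and high ends of the "sweet spot" monthly budget.
VPS_LOW=4;  VPS_HIGH=6      # Hetzner-class VPS
API_LOW=10; API_HIGH=15     # typical Anthropic API usage
echo "total: \$$((VPS_LOW + API_LOW))-\$$((VPS_HIGH + API_HIGH))/month"
```

That works out to roughly $14-21/month for an always-on agent with frontier-model quality.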
Universal
True regardless of deployment
No matter which path you pick, these fundamentals stay the same. Good to know before you start.
Node.js 22+ is required everywhere
Whether you use Docker, a VPS, or bare metal, OpenClaw needs Node.js 22 or newer. The installer handles this for you.
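If you want to verify the requirement yourself, the check boils down to comparing the major version. A small shell sketch (the `node_ok` helper is ours, not part of OpenClaw):

```shell
# Check a Node.js version string against the required major version (22+).
node_ok() {
  ver=${1#v}          # strip the leading "v" from e.g. "v22.11.0"
  major=${ver%%.*}    # keep only the major version number
  [ "$major" -ge 22 ]
}

# In a real setup you would check the installed binary:
#   node_ok "$(node --version)" || echo "please upgrade Node.js"
node_ok "v22.11.0" && echo "v22.11.0: ok"
node_ok "v18.19.1" || echo "v18.19.1: too old"
```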
Config always lives at ~/.openclaw/
Agent config, credentials, session history, and workspace files all live here. This directory is your OpenClaw state: back it up, and you can restore on any machine.
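The backup-and-restore round trip looks like this. The sketch uses a throwaway directory in place of ~/.openclaw so you can run it safely; swap in the real path for an actual backup.

```shell
# Back up and restore an OpenClaw-style state directory with tar,
# demonstrated on a throwaway copy so nothing real is touched.
WORK=$(mktemp -d)
STATE="$WORK/openclaw"                       # stands in for ~/.openclaw
mkdir -p "$STATE"
echo 'model: claude' > "$STATE/config.yaml"  # pretend config file

# Back up: archive the directory relative to its parent.
tar -czf "$WORK/openclaw-backup.tar.gz" -C "$WORK" openclaw

# Restore: unpack on a "fresh machine" and verify the config survived.
RESTORE=$(mktemp -d)
tar -xzf "$WORK/openclaw-backup.tar.gz" -C "$RESTORE"
cat "$RESTORE/openclaw/config.yaml"          # prints: model: claude
```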
Gateway port is 18789 by default
All deployment methods bind the Gateway to this port. Change it with --port or OPENCLAW_GATEWAY_PORT. Never expose it to the public internet without auth.
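The override precedence is worth seeing once: the environment variable wins, otherwise the default applies. A quick illustration (in practice you would set the variable when launching the Gateway itself):

```shell
# Resolve the Gateway port: OPENCLAW_GATEWAY_PORT wins, else default 18789.
unset OPENCLAW_GATEWAY_PORT
PORT="${OPENCLAW_GATEWAY_PORT:-18789}"
echo "gateway port: $PORT"     # prints: gateway port: 18789

OPENCLAW_GATEWAY_PORT=9090
PORT="${OPENCLAW_GATEWAY_PORT:-18789}"
echo "gateway port: $PORT"     # prints: gateway port: 9090
```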
AI models run remotely, not on your hardware
Unless you opt into local models (Ollama, vLLM), all inference happens via API calls to Anthropic, OpenAI, or other providers. Your machine only runs the Gateway and tools.
You can add channels after deploy
WhatsApp, Telegram, Discord, Slack, and 20+ other channels can be configured at any time via the CLI or the Control UI dashboard. No need to decide upfront.
Back up your ~/.openclaw/ directory before any major update. We have never lost data, but it takes 10 seconds and buys you peace of mind. A simple tar -czf openclaw-backup.tar.gz ~/.openclaw/ does the trick.
Written by
Kevin Jeppesen
Founder, The Operator Vault
Kevin is an early OpenClaw adopter who has saved an estimated 400 to 500 hours through AI automation. He stress-tests new workflows daily, sharing what actually works through step-by-step guides and a security-conscious approach to operating AI with real tools.
Deployment FAQ
Common deployment questions
Ready to deploy?
Start with the Free Course.
Our workshop walks you through OpenClaw setup from scratch. Install, configure, and send your first command. One hour, lifetime access.
