OpenClaw Installation Quickstart 2026: First Agent in 15 Minutes
This is the fastest path from zero to a running OpenClaw agent. We will install the runtime, write a Skill, declare an Agent Manifest, run the agent locally, and verify it works with the Sandbox replay tool. Total time on a fresh laptop: about 15 minutes. By the end you will have a working AI agent that calls a tool, returns a typed result, and emits an audit trail you can replay.
We assume you have Python 3.11+, Docker Desktop, and an Anthropic or OpenAI API key. No prior OpenClaw experience required. If you get stuck, the troubleshooting section at the bottom covers the issues we see most often in setup.
Key Takeaways
- OpenClaw runs as a containerized runtime; you author Skills (Python) and Agents (YAML) and the runtime executes them.
- Install the CLI with `pip install openclaw`, plus `docker compose up` for the runtime services.
- A minimal agent has three files: a Skill (`skills/*.py`), a Manifest (`agents/*.yaml`), and an `.env` with credentials.
- Sandbox mode lets you replay any production trace locally with mocked tools; first-class debugging from day one.
- Agents communicate via the Message Bus; running locally uses an in-memory bus, production uses Redis or Kafka.
- The OpenClaw CLI is your daily driver: `init`, `run`, `deploy`, `replay`, `logs`.
- Self-host the runtime with Docker Compose for dev and Helm for production, or use OpenClaw Cloud for a managed option.
- Ship the first agent today; harden it for production over the next sprint with hooks, RLS, and audit configuration.
Prerequisites
Before starting:
- Python 3.11+: `python --version` should show 3.11 or higher.
- Docker Desktop (Mac/Windows) or Docker Engine + Compose (Linux), running.
- An LLM API key: Anthropic, OpenAI, or AWS Bedrock. We use Anthropic Claude Opus 4.7 in the examples.
- Git for the example repo.
- A code editor: VS Code, Cursor, or your preferred.
Disk space: about 2 GB for the runtime images and dependencies.
Step 1: Install the CLI
pip install openclaw
openclaw --version
# openclaw 1.4.2
The CLI is your daily driver: init scaffolds projects, run executes agents, deploy ships them, replay debugs them.
If you prefer pipx for isolated CLIs:
pipx install openclaw
Step 2: Initialize a Project
mkdir my-first-agent && cd my-first-agent
openclaw init
This creates:
my-first-agent/
├── .env.example
├── docker-compose.yml
├── pyproject.toml
├── README.md
├── agents/
│   └── example_agent.yaml
├── skills/
│   └── example_skill.py
└── tests/
    └── test_example.py
The scaffold is intentionally minimal. We will customize it next.
Step 3: Configure Credentials
Copy .env.example to .env and fill in:
# Required: at least one LLM provider
ANTHROPIC_API_KEY=sk-ant-api03-...
# OPENAI_API_KEY=sk-...
# Required: OpenClaw runtime
OPENCLAW_LOG_LEVEL=info
OPENCLAW_BUS_BACKEND=memory # use redis or kafka in production
OPENCLAW_AUDIT_BACKEND=local # use postgres or s3 in production
# Optional: tools your skills will use
WEATHER_API_KEY=your-key-here
The .env file is gitignored. Never commit credentials.
Step 4: Start the Runtime
docker compose up -d
This brings up the OpenClaw runtime container. On first run it pulls images (about 800 MB). Verify with:
openclaw status
Expected output:
OpenClaw Runtime: healthy
Bus backend: memory
Audit backend: local
Skills loaded: 1
Agents registered: 1
If the runtime does not start, see the troubleshooting section at the bottom.
Step 5: Write Your First Skill
A Skill is a Python function with a typed signature, registered with the @skill decorator. Replace skills/example_skill.py:
from openclaw import skill
import httpx

@skill(
    name="get_weather",
    description="Get the current weather for a city",
    version="1.0.0",
)
def get_weather(city: str) -> dict:
    """Return current weather for the given city."""
    response = httpx.get(
        "https://api.example-weather.com/v1/current",
        params={"city": city},
        timeout=10.0,
    )
    response.raise_for_status()
    data = response.json()
    return {
        "city": city,
        "temp_c": data["temp"],
        "condition": data["condition"],
    }
Three things to notice:
- Typed inputs and outputs: OpenClaw uses these for validation and for the LLM tool-calling schema.
- Description: this becomes the tool description the model sees. Be specific.
- Version: skills are versioned independently of agents.
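To make the first point concrete, here is a rough sketch of how a typed signature can be turned into an LLM tool-calling schema. This is illustrative only; the schema OpenClaw actually generates is internal to the runtime, and `tool_schema` is a hypothetical helper, not part of the OpenClaw API.

```python
import inspect
from typing import get_type_hints

def get_weather(city: str) -> dict:
    """Get the current weather for a city"""

def tool_schema(fn) -> dict:
    """Build a minimal JSON-Schema-style tool description
    from a function's type hints and docstring."""
    type_map = {str: "string", int: "integer", float: "number",
                bool: "boolean", dict: "object"}
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the return type is not part of the input schema
    properties = {
        name: {"type": type_map.get(hints.get(name), "string")}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "input_schema": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

schema = tool_schema(get_weather)
```

This is why the docstring and the type hints matter: they are not decoration, they are the interface the model sees.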
Step 6: Declare an Agent Manifest
Replace agents/example_agent.yaml:
name: weather-agent
version: 1.0.0
description: Answers user questions about weather.
model:
  provider: anthropic
  name: claude-opus-4-7
  temperature: 0.2
  max_tokens: 1024
goal: |
  Help the user answer questions about weather conditions
  for cities around the world. Use the get_weather skill
  to look up current data; do not hallucinate values.
skills:
  - get_weather
memory:
  working: 4kb
  episode: 7d
  long_term: false
permissions:
  - tool:weather_api
hooks:
  pre_run: log_request
  post_run: log_response
  on_error: log_error
Key fields:
- `name` and `version`: agent identity (used for deployment and audit).
- `model`: which LLM, with sane defaults.
- `goal`: the agent's purpose, included in the system prompt.
- `skills`: which Skills the agent can call.
- `memory`: tier configuration.
- `permissions`: explicit list of capabilities (least-privilege by default).
- `hooks`: lifecycle callbacks for logging, audit, and fallback.
Step 7: Run the Agent Locally
openclaw run weather-agent --input "What's the weather in Tokyo?"
Expected output:
[2026-05-04 14:23:01] [weather-agent] Starting run abc123
[2026-05-04 14:23:01] [orchestrator] Planning: I need to call get_weather for Tokyo
[2026-05-04 14:23:02] [skill:get_weather] city=Tokyo
[2026-05-04 14:23:02] [skill:get_weather] returned {'city': 'Tokyo', 'temp_c': 22, 'condition': 'sunny'}
[2026-05-04 14:23:03] [orchestrator] Composing response
The current weather in Tokyo is 22°C and sunny.
Run abc123 completed in 2.1s
Trace: openclaw replay --trace abc123
You have a working agent.
Step 8: Inspect the Audit Trail
Every run generates a trace ID with full step-by-step audit:
openclaw logs --trace abc123
trace_id: abc123
agent: weather-agent
version: 1.0.0
started_at: 2026-05-04T14:23:01Z
ended_at: 2026-05-04T14:23:03Z
duration_ms: 2104
input: {"text": "What's the weather in Tokyo?"}
steps:
  - step: orchestrator.plan
    duration_ms: 720
    tokens: {input: 245, output: 89}
  - step: skill.get_weather
    args: {city: "Tokyo"}
    result: {city: "Tokyo", temp_c: 22, condition: "sunny"}
    duration_ms: 380
  - step: orchestrator.compose
    duration_ms: 1004
    tokens: {input: 312, output: 56}
output: "The current weather in Tokyo is 22°C and sunny."
status: success
This audit log is cryptographically chained — useful for SOC 2 / ISO 27001 evidence.
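"Cryptographically chained" means each step record carries a hash of the record before it, so tampering with any step invalidates every later hash. Here is a simplified illustration of the idea; it is not the actual OpenClaw storage format.

```python
import hashlib
import json

def chain(steps):
    """Link each audit step to its predecessor with a SHA-256 hash."""
    prev = "0" * 64  # genesis value for the first record
    out = []
    for step in steps:
        record = dict(step, prev_hash=prev)
        # Hash the full record, including the link to the previous step.
        prev = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        out.append(dict(record, hash=prev))
    return out

trail = chain([
    {"step": "orchestrator.plan", "duration_ms": 720},
    {"step": "skill.get_weather", "args": {"city": "Tokyo"}},
])
# Editing any earlier step changes its hash, which breaks every
# prev_hash link after it, so tampering is detectable.
```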
Step 9: Replay in Sandbox Mode
The killer feature for debugging: replay any production trace locally with mocked tools. Useful when something failed in prod and you don't want to make real API calls.
openclaw replay --trace abc123 --sandbox --mock-tools all
OpenClaw replays the same prompts, returns the same tool results from the audit log, and shows where any non-determinism would have changed behavior. Modify your skill, replay, see if the bug is fixed without touching production.
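Conceptually, sandbox replay serves tool results out of the recorded trace instead of making live calls. A toy sketch of that record-and-replay idea follows; the real implementation lives inside the runtime, and `ReplayTools` is a hypothetical name for illustration.

```python
class ReplayTools:
    """Serve skill results recorded in an audit trace instead of
    calling live APIs: the core idea behind --mock-tools all."""

    def __init__(self, trace_steps):
        # Index recorded results by (skill name, frozen kwargs).
        self._recorded = {}
        for s in trace_steps:
            if s["step"].startswith("skill."):
                name = s["step"][len("skill."):]
                key = (name, tuple(sorted(s["args"].items())))
                self._recorded[key] = s["result"]

    def call(self, name, **kwargs):
        key = (name, tuple(sorted(kwargs.items())))
        if key not in self._recorded:
            raise KeyError(f"no recorded result for {name}({kwargs})")
        return self._recorded[key]

steps = [
    {"step": "orchestrator.plan"},
    {"step": "skill.get_weather", "args": {"city": "Tokyo"},
     "result": {"city": "Tokyo", "temp_c": 22, "condition": "sunny"}},
]
tools = ReplayTools(steps)
```

Because the replayed tool results are frozen, any change in behavior between runs must come from your code or the model, which is exactly what you want to isolate when debugging.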
Step 10: Add Tests
The scaffold includes a test stub. Replace tests/test_example.py:
from openclaw.testing import AgentHarness

def test_weather_agent_returns_temperature():
    harness = AgentHarness(agent="weather-agent")
    harness.mock_skill("get_weather", lambda city: {
        "city": city,
        "temp_c": 22,
        "condition": "sunny",
    })
    result = harness.run("What's the weather in Tokyo?")
    assert "22" in result.output
    assert "Tokyo" in result.output
    assert result.skill_calls[0].name == "get_weather"
Run:
pytest tests/
The harness lets you mock skills, force errors, and assert against the agent's plan and output, either without making any LLM calls (snapshot-based tests) or with a small, cheap model for end-to-end tests.
Step 11: Deploy
For local development the in-memory bus and local audit are fine. For production, configure Redis (bus) and Postgres (audit) in .env, then deploy.
Self-host with Docker Compose
openclaw build --tag my-registry.com/weather-agent:1.0.0
openclaw deploy --target docker-compose --replicas 2
Self-host on Kubernetes (Helm)
openclaw build --tag my-registry.com/weather-agent:1.0.0
helm upgrade --install weather-agent ./helm-chart \
  --set image.tag=1.0.0 \
  --set replicas=3 \
  --set bus.backend=redis \
  --set audit.backend=postgres
OpenClaw Cloud (managed)
openclaw login
openclaw deploy --target cloud --version 1.0.0
OpenClaw Cloud handles scaling, observability, and rotation. Free dev tier; production tiers are usage-based.
What to Build Next
You have a working agent. The natural next steps:
- Add more Skills — wrap your CRM, ERP, internal APIs. We have a Skills Catalog overview covering authoring patterns.
- Multi-agent flows — build a second agent that consumes messages your first one emits. The Message Bus does the routing.
- Production hardening — configure Redis bus, Postgres audit, SOC-grade hooks. See our multi-tenant deployment patterns.
- Cost optimization — switch parts of the workflow to cheaper models, add caching. See our token efficiency guide.
- Security hardening — RLS, data residency, secrets rotation. See our security and compliance guide.
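For the multi-agent item above: when you run locally, the Message Bus is just in-process publish/subscribe. Here is a toy sketch of that pattern; the real bus backends (Redis, Kafka) add persistence, ordering, and delivery guarantees that this deliberately omits.

```python
from collections import defaultdict

class InMemoryBus:
    """Minimal in-process publish/subscribe bus: agents subscribe to
    topics and receive every message published to those topics."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        # Deliver synchronously to every subscriber of this topic.
        for handler in self._subs[topic]:
            handler(message)

bus = InMemoryBus()
received = []
# A second agent would subscribe to messages the first one emits.
bus.subscribe("DocumentReady", received.append)
bus.publish("DocumentReady", {"trace_id": "abc123"})
```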
Troubleshooting
openclaw status shows runtime not running
Check that Docker is running and that ports 8000-8002 are free:
docker compose logs openclaw-runtime
Common issue: port conflict with another service. Edit docker-compose.yml to map to different host ports.
Skill not found
Skills must be in the skills/ directory and use the @skill decorator. Restart the runtime after adding a skill:
docker compose restart openclaw-runtime
In production, skills are loaded at deploy time, not at runtime.
LLM API error: invalid_api_key
Verify the key in .env and that the runtime container has it loaded:
docker compose exec openclaw-runtime env | grep ANTHROPIC
If the output is empty, the .env file isn't being read; check that `env_file: .env` is set in docker-compose.yml.
Agent loops forever
Set `max_steps: 10` in the manifest under `model:`. Without it, an agent can loop indefinitely on ambiguous goals. We default to 10-20 steps for production agents.
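The guard is conceptually just a bounded plan-act loop. A schematic sketch of the idea follows; `plan_step` is a stand-in for one planning iteration, not an OpenClaw API.

```python
def run_with_step_cap(plan_step, max_steps=10):
    """Run the plan/act loop at most max_steps times; return the
    first final answer, or fail loudly instead of spinning forever."""
    for _ in range(max_steps):
        result = plan_step()
        if result is not None:  # the agent produced a final answer
            return result
    raise RuntimeError(f"agent exceeded max_steps={max_steps}")
```

Failing loudly is the point: a capped run surfaces the ambiguous goal in your logs instead of silently burning tokens.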
Tests fail with httpx.ConnectError
Mock external HTTP calls in tests using harness.mock_skill or respx. Tests should not hit live APIs.
Frequently Asked Questions
Do I need to use Docker?
Docker is the recommended deployment for the runtime. For pure development, you can run the runtime directly with openclaw runtime serve and skip Docker. Production should always use containers.
Which LLM should I start with?
We recommend Anthropic Claude Opus 4.7 for production agents that need reasoning and Claude Sonnet 4.6 for cost-sensitive tasks. OpenAI GPT-4 and Bedrock options work fine. The skill code is identical regardless.
Can I use this with Next.js / a web app?
Yes. Expose the agent as an HTTP endpoint via OpenClaw's built-in REST adapter, or call the OpenClaw Python client from your backend. For chat UIs, see our Vercel AI SDK comparison for the recommended hybrid pattern.
How do I handle long-running agents?
OpenClaw supports background runs. Use openclaw run --async and poll for completion via the trace ID. For very long runs (hours), use the Message Bus pattern — agent emits a "DocumentReady" message when done; consumers wait on the message.
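For the polling side, a generic poll-with-backoff helper works well wrapped around a trace-status lookup. The status call itself is whatever your client exposes; `check` below is a stand-in for it, not an OpenClaw API.

```python
import time

def poll_until_done(check, timeout_s=3600, interval_s=2.0,
                    max_interval_s=60.0):
    """Call check() until it returns a truthy result, sleeping with
    capped exponential backoff between attempts."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval_s)
        # Back off to avoid hammering the status endpoint on long runs.
        interval_s = min(interval_s * 2, max_interval_s)
    raise TimeoutError(f"run did not complete within {timeout_s}s")
```

For runs measured in hours, prefer the Message Bus pattern over polling; a consumer waiting on a message costs nothing while the run is in flight.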
Where can I get help with my deployment?
ECOSIRE deploys OpenClaw for clients across SaaS, ERP, and regulated industries. We have reference architectures for self-hosted, hybrid, and managed deployments. Talk to our OpenClaw implementation team or browse OpenClaw products for ready-to-deploy agents and skills.
That is the fastest path from zero to a working OpenClaw agent. From here, the platform scales with you — multi-agent, multi-tenant, embedded in your SaaS, on-prem in regulated environments. The patterns are the same as what you just shipped, with more Skills, more Agents, and more hooks. Ship the first agent today; harden it for production over your next sprint.
Author
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.