The promise of AI agents that can actually do things has been floating around for years. But if you've ever tried building one yourself, you know the gap between "cool demo" and "actually useful tool" is... well, massive. OpenClaw bridges that gap. It's an open-source agent framework that lets you build agent skills—modular, reusable capabilities that your AI can invoke to get stuff done in the real world.
This OpenClaw tutorial walks you through building your first skill from scratch. No prior experience required—and I promise it's more straightforward than you might think.
What Is OpenClaw and Why It Matters
OpenClaw is an agent framework designed around a simple idea: skills should be modular, discoverable, and easy to share. Think of it like an app store for AI capabilities. Each skill is a self-contained package that teaches your agent how to perform a specific task—whether that is checking the weather, deploying a server, or analyzing a codebase.
Unlike monolithic agent systems where capabilities are baked into the core, OpenClaw treats skills as first-class citizens. This matters because:
- Composability: Mix and match skills like LEGO blocks
- Maintainability: Update one skill without accidentally breaking everything else
- Shareability: Publish skills to a registry so others can benefit from your work
- Security: Skills run in isolated contexts with explicit permissions—no rogue code running wild
If you've ever built a LangChain tool or a ChatGPT plugin, the concept will feel familiar. OpenClaw just makes the plumbing cleaner—and honestly, that's a relief.
Prerequisites and Setup
Before we build anything, make sure you have:
- Node.js 18+ installed
- OpenClaw CLI installed globally:
npm install -g openclaw
- A code editor (VS Code works great)
- Basic familiarity with TypeScript or JavaScript
That's it. OpenClaw handles the heavy lifting—no Docker containers, no cloud accounts, no API keys required to get started.
Verify your installation:
openclaw --version
# Should output something like: openclaw/2.4.1 linux-x64 node-v20.11.0
Creating Your First Skill
Let's build a simple skill that fetches the current weather for a given city. This is the "Hello World" of agent skills—useful enough to be real, simple enough to understand in one sitting.
Step 1: Scaffold the Skill
OpenClaw provides a CLI command to create a new skill template:
openclaw skill:create weather-check
This creates a new directory called weather-check with the following structure:
weather-check/
├── SKILL.md # Skill definition and documentation
├── src/
│ ├── index.ts # Main entry point
│ └── types.ts # TypeScript interfaces
├── tests/
│ └── index.test.ts # Test suite
└── package.json
Step 2: Define the Skill Interface
Open SKILL.md in your editor. This file is the heart of your skill—it tells OpenClaw what your skill does, what parameters it accepts, and how to invoke it.
Replace the default content with:
# Weather Check
Fetches current weather conditions for any city worldwide.
## Triggers
- "What's the weather in {city}?"
- "Is it raining in {city}?"
- "weather {city}"
## Parameters
| Name | Type | Required | Description |
|------|------|----------|-------------|
| city | string | Yes | City name (e.g., "Tokyo", "New York") |
| units | enum | No | "metric" or "imperial" (default: metric) |
## Returns
Object with:
- temperature: Current temp
- condition: Weather description
- humidity: Percentage
The SKILL.md format is declarative. You describe what your skill does, not how it does it. This separation is intentional—it allows OpenClaw to optimize invocation, handle parameter extraction from natural language, and generate help documentation automatically.
Step 3: Implement the Logic
Now for the actual code. Open src/index.ts:
import { SkillContext, SkillResult } from '@openclaw/core';
interface WeatherParams {
city: string;
units?: 'metric' | 'imperial';
}
interface WeatherData {
temperature: number;
condition: string;
humidity: number;
}
export async function execute(
params: WeatherParams,
context: SkillContext
): Promise<SkillResult<WeatherData>> {
const { city, units = 'metric' } = params;
// Using OpenWeatherMap API (free tier)
const apiKey = context.secrets.OPENWEATHER_API_KEY;
const unitParam = units === 'imperial' ? 'imperial' : 'metric';
const url = `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&units=${unitParam}&appid=${apiKey}`;
try {
const response = await fetch(url);
if (!response.ok) {
throw new Error(`Weather API error: ${response.statusText}`);
}
const data = await response.json();
return {
success: true,
data: {
temperature: Math.round(data.main.temp),
condition: data.weather[0].description,
humidity: data.main.humidity
}
};
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error'
};
}
}
Key things to notice:
- Typed parameters: TypeScript interfaces give you autocomplete and safety
- Context object: Access secrets, logging, and other OpenClaw services
- Standardized return: SkillResult ensures consistent error handling
- External API call: Just standard fetch—no special wrappers needed
Step 4: Add Tests
OpenClaw skills should be testable in isolation. Open tests/index.test.ts:
import { execute } from '../src/index';
import { mockContext } from '@openclaw/testing';
describe('weather-check', () => {
it('returns weather data for a valid city', async () => {
const result = await execute(
{ city: 'London', units: 'metric' },
mockContext({ secrets: { OPENWEATHER_API_KEY: 'test-key' } })
);
expect(result.success).toBe(true);
expect(result.data).toHaveProperty('temperature');
expect(result.data).toHaveProperty('condition');
});
it('handles invalid city gracefully', async () => {
const result = await execute(
{ city: 'NotARealCity12345' },
mockContext({ secrets: { OPENWEATHER_API_KEY: 'test-key' } })
);
expect(result.success).toBe(false);
expect(result.error).toBeDefined();
});
});
Run tests with:
npm test
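One caveat: as written, the first test calls the real OpenWeatherMap API with a dummy key, so it will fail offline (or return a 401). A common fix is to stub the global fetch with a canned response. The payload shape below mirrors the fields execute() reads; wiring it into a beforeEach hook is Jest-style and may differ in your runner:

```typescript
// A fake fetch returning a canned OpenWeatherMap-style payload, so the
// "valid city" test passes without a network call or a real API key.
// Wire it in with `globalThis.fetch = fakeFetch;` in a beforeEach hook.
const fakeFetch: typeof fetch = async () =>
  new Response(
    JSON.stringify({
      main: { temp: 21.6, humidity: 65 },
      weather: [{ description: "clear sky" }],
    }),
    { status: 200, headers: { "Content-Type": "application/json" } }
  );
```
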
Testing and Debugging
Before your skill goes live, test it in the OpenClaw REPL:
openclaw skill:test ./weather-check
This launches an interactive session where you can invoke your skill with natural language:
> What's the weather in Tokyo?
[weather-check] Executing with params: { city: "Tokyo", units: "metric" }
Result: { temperature: 22, condition: "clear sky", humidity: 65 }
It is 22°C and clear sky in Tokyo right now.
For debugging, add context.logger.debug() calls in your code:
context.logger.debug('Fetching weather', { city, url });
Run with --verbose to see debug output:
openclaw skill:test ./weather-check --verbose
Best Practices
Based on the OpenClaw community and my own experience building skills:
- Keep skills focused: One skill should do one thing well. A "weather" skill that also sends emails is doing too much.
- Validate inputs early: Fail fast with clear error messages. Do not let bad parameters propagate to external APIs.
- Handle rate limits: If you are calling third-party APIs, implement retry logic with exponential backoff.
- Document thoroughly: The SKILL.md is your contract. Other developers (and the AI itself) rely on it.
- Use secrets for credentials: Never hardcode API keys. Always use context.secrets.
- Write tests: Skills are code. Treat them with the same rigor as any production system.
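Two of these practices—validate early and retry with backoff—are easy to sketch in isolation. The helpers below are illustrative, not part of the OpenClaw API; you would call them at the top of execute() and around the fetch, respectively:

```typescript
// Fail fast on bad input before anything reaches an external API.
function validateCity(city: unknown): string {
  if (typeof city !== "string" || city.trim().length === 0) {
    throw new Error("`city` must be a non-empty string");
  }
  return city.trim();
}

// Retry a flaky async call with exponential backoff: 250ms, 500ms, 1s, ...
async function withBackoff<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait before the next attempt; the delay doubles each time.
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Inside execute(), this would look like `const city = validateCity(params.city);` up front, and `const response = await withBackoff(() => fetch(url));` around the network call.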
Next Steps
You now have a working OpenClaw skill. Here is what to do next:
- Register your skill locally:
openclaw skill:register ./weather-check
- Try the skill in a real agent conversation:
openclaw chat
- Publish to the registry (optional):
openclaw skill:publish ./weather-check
- Explore advanced features:
- Composable skills (skills that call other skills)
- Persistent state across invocations
- Streaming responses for long-running operations
- Custom triggers beyond natural language
- Read the docs: The OpenClaw documentation covers authentication patterns, deployment strategies, and the skill registry API.
Conclusion
Building agent skills does not have to be complicated. OpenClaw gives you a structured way to teach AI agents new capabilities without getting lost in infrastructure concerns. Start small—fetching weather, checking server status, or formatting data. Then compose those building blocks into more complex workflows.
The best skills solve real problems for real people. Now go build one.