90 devices found
Cheapest cloud option. 2GB RAM is tight for OpenClaw. Better suited for Nanobot or PicoClaw.
ARM-based cloud instance. Graviton3 is fast and cheap. OpenClaw runs, but 4GB is tight. Good for lightweight forks.
Tiny and cheap. Runs lightweight forks only.
Basic cloud instance. Runs vanilla OpenClaw comfortably. No local models.
Early 64-bit ARM SBC. 2GB RAM and slow storage limit it to PicoClaw and Nanobot. Cheap but dated.
The classic Pi. 1GB RAM is very tight. Only lightweight forks like PicoClaw and Nanobot are practical.
Raspberry Pi 3B form-factor alternative. 2GB RAM. Budget SBC for lightweight forks.
Budget ARM SBC from Pine64. 4GB RAM is enough for lightweight forks. Decent community support.
Comfortable cloud instance. OpenClaw runs well with room for services. No local models but all cloud APIs work.
Affordable RISC-V SBC with 8GB RAM. PicoClaw runs natively on RISC-V. OpenClaw works but slower than ARM equivalents.
Industrial-grade SBC with real-time PRU co-processors. Only 512MB RAM limits it to lightweight forks.
The baseline for running vanilla OpenClaw. Tight but workable.
StarFive's flagship RISC-V SBC. NVMe support is a nice touch. 8GB RAM is enough for OpenClaw if you're patient with RISC-V performance.
Pi 4 in a keyboard form factor. 4GB RAM handles mid-tier forks. Neat self-contained package for a desk setup.
Pine64's RISC-V board with 8GB RAM. Good PicoClaw target. OpenClaw works but RISC-V software ecosystem is still maturing.
The sweet spot for OpenClaw. Genuine headroom for multi-channel messaging and automation.
Hardkernel's fastest Amlogic SBC. 4GB RAM handles lightweight forks well. Rock-solid stability and mainline Linux support.
Budget GPU cloud instance. T4 handles 7B models at ~15 tokens/sec. Good balance of price and capability.
Powerful ARM SBC with RK3588S. Better CPU performance than Raspberry Pi 5. NPU for AI inference.
ASUS-quality SBC with RK3399. 4GB RAM and good I/O. Reliable but older chip compared to RK3588 boards.
Budget used Pixel. 6GB RAM runs Nanobot in Termux. Great value dedicated AI phone for $100.
High-end ARM SBC with RK3588. 16GB RAM and NVMe support make it a serious OpenClaw contender. NPU useful for local inference.
Refurbished business desktop repurposed as a server. 8GB RAM runs OpenClaw via cloud APIs. Best bang-for-buck dedicated box.
Google's Edge TPU board for ML inference. 4GB RAM handles lightweight forks. TPU accelerates specific model architectures.
Classic used ThinkPad. Runs OpenClaw fine as a background service. Great value.
Powerful ARM SBC with 16GB RAM and 6 TOPS NPU. Near-desktop performance for AI workloads.
Purpose-built home automation hub with CM4. 4GB RAM can run lightweight forks alongside Home Assistant.
Ultra-low-power N100 mini PC. 16GB RAM at $170 is incredible value. Perfect silent always-on OpenClaw server.
Premium SBC with NPU for AI acceleration. 8GB RAM and fast I/O make it good for OpenClaw with local inference.
Industrial AI SBC with 8 TOPS of neural network acceleration. 4GB RAM supports mid-tier forks. Serious edge AI platform.
Enterprise cloud GPU. 120 tokens/sec. Can run any model locally. Complete overkill for just OpenClaw.
Open-source ARM laptop. 4GB RAM limits it to lighter forks. Slow eMMC storage. Great for tinkering.
x86 SBC with Intel N5105. Full Windows/Linux compatibility. 8GB RAM runs OpenClaw natively without ARM quirks.
Popular 2-bay NAS. Runs Docker containers. Can host lightweight forks alongside file storage.
AI powerhouse SBC. Can run local models alongside OpenClaw. 22 tokens/sec on 7B models.
ARM Chromebook with Linux container support. 8GB RAM runs lightweight forks in Crostini. Limited by ChromeOS sandbox.
Affordable 2-bay NAS with N5105. 4GB RAM is tight but supports Docker. Good value for combined NAS + lightweight AI hosting.
Budget x86 mini PC with 12th-gen Intel. 16GB RAM is comfortable for OpenClaw. Good bang for the buck.
Affordable mini PC with desktop-class Ryzen. Great value for a dedicated OpenClaw box.
Enterprise-grade tiny desktop. ThinkCentre reliability. 16GB RAM and Ryzen 5600GE make it a solid always-on AI host.
Pre-built Jetson Orin Nano Super appliance. Plug-and-play OpenClaw. 25 tokens/sec.
Gaming handheld running SteamOS (Linux). Can run OpenClaw in desktop mode.
The original Apple Silicon Mac Mini. Still excellent for OpenClaw. SwiftClaw runs natively. Great used value.
ASUS mini PC with Zen 3+ and RDNA 2 iGPU. Compact and efficient. 16GB RAM handles OpenClaw plus services.
Budget desktop build. No dGPU but 16GB RAM runs OpenClaw via cloud APIs perfectly. Low power for always-on use.
Self-hosting appliance with app store. Pre-installed NVMe. 4GB RAM can run Nanobot or PicoClaw as an Umbrel app.
Powerful mini PC with 12th-gen i7. 32GB RAM and fast NVMe. Can handle multiple agents and services concurrently.
x86 SBC/mini PC hybrid. Dual Ethernet, multiple M.2 slots. Versatile platform for OpenClaw with expansion options.
4-bay NAS with x86 power. 8GB RAM and N5095 run Docker containers easily. Good dual-purpose NAS + OpenClaw host.
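The NAS entries above all come down to the same recipe: run a fork in a Docker container next to the file-storage services. A minimal sketch — the image name, volume path, and API-key variable are all assumptions, since this directory doesn't name an official image:

```shell
# Hypothetical Docker deployment of an OpenClaw-style agent on a NAS.
# Image name "openclaw/openclaw", the /volume1 path, and the env var
# are assumptions, not confirmed by this directory.
docker run -d \
  --name openclaw \
  --restart unless-stopped \
  -e ANTHROPIC_API_KEY="..." \
  -v /volume1/docker/openclaw:/data \
  openclaw/openclaw:latest
```

On the 4GB boxes listed above, consider adding `--memory 1g` so the agent can't starve the NAS's own services.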
Top-tier mini PC with Ryzen 7840HS. 32GB RAM and iGPU handle everything including local models. Whisper-quiet.
Upgraded Steam Deck with OLED screen and faster storage. Same APU but better thermals. OpenClaw in desktop mode is smooth.
Ultra-compact x86 mini PC. Silent operation. Perfect always-on OpenClaw server.
Popular 4-bay NAS with AMD Ryzen embedded. 4GB RAM is tight but Docker support works. Expandable to 32GB.
OpenClaw runs beautifully. 45 tokens/sec on 7B models. Near-instant responses.
Ultra-small form factor enterprise desktop. 13th-gen i5 with 14 cores. Silent and reliable for always-on operation.
Flagship Android tablet. 8GB RAM and Snapdragon 8 Gen 2. Termux runs well. DeX mode provides desktop-like experience.
Google's AI-focused phone. 8GB RAM runs Nanobot in Termux well. On-device ML via Tensor G3 is a bonus.
Windows gaming handheld with Zen 4 power. 16GB RAM and RDNA 3 iGPU. Runs OpenClaw natively on Windows or Linux.
Mid-range gaming/AI PC. RTX 3060 12GB handles 7B models at 30+ tokens/sec. Great value for local AI workloads.
Flagship Android with 16GB RAM. Termux + proot-distro gives full Linux. Enough RAM for vanilla OpenClaw in Termux.
Flagship Android phone. Termux + proot-distro gives full Linux. Can run vanilla OpenClaw via cloud APIs.
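The Termux + proot-distro path the phone entries rely on can be sketched as follows; the npm package name and first-run command are assumptions, since the fork's actual install steps aren't given here:

```shell
# In Termux (install it from F-Droid; the Play Store build is outdated):
pkg update -y && pkg install -y proot-distro

# Install and enter a full Debian userland under proot:
proot-distro install debian
proot-distro login debian

# Inside Debian -- a Node.js runtime is assumed here:
apt update && apt install -y nodejs npm
npm install -g openclaw   # package name is an assumption
```

Lightweight forks like Nanobot skip the proot layer and run in plain Termux, which is why they fit the 6GB phones above.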
Pocket PC with 32GB RAM and Ryzen 6800U. The most capable handheld for AI workloads. Can run local 7B models.
Linux-first ultrabook. Coreboot firmware. 16GB RAM runs OpenClaw natively on Pop!_OS. Great battery life.
Fanless ultrabook with Apple Silicon. 16GB unified memory runs OpenClaw and local 7B models simultaneously. SwiftClaw native.
Enterprise AMD laptop with Zen 4 and RDNA 3 iGPU. 16GB RAM handles OpenClaw well. Good Linux support.
Workstation-class mini desktop. ISV-certified. 32GB RAM and optional dGPU. Enterprise-grade reliability for production deployments.
Modular powerhouse. Runs everything plus local models with the GPU module.
Overkill for OpenClaw. Can run multiple agents simultaneously with local models.
Premium business ultrabook. 16GB RAM and fast SSD. Runs OpenClaw as a background service while you work. Enterprise-grade build.
Entry tower server with Xeon E. 32GB ECC RAM. Reliable 24/7 operation. Good for production OpenClaw deployments.
Enterprise-grade NAS with Xeon D and ECC RAM. 32GB handles OpenClaw plus ZFS without breaking a sweat.
Premium laptop with discrete GPU. RTX 4060 handles 7B models at 35+ tokens/sec. 32GB RAM is plenty for everything.
Compact server with embedded Xeon D. 64GB ECC in Mini-ITX form factor. Low power for a server. Perfect homelab AI host.
Pro laptop with M3 Pro. 18GB RAM handles 7B models at 50+ tokens/sec. Battery lasts all day even running agents.
Pro desktop powerhouse. 32GB unified memory runs 13B+ models locally. 60+ tokens/sec. Multiple concurrent agents easy.
Cutting-edge GPU cloud. H100 runs 70B models at 100+ tokens/sec. Enterprise-scale AI agent deployment.
Enterprise 2U rackmount server. 64GB RAM handles ClawLixir with thousands of concurrent users. Built for data center ops.
Absurd overkill for OpenClaw. 192GB unified memory can run 70B+ models locally. Runs dozens of concurrent agents without breaking a sweat.
Multi-GPU research cluster. 640GB total VRAM. Runs any model at absurd speeds. Way beyond what OpenClaw needs.
Ultra-low-power microcontroller. Only runs MimiClaw natively.
Tiny RISC-V SBC. Can run PicoClaw.
Sub-$30 Walmart Android phone. Can run Termux + lightweight forks. Surprisingly capable for the price.
OpenWrt travel router. 512MB RAM is tight but Claw++ can run alongside routing. Great for portable AI on the go.
Entry-level HA appliance. Only 1GB RAM severely limits AI agent options. PicoClaw is the only realistic choice.
All-in-one network appliance. 2GB RAM mostly used by UniFi. PicoClaw could run alongside but resources are tight.
Ultra-cheap RISC-V microcontroller. 400KB SRAM. Too constrained for any fork except maybe a future PicoClaw-Lite.
Tiny WiFi-capable MCU. 264KB SRAM. No OS, no fork compatibility. Could be an I/O peripheral for a larger system.
Industrial-grade Arduino with 8MB SDRAM. Runs MicroPython. Too constrained for real AI agents but useful as a sensor gateway.
Pro tablet with desktop-class M2 chip. SwiftClaw runs natively. No terminal access without a jailbreak, but native apps work.
Flagship iPhone with A17 Pro. SwiftClaw runs natively. No Termux equivalent but native Swift apps work beautifully.
Request a device to be added to the directory. We'll benchmark it and add compatibility verdicts.