Google's Edge TPU board for ML inference. Its 4GB of RAM handles lightweight forks, and the TPU can accelerate specific model architectures.
Verified benchmark results from the OpenClaw team.
No benchmark data yet.
Run ./clawbench/run.sh <device> <fork> to generate benchmarks.
Real experiences from users who tested these forks.
No community reports yet. Be the first to share your experience!
No comments yet. Be the first!
The Nanobot fork fits easily in the 4GB of RAM, and the TPU could accelerate specific ML tasks.
5s cold start