
One-Click Deploy
Standalone Lobster

Quickly run multiple standalone
OpenClaw apps on Arm devices

Run this in your Arm device's terminal:

curl -fsSL https://aijishu.com/install.sh | bash

Core Features

AI application infrastructure built for Arm devices

One-Click Deploy

No manual environment setup. Jishu handles everything, so you're up and running in minutes

Capability Integration

Scan a QR code to connect WeChat or Feishu, then quickly use Skills, MCP, and a rich set of tools

Security Sandbox

Container-based isolation for Claw apps and system environment, independent API Key management, selective data authorization

Multi-Instance

Run multiple Claw instances in parallel on a single device, fully utilizing hardware capabilities
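The sandboxing and multi-instance model described above can be pictured as a container layout. The following Compose fragment is a hypothetical sketch (image names, env-file paths, and network names are illustrative, not the actual Jishu configuration): each Claw instance gets its own container, its own API-key env file, and its own network, so instances cannot read each other's data.

```yaml
# Hypothetical sketch: two isolated Claw instances on one device.
# All names below are illustrative, not the real Jishu config.
services:
  claw-app-1:
    image: openclaw/app:latest      # assumed image name
    env_file: ./keys/app1.env       # per-instance API keys
    read_only: true                 # immutable root filesystem
    networks: [claw-net-1]
  claw-app-2:
    image: openclaw/app:latest
    env_file: ./keys/app2.env
    read_only: true
    networks: [claw-net-2]
networks:
  claw-net-1: {}                    # separate networks keep instances isolated
  claw-net-2: {}
```

Separate env files mean revoking one instance's API key never touches the others, which is the "independent API Key management" property described above.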

Unified Management

Centrally manage versions, configs, and status of all AI apps from a single interface

Why Not PC or Cloud?

A dedicated Arm device running Lobster delivers the best experience

vs PC
  • Independence: PCs serve multiple purposes — work and life data interfere with each other
  • Environment: A complex Windows desktop limits Lobster's capabilities, and the machine must stay awake
  • Power: 5W vs 100W+ (Raspberry Pi 5 ~$70, PC costs hundreds) — significant difference for 24/7 operation
vs Cloud Server
  • Security: All data uploaded to cloud — leak and abuse risks
  • Design: Contradicts the standalone-AI principle of local data and local execution; a cloud server is not a private deployment
  • Experience: Higher network latency, and datacenter IPs are flagged as servers rather than personal devices, so some web access is restricted
Future: Local Inference · Embodied AI
  • As local computing power improves, data interaction and Agent execution can be powered by local GPU/NPU, enabling fully offline operation and ensuring data privacy
  • Connect cameras, wheels, limbs, and sensors to turn your Arm device into an embodied AI terminal that can truly perceive and act
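The 5W-vs-100W+ comparison above translates directly into running cost. A quick estimate (the $0.15/kWh electricity rate is an assumed illustrative figure; the wattages are from the comparison above):

```python
# Rough annual energy cost of 24/7 operation.
# Wattages from the PC comparison above; $0.15/kWh is an assumed rate.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.15

def annual_cost(watts: float) -> float:
    """USD cost of running a device continuously for one year."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_USD_PER_KWH

pi_cost = annual_cost(5)     # Raspberry Pi 5-class board
pc_cost = annual_cost(100)   # typical desktop PC, conservative

print(f"Pi: ${pi_cost:.2f}/yr vs PC: ${pc_cost:.2f}/yr")
```

At these assumed rates the always-on PC costs roughly twenty times as much per year in electricity alone, before counting the higher purchase price.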

Supported Hardware

Hardware: Raspberry Pi 5 or equivalent — quad-core Arm Cortex-A76 or better, 4GB RAM, 16GB storage
OS: Ubuntu 22.04+, Debian 12+, macOS 26
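The minimum spec above can be checked before installing. This is a minimal pre-flight sketch, not the official installer's logic; the thresholds (aarch64, 4 cores, 4GB RAM) follow the requirements stated above.

```shell
# Sketch of a pre-flight hardware check against the minimum spec above.
# Not the official installer logic; thresholds follow the stated requirements.
check_specs() {
    local arch="$1" cores="$2" mem_gb="$3"
    # aarch64 architecture, at least 4 cores, at least 4 GB RAM
    [ "$arch" = "aarch64" ] && [ "$cores" -ge 4 ] && [ "$mem_gb" -ge 4 ]
}

# Example with Raspberry Pi 5 class values; on a real device you would
# pass live values, e.g. check_specs "$(uname -m)" "$(nproc)" <ram_gb>
if check_specs aarch64 4 4; then
    echo "meets minimum spec"
fi
```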

Raspberry Pi 5

BCM2712 · 4×Cortex-A76 · 4/8GB

Affordable, mature ecosystem — ideal for running standalone AI apps

Minimum: Cortex-A76 quad-core or above + 4GB RAM

Nvidia Jetson Orin / Thor

Cortex-A78AE / Neoverse-V3AE · 4~128GB

Native Nvidia GPU integration — supports local model inference at varying scales

Ref: Jetson Orin Nano · Jetson AGX Orin · Jetson Thor T4000/T5000

Mac Mini / Studio

Coming Soon

Apple Silicon M1~M5 · Unified Memory 8~192GB

Apple unified memory architecture — peak local inference, top choice for multi-instance on a single device

Rockchip RK3588

4×Cortex-A76 + 4×Cortex-A55 · up to 32GB

Cost-effective Chinese-made edge chip, 6 TOPS NPU, octa-core big.LITTLE architecture

Ref board: Firefly EC-I3588J

CIX P1

8×Cortex-A720 + 4×Cortex-A520 · up to 64GB

Chinese-made high-performance Arm SoC, ARMv9.2 cores, with 30 TOPS "Zhouyi" NPU

Ref board: Radxa Orion O6

Huixi Guangzhi R1

24×Cortex-A78AE · 32~128GB

Chinese-made automotive-grade high-performance AI chip, 500 TOPS Rhino NPU

Mediatek Genio720

2×Cortex-A78 + 6×Cortex-A55 · up to 16GB

Leading AIoT chip, 6nm process, 10 TOPS NPU, AnTuTu score up to 800K

Ref board: Zelustek G720 development board

More Arm Devices

More Arm SoCs are under evaluation — you're welcome to try other chips and devices; stay tuned for announcements

Coming Soon

Early Partners

Join us in building the ecosystem

Rockchip
Rhino
CIX
ZSpace
Ugreen
Edatech
Radxa
Orange Pi
Firefly
Feiling
Zelustek
Industio
Motospace
Myir
Sanstar
Shimeitai
Youyeetoo
Szqhy
Vanoak
Hengbot

Business: bd@aijishu.com

Scan to Join Beta Groups

Join our beta community groups for the latest product updates

Group 1
Group 2
Group 3