No pre-built apps. No binaries. Describe what you need in plain language, and the operating system generates it, sandboxes it, and runs it. Inspired by Karpathy's LLM OS concept.
These aren't guidelines. They're enforced at every layer — deterministic scans, AI review, and human oversight.
No telemetry. No tracking. No data exfiltration. Generated apps run in sandboxes with strict capability gates. When in doubt, deny access. User privacy is non-negotiable.
No artificial limits. No paywalls. Users can generate and run any software they want — as long as it doesn't harm others. The OS serves the user, not the other way around.
Use it freely. Adapt it as you see fit. But if you benefit from it, contribute back, even a little. Contributed code must not undermine the core idea.
These rules aren't perfect. Neither is this code. We can always improve — as long as the core intent isn't violated. Ship working code, iterate, improve.
From natural language to a running app in seconds. Every step is security-gated.
"Make me a todo list with categories" — plain language, no code required.
Routes to the best LLM: local Ollama for simple apps, Claude API for complex ones. Prompt-injection attempts are stripped before generation.
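A minimal sketch of what that routing step could look like. The complexity heuristic, model name, injection patterns, and the callClaude helper are illustrative assumptions, not the project's actual logic:

// Illustrative router sketch: picks a local Ollama model for simple requests
// and falls back to the Claude API for complex ones. Thresholds and model
// names are assumptions.
function estimateComplexity(request) {
  // Crude heuristic: longer, multi-feature requests are treated as complex.
  const featureHints = /(sync|auth|chart|realtime|multi-user|database)/i;
  return request.length > 200 || featureHints.test(request) ? "complex" : "simple";
}

function stripInjection(request) {
  // Remove common prompt-injection phrasing before it reaches the model.
  return request
    .replace(/ignore (all|any|previous) instructions/gi, "")
    .replace(/system prompt/gi, "");
}

async function generateApp(request) {
  const clean = stripInjection(request);
  if (estimateComplexity(clean) === "simple") {
    // Local Ollama endpoint (default port 11434).
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "llama3", prompt: clean, stream: false }),
    });
    return (await res.json()).response;
  }
  // Otherwise hand off to the Claude API.
  return callClaude(clean); // hypothetical helper, not part of the repo
}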
Deterministic regex/AST scan blocks eval(), dynamic imports, parent frame access, and encoded payloads. No LLM in the loop — no recursive injection.
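A simplified sketch of that kind of deterministic scan. The pattern list here is illustrative, not the project's full ruleset:

// Illustrative deterministic scanner: pure regex checks, no LLM involved.
const BLOCKED_PATTERNS = [
  { name: "eval",            re: /\beval\s*\(/ },
  { name: "function-ctor",   re: /new\s+Function\s*\(/ },
  { name: "dynamic-import",  re: /\bimport\s*\(/ },
  { name: "parent-frame",    re: /\b(window|self)\.(parent|top)\b/ },
  { name: "encoded-payload", re: /\batob\s*\(|\\x[0-9a-f]{2}/i },
];

function scanGeneratedCode(source) {
  const findings = BLOCKED_PATTERNS
    .filter(({ re }) => re.test(source))
    .map(({ name }) => name);
  return { ok: findings.length === 0, findings };
}

// Example: scanGeneratedCode('eval("x")') -> { ok: false, findings: ["eval"] }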
The app declares what it needs (storage, timers, network). You review and approve each one. The app gets nothing you don't explicitly allow.
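Conceptually, the capability request could look something like this. Field names and the approval flow are assumptions, not the project's exact schema:

// Illustrative capability manifest a generated app might declare.
const manifest = {
  name: "todo-with-categories",
  capabilities: ["storage", "timers"],   // no "network" -> no network access
};

// Kernel-side gate: only capabilities the user explicitly approved are granted.
function grantCapabilities(manifest, userApproved) {
  return manifest.capabilities.filter((cap) => userApproved.includes(cap));
}

// Example: grantCapabilities(manifest, ["storage"]) -> ["storage"]
// "timers" was requested but not approved, so the app never gets it.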
Isolated iframe with strict CSP. The SDK communicates with the kernel via postMessage. Every call is validated against your approved capabilities.
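A sketch of the kernel side of that message channel. The message shape, capability names, and the handleKernelCall dispatcher are assumptions:

// Illustrative kernel-side handler: every SDK call arrives via postMessage
// and is checked against the capabilities approved for that app.
const approved = new Map();   // appId -> Set of approved capability names

window.addEventListener("message", (event) => {
  // A real kernel would also verify event.origin matches the sandbox origin.
  const { appId, capability, action, payload } = event.data || {};
  const caps = approved.get(appId);
  if (!caps || !caps.has(capability)) {
    // Deny by default: unknown app or unapproved capability.
    event.source?.postMessage({ error: "capability denied", capability }, event.origin);
    return;
  }
  handleKernelCall(appId, capability, action, payload);   // hypothetical dispatcher
});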
Layered design — every layer gets replaced as we move toward a custom kernel.
Three paths. Pick the one that fits.
Open the repo in VS Code. Claude Code reads CLAUDE.md automatically and knows the entire project.
gh repo fork DayZAnder/llm-os --clone
cd llm-os
code .
# Tell Claude Code what to build:
# "Add WASM sandbox"
# "Improve the shell UI"
# "Audit the security"
Copy a ready-made prompt from CONTRIBUTING.md into your preferred AI assistant.
gh repo fork DayZAnder/llm-os --clone
cd llm-os
# Open CONTRIBUTING.md
# Copy a component prompt
# Paste into your AI tool
# Each prompt includes values context
Fork, read the README, run the prototype, and pick an issue.
gh repo fork DayZAnder/llm-os --clone
cd llm-os
cp .env.example .env
node src/server.js
# Open http://localhost:3000
# Pick an issue from GitHub
Three layers, no single point of trust. Every contribution is checked.
Regex-based static analysis runs locally and in CI. Detects telemetry, sandbox weakening, privacy violations, tracking code. Blocks merge on critical findings.
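The gate could be as simple as a script like the one below. The patterns are examples only; the real ruleset lives in the repo:

// Illustrative CI gate: scans the given files for values violations and
// fails the build on critical findings.
import { readFileSync } from "node:fs";

const CRITICAL = [
  { name: "telemetry/beacon",  re: /navigator\.sendBeacon|analytics|telemetry/i },
  { name: "sandbox-weakening", re: /allow-same-origin|unsafe-eval/ },
  { name: "tracking-pixel",    re: /track(ing)?\.(gif|png)|fingerprint/i },
];

const files = process.argv.slice(2);
const findings = [];
for (const file of files) {
  const text = readFileSync(file, "utf8");
  for (const { name, re } of CRITICAL) {
    if (re.test(text)) findings.push(`${file}: ${name}`);
  }
}

if (findings.length > 0) {
  console.error("Critical values findings:\n" + findings.join("\n"));
  process.exit(1);   // non-zero exit blocks the merge in CI
}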
Claude reviews every PR diff against the core values. Posts findings as comments. Catches subtle violations that regex can't see.
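A hedged sketch of how such a review step could be wired up. The model name, prompt, and environment variables are placeholders, not the project's actual workflow:

// Illustrative AI review step: sends the PR diff plus the core values to Claude
// and posts the result as a PR comment.
import Anthropic from "@anthropic-ai/sdk";
import { Octokit } from "@octokit/rest";

const anthropic = new Anthropic();   // reads ANTHROPIC_API_KEY from the environment
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function reviewPullRequest(owner, repo, prNumber, diff, values) {
  const msg = await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",   // placeholder model ID
    max_tokens: 1024,
    messages: [{
      role: "user",
      content: `Review this diff against these core values and flag violations.\n\nVALUES:\n${values}\n\nDIFF:\n${diff}`,
    }],
  });

  await octokit.issues.createComment({
    owner,
    repo,
    issue_number: prNumber,
    body: msg.content[0].text,   // post the model's findings on the PR
  });
}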
PR template requires values self-certification. Maintainer has final authority on edge cases. No automated system is trusted alone.
Boot a full LLM OS instance in your hypervisor. Alpine Linux + Docker + Node.js — ready to generate apps.
For Proxmox, KVM, QEMU, and libvirt. Import as a VM disk image.
Download QCOW2
For Hyper-V on Windows. Create a Gen 1 VM and attach as the primary disk.
Download VHDX
Log in with root / llmos, then open http://<vm-ip>:3000 in your browser.
Configure your LLM backend with llmos-config set OLLAMA_URL http://your-ollama:11434.
Change the default password on first login!
The next operating system won't ship apps — it'll generate them.
If that future interests you, start building.
Quick links: llm-os.dev/#start · llm-os.dev/#contribute