Getting Started with Multikernel

Split your Linux kernel into independent instances for your application and device processing. No hypervisor, no virtualization overhead, full hardware access.

1. Build the Kernel: Clone and compile the multikernel-enabled Linux kernel
2. Configure Boot: Reserve a memory pool for spawned kernel instances
3. Launch Instances: Use kerf to create, configure, and run kernel instances

What is Multikernel Linux?

Multikernel splits the Linux kernel into independent instances running simultaneously on the same physical machine. Your application gets its own kernel with dedicated CPUs, memory, and zero interrupt interference. Device drivers and I/O processing run in a separate kernel on separate cores. No hypervisor, native performance.

┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│   Web Server    │  │    Database     │  │   ML Training   │
├─────────────────┤  ├─────────────────┤  ├─────────────────┤
│  Linux Kernel   │  │  Linux Kernel   │  │  Linux Kernel   │
│  (Web-tuned)    │  │  (I/O-optimized)│  │  (GPU-optimized)│
├─────────────────┤  ├─────────────────┤  ├─────────────────┤
│   CPU + NIC     │  │   CPU + NVMe    │  │   CPU + GPU     │
└─────────────────┘  └─────────────────┘  └─────────────────┘

Each workload runs inside its own kernel with a tailored configuration, fully isolated from other instances. A driver crash or kernel exploit in one instance cannot affect the others. Resources can be rebalanced between kernels at runtime using standard Linux hotplug.
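The hotplug interface mentioned here is the stock Linux sysfs one, not a multikernel-specific API; a quick illustration of what it looks like:

```shell
# Standard Linux CPU hotplug interface (not multikernel-specific).
# List which CPUs are currently online in this kernel:
cat /sys/devices/system/cpu/online

# Taking a CPU offline (root required) frees it for reassignment
# to another kernel instance:
#   echo 0 > /sys/devices/system/cpu/cpu7/online
```

Memory and device hotplug follow the same sysfs pattern, which is what lets resources move between kernels without a reboot.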

                        Containers      VMs                  Multikernel
Isolation               Shared kernel   Full (hypervisor)    Separate kernels
Performance             Near-native     5-20% overhead       Native
Kernel customization    No              Yes                  Yes
Dynamic resources       Yes             Limited              Yes (hotplug)
Zero-downtime updates   App only        With orchestration   Kernel + app
Attack surface          Full kernel     Reduced              Minimal per instance

1 Build the Multikernel Kernel

Clone the multikernel-enabled Linux kernel and build it with multikernel support enabled.

git clone https://github.com/multikernel/linux.git
cd linux
make menuconfig  # Enable CONFIG_MULTIKERNEL
make -j$(nproc)
sudo make install

This produces a standard Linux kernel with multikernel extensions. It runs as your normal kernel and can additionally spawn new kernel instances.

2 Reserve a Memory Pool

Spawned kernels need a contiguous memory region. Add the following kernel boot parameter to your bootloader (GRUB, systemd-boot, etc.):

mkkernel_pool=1023M@0x40000000

This reserves 1023 MiB starting at physical address 0x40000000 for multikernel use. The primary kernel will not touch this memory, leaving it available for spawned instances.
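Before rebooting, it is worth sanity-checking the physical range the parameter covers against regions already claimed in /proc/iomem. A minimal shell sketch, assuming the `M` suffix means MiB as in the example above:

```shell
# Compute the physical address range covered by mkkernel_pool=SIZE@BASE.
# Assumption: the "M" suffix denotes MiB, as in the example above.
pool="1023M@0x40000000"
size_mib=${pool%%M@*}                     # -> 1023
base=$(( ${pool##*@} ))                   # -> 1073741824 (0x40000000)
end=$(( base + size_mib * 1024 * 1024 ))  # exclusive end of the pool
printf 'pool spans [0x%X, 0x%X)\n' "$base" "$end"
```

Any overlap between this range and an existing reservation in /proc/iomem will prevent the pool from being set up.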

Alternatively, you can use Lazy CMA to allocate this memory at runtime without any boot parameter.

3 Install the Kerf Management Tool

Kerf is the command-line tool for creating, configuring, and managing multikernel instances. It handles resource allocation, kernel loading, and instance lifecycle.

git clone https://github.com/multikernel/kerf.git
cd kerf
pip install -e .

4 Launch Your First Instance

Initialize a resource pool, create an instance, and launch it:

# Initialize: make CPUs 4-31 available for spawned kernels
kerf init --cpus=4-31

# Create an instance with 4 CPUs and 2 GB of memory
kerf create web-server --cpus=4-7 --memory=2GB

# Load a kernel and initrd into the instance
kerf load web-server --kernel=/boot/vmlinuz --initrd=/boot/initrd.img

# Boot the instance
kerf exec web-server

The spawned kernel boots on the assigned CPUs and memory, running directly on hardware. It has its own scheduler, its own network stack, and its own view of the devices assigned to it.
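When scripting around kerf, it can be handy to expand the CPU range syntax used above into individual ids. A small sketch that assumes the plain first-last form shown in the examples (kerf itself may accept richer list syntax):

```shell
# Expand a "first-last" CPU spec such as "4-7" (the form passed to
# kerf create above) into a comma-separated CPU list.
# Assumption: plain ranges only; comma-separated specs are not handled.
spec="4-7"
first=${spec%-*}
last=${spec#*-}
cpus=$(seq -s, "$first" "$last")
echo "cpus: $cpus"
```

This mirrors the cpulist format used throughout Linux sysfs, so the same expansion works for values read from /sys/devices/system/cpu/online.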

Key Components

The multikernel stack is composed of several open-source projects:

  • Multikernel Linux - Kernel patches that enable spawning and managing additional kernel instances using the kexec subsystem
  • Kerf - Orchestration tool for managing kernel instances, resources, and lifecycle
  • DAXFS - Shared filesystem across kernel instances using direct memory access, enabling shared container rootfs and zero-copy data sharing
  • Lazy CMA - Runtime contiguous memory allocator, provides memory pools for spawned kernels without boot-time reservation

How It Works Under the Hood

  • Kernel spawning via kexec - New kernels are launched using Linux's existing kexec mechanism, extended to run alongside the primary kernel rather than replacing it
  • Resource partitioning - CPUs, memory, and devices are split between kernels using standard hotplug interfaces
  • Hardware queue sharing - Modern NICs and SSDs have multiple hardware queues; each kernel gets exclusive access to specific queues for true hardware-level isolation
  • Inter-kernel communication - Kernels communicate over shared memory using vsock, a standard Linux socket type that applications can use without modification
  • Docker integration - Spawned kernels can boot directly into Docker images using DAXFS to share the container rootfs, with no OS init layer
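Because vsock is a standard socket family, existing tools already speak it across kernel instances. A hedged sketch using socat (1.7.4 or later for VSOCK address support), where CID 3 and port 5000 are illustrative assumptions rather than values defined by multikernel:

```shell
# Illustration only: the vsock addressing here (CID 3, port 5000) is an
# assumption; consult the multikernel docs for how CIDs map to instances.

# In the spawned kernel: echo back whatever arrives on vsock port 5000
socat VSOCK-LISTEN:5000,fork EXEC:cat &

# In the primary kernel: send a line to the instance with CID 3
echo "hello from the primary kernel" | socat - VSOCK-CONNECT:3:5000
```

Applications that already use AF_VSOCK sockets need no changes to communicate between instances.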

Ready to Try It?

All multikernel components are open source. Explore the code, file issues, or reach out for a technical evaluation.