This is xv6’s physical page allocator — the simplest possible memory allocator. It manages all of physical RAM as a collection of 4096-byte pages.

The data structure

The entire allocator is built on a free list — a singly linked list of free pages. But here’s the trick: there’s no separate list node struct allocated somewhere. Each free page stores the pointer to the next free page inside itself. That’s what struct run is:

struct run {
  struct run *next;
};

A free page is 4096 bytes. The first 8 bytes of that page are used as a next pointer. The remaining 4088 bytes are unused. When the page gets allocated, those 8 bytes get overwritten with whatever the caller puts there — the struct run ceases to exist. When the page is freed, the first 8 bytes become a next pointer again.

So the free list costs zero extra memory. The list metadata lives inside the free pages themselves.
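A minimal user-space sketch of that trick, assuming nothing beyond C11: `push_page` and the heap-allocated stand-in page are inventions for illustration, not xv6 names; in the kernel the page would be real physical RAM between `end` and PHYSTOP.

```c
#include <assert.h>
#include <stdlib.h>

#define PGSIZE 4096

/* Same shape as xv6's struct run: the list node IS the free page. */
struct run {
  struct run *next;
};

static struct run *freelist = 0;

/* Prepend a page to the free list by reusing the page's own first 8
   bytes as the link. No separate node is allocated anywhere. */
static void push_page(void *page) {
  struct run *r = (struct run *)page;
  r->next = freelist;   /* these 8 bytes were just free space */
  freelist = r;
}
```

After a push, the list head and the page are literally the same address, which is the whole point: the metadata is free.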

kinit

initlock(&kmem.lock, "kmem");
freerange(end, (void*)PHYSTOP);

Two things. Initialize the lock that protects the free list, then free all the memory between end and PHYSTOP. Remember end: that's the symbol from the linker script marking the first address past the kernel, where its .bss section ends. PHYSTOP is 0x88000000, the end of the 128 MB of RAM that starts at KERNBASE (0x80000000). Everything in between is free physical memory that nobody is using.

freerange rounds up the start address to a page boundary (because end might not be page-aligned), then loops through every page-sized chunk and calls kfree on each one. It’s building the free list by freeing every page one at a time.
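The rounding and the loop can be sketched like this. PGROUNDUP is xv6's own macro (from riscv.h); `pages_in_range` is a hypothetical helper that just counts the pages the real freerange would hand to kfree.

```c
#include <assert.h>
#include <stdint.h>

#define PGSIZE 4096
/* Round an address up to the next page boundary, as in xv6's riscv.h. */
#define PGROUNDUP(sz) (((sz) + PGSIZE - 1) & ~(PGSIZE - 1))

/* Count whole pages between pa_start and pa_end, using the same loop
   shape as xv6's freerange (which calls kfree on each one instead). */
static int pages_in_range(uint64_t pa_start, uint64_t pa_end) {
  int n = 0;
  for (uint64_t p = PGROUNDUP(pa_start); p + PGSIZE <= pa_end; p += PGSIZE)
    n++;
  return n;
}
```

Note the consequence of the round-up: if `end` sits mid-page, the partial page before the first boundary is simply never freed.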

kfree

Takes a physical address and adds that page to the free list. First it sanity-checks: the address must be page-aligned, must be at or after end (you can't free the kernel's own code and data), and must be below PHYSTOP. Then it fills the entire page with the value 1 (memset(pa, 1, PGSIZE)). This is junk-filling: if any code has a dangling pointer to this page and tries to read meaningful data from it, it gets garbage instead of stale valid-looking data. Makes bugs louder. Then it casts the page address to a struct run *, sets its next to the current head of the free list, and makes this page the new head. Standard linked list prepend.
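As a runnable sketch, assuming a user-space stand-in: `kfree_sim` is a made-up name, the page comes from the C heap, and the spinlock plus the end/PHYSTOP checks are omitted because they need kernel symbols.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PGSIZE 4096

struct run { struct run *next; };
static struct run *freelist = 0;

/* User-space sketch of xv6's kfree, minus the lock and sanity checks. */
static void kfree_sim(void *pa) {
  memset(pa, 1, PGSIZE);            /* junk-fill: dangling readers see garbage */
  struct run *r = (struct run *)pa; /* first 8 bytes become the link again */
  r->next = freelist;
  freelist = r;                     /* standard linked-list prepend */
}
```

The order matters: memset first, then write the link, so the next pointer survives the junk-fill.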

kalloc

The opposite. Pops the head of the free list. If the list is empty, returns 0 (out of memory). If it got a page, it fills it with the value 5 — again junk-filling, so the caller can’t accidentally rely on the page being zeroed. Caller is responsible for zeroing if they need that.
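The pop side, under the same assumptions: `alloc_page` and `free_page` are illustrative names, not xv6's, and the lock is again left out.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PGSIZE 4096

struct run { struct run *next; };
static struct run *freelist = 0;

/* Sketch of kfree's prepend, so the pop below has something to pop. */
static void free_page(void *pa) {
  struct run *r = (struct run *)pa;
  r->next = freelist;
  freelist = r;
}

/* User-space sketch of xv6's kalloc: pop the head of the free list
   and junk-fill the page with 5s. Returns 0 when the list is empty. */
static void *alloc_page(void) {
  struct run *r = freelist;
  if (r) {
    freelist = r->next;             /* advance the head */
    memset((char *)r, 5, PGSIZE);   /* callers must not rely on zeroes */
  }
  return (void *)r;                 /* 0 means out of memory */
}
```
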

The whole picture

At boot, kinit takes all free physical RAM and threads it into a linked list. Every subsequent kalloc pops a page off the front. Every kfree pushes a page back on. That’s the entire physical memory allocator. No buddy system, no slabs, no size classes. Just a linked list of 4096-byte pages. The lock makes it safe for multiple harts to allocate and free simultaneously.

Everything in xv6 that needs memory — page tables, kernel stacks, user process memory, pipe buffers — gets it through kalloc, one page at a time. If you need more than a page, you call kalloc multiple times. There is no multi-page allocation.
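The caller-side pattern that implies can be sketched as follows. `grab_pages` is hypothetical (xv6 has no such helper), and aligned_alloc stands in for kalloc; the key point is that the n pages come back one at a time and are generally not contiguous.

```c
#include <assert.h>
#include <stdlib.h>

#define PGSIZE 4096

/* Hypothetical caller-side loop: code that needs n pages calls the
   single-page allocator n times and keeps all n pointers. On failure
   it reports how many pages it got, so the caller can free them. */
static int grab_pages(void *pages[], int n) {
  for (int i = 0; i < n; i++) {
    pages[i] = aligned_alloc(PGSIZE, PGSIZE); /* in xv6: kalloc() */
    if (pages[i] == 0)
      return i;   /* out of memory partway through */
  }
  return n;
}
```
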