µ-Kernel

The goal of this work is to show that µ-kernel-based systems are usable in practice with good performance. L4 is a lean kernel featuring fast message-based synchronous IPC, a simple-to-use external paging mechanism, and a security mechanism based on secure domains.

The L4 µ-kernel is based on two basic concepts, threads and address spaces. A thread is an activity executing inside an address space. Cross-address-space communication, also called interprocess communication (IPC), is one of the most fundamental µ-kernel mechanisms. Other forms of communication, such as remote procedure call (RPC) and controlled thread migration between address spaces, can be constructed from the IPC primitive.
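
As a rough illustration of how RPC can be layered on top of the synchronous IPC primitive, the sketch below shows a client-side stub that sends a request and blocks for the reply. The message layout and ipc_call() are hypothetical placeholders, not the actual L4 interface; ipc_call() is stubbed locally only so the example compiles and runs.

```c
/*
 * Minimal sketch of building RPC on top of a synchronous IPC primitive.
 * The message layout and ipc_call() are hypothetical placeholders, not the
 * real L4 interface; ipc_call() is stubbed locally so the example runs.
 */
#include <stdint.h>
#include <stdio.h>

typedef int thread_id_t;

typedef struct {
    uint32_t label;     /* operation requested by the caller  */
    uint32_t words[3];  /* small in-register payload          */
} msg_t;

/* On L4 this would trap into the µ-kernel, which hands the message directly
 * to the destination thread and blocks the sender until the reply arrives.
 * Here it is faked locally so the sketch compiles and runs. */
static int ipc_call(thread_id_t dest, const msg_t *req, msg_t *reply)
{
    (void)dest;
    reply->words[0] = req->words[0] + req->words[1];  /* pretend the server ran */
    return 0;
}

/* An RPC stub is just a synchronous send followed by receipt of the reply. */
static long rpc_add(thread_id_t server, long a, long b)
{
    msg_t req = { .label = 1, /* hypothetical OP_ADD */
                  .words = { (uint32_t)a, (uint32_t)b, 0 } };
    msg_t rep = { 0 };

    if (ipc_call(server, &req, &rep) != 0)
        return -1;               /* IPC failure (timeout, dead server, ...) */
    return (long)rep.words[0];   /* result produced in the server's address space */
}

int main(void)
{
    printf("%ld\n", rpc_add(7 /* hypothetical server thread id */, 2, 3));  /* 5 */
    return 0;
}
```

On real hardware the call would cross address spaces: the µ-kernel transfers the message to the receiving thread and blocks the sender until the reply comes back.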

Background

Different Types of Kernel Designs

  • Monolithic kernel: Unix, Linux
  • Microkernel: L4
  • Hybrid kernel: Windows
  • Exokernel: research projects; might be helpful for driverless cars
  • Virtual machines? Used for server consolidation

Monolithic Kernels

  • All OS services operate in kernel space: Multics, Unix, Linux
  • Disadvantage
    • Hard to extend for special applications
    • Too heavy-weight for special applications

Microkernels

  • An operating system kernel that provides minimal services
    • IPC, VM, thread scheduling
    • Usually has some concept of threads or processes, address spaces, and interprocess communication (IPC)
  • Might not have a file system, device drivers, or a network stack; the rest goes into user space
    • Device drivers, networking, file system, user interface
  • Benefits
    • More stable with less services in kernel space
    • More extensible and customizable
  • Disadvantages
    • Lots of system calls
    • Requires more context switches
      • Each “system call” must switch to the kernel and then to another user-level process
      • Context switches are expensive
      • State must be saved and restored
      • TLB is flushed

Hybrid Kernels

  • Combine the best of both worlds
  • Speed and simple design of a monolithic kernel
  • Modularity and stability of a microkernel
  • Still similar to a monolithic kernel
  • The disadvantages of monolithic kernels still apply

Exokernel

  • Follows end-to-end principle
    • Extremely minimal
    • As few hardware abstractions as possible
    • Just allocates physical resources to apps
  • Disadvantages
    • More work for application developers

Summary: Kernels

  • Monolithic kernels
    • Advantages: performance
    • Disadvantages: difficult to debug and maintain
  • Microkernels
    • Advantages: more reliable and secure
    • Disadvantages: more overhead
  • Hybrid Kernels
    • Advantages: benefits of monolithic and microkernels
    • Disadvantages: same as monolithic kernels
  • Exokernels
    • Advantages: minimal and simple
    • Disadvantages: more work for application developers

The L4 µ-Kernel

  • 2nd-generation microkernel
  • Similar in concept to Mach
  • Designed from scratch, rather than derived from a monolithic kernel
  • Even more minimal
  • Uses user-level pagers (external paging)
  • Tasks, threads, IPC

L4-Linux

  • Linux source has two cleanly separated parts
    • Architecture-dependent code
    • Architecture-independent code
  • In L4Linux
    • The architecture-dependent code is modified for L4
    • The architecture-independent part is unchanged
  • L4 not specifically modified to support Linux

Linux kernel as L4 user service

  • Runs as an L4 thread in a single L4 address space
    • Creates L4 threads for its user processes
    • Maps parts of its address space to user process threads (using L4 primitives)
    • Acts as pager thread for its user threads
    • Has its own logical page table
    • Multiplexes its own single thread (to avoid having to change Linux source code)
  • How is Linux supported on top of L4?
    • Linux runs as a user-level server (process)
      • Only the arch-dependent modules of the kernel are modified
      • System calls are messages between the user process and the Linux server
    • Full (unmodified) binary compatibility with applications

How is memory managed?

  • Physical memory from the initial address space (σ0) is mapped to the Linux server, which thus owns a subset of the physical address space
    • The server can access physical memory directly (virtual address = physical address)
    • Physical memory occupies the low address range of the server's address space
  • The Linux server acts as pager for its user processes
    • User processes have virtual address spaces larger than physical memory
    • The hardware page tables are kept inside L4 and are not accessible to user processes

Interrupt handling and Device Drivers

  • L4 maps hardware interrupts to IPC messages; the Linux top-half interrupt handlers are implemented as threads waiting for these messages, one thread per interrupt source (see the sketch below)
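
A compile-only sketch of such a handler thread, assuming invented helper names (l4_receive_irq_msg, linux_top_half, l4_acknowledge_irq) rather than the real L4 or L4Linux interfaces; the actual mechanism is an ordinary L4 IPC receive.

```c
/*
 * Compile-only sketch of a per-interrupt top-half thread in the L4Linux server.
 * All three helper functions are invented stand-ins for illustration.
 */
#include <stdbool.h>

typedef int irq_t;

extern bool l4_receive_irq_msg(irq_t irq);   /* block until L4 sends the IPC
                                                message for this interrupt    */
extern void linux_top_half(irq_t irq);       /* unchanged Linux top-half code  */
extern void l4_acknowledge_irq(irq_t irq);   /* re-enable the interrupt source */

/* One such thread exists per interrupt source. */
void irq_thread(irq_t irq)
{
    for (;;) {
        if (!l4_receive_irq_msg(irq))        /* IPC receive from the µ-kernel  */
            continue;                        /* IPC error or spurious wakeup   */
        linux_top_half(irq);                 /* run the driver's handler       */
        l4_acknowledge_irq(irq);             /* allow the next interrupt       */
    }
}
```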

Linux User Process

  • The Linux server creates each user process as an L4 task and acts as its pager
  • L4 converts user-process page faults into RPCs to the Linux server (sketched below)
  • The Linux server has complete control over the Linux user address space
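
A hedged sketch of this pager role: the Linux server loops receiving the page-fault messages that L4 generates for faulting user threads, consults its own logical page tables, and replies with a mapping. The fault-message format and helper functions below are illustrative placeholders, not the actual L4 or L4Linux interfaces.

```c
/*
 * Compile-only sketch of the Linux server's pager loop. All names and
 * message formats here are illustrative placeholders.
 */
#include <stdbool.h>
#include <stdint.h>

typedef int thread_id_t;

typedef struct {
    thread_id_t faulter;   /* user thread that faulted  */
    uintptr_t   vaddr;     /* faulting virtual address  */
    bool        write;     /* was it a write access?    */
} page_fault_t;

extern void      wait_for_page_fault(page_fault_t *pf);  /* IPC receive from L4 */
extern uintptr_t linux_lookup_or_alloc(thread_id_t t, uintptr_t vaddr, bool write);
extern void      reply_with_mapping(thread_id_t t, uintptr_t vaddr, uintptr_t frame);
extern void      linux_send_sigsegv(thread_id_t t);

void pager_loop(void)
{
    for (;;) {
        page_fault_t pf;
        wait_for_page_fault(&pf);      /* L4 turned the user fault into an RPC */

        /* Consult the server's own logical page tables; the hardware page
         * tables stay inside L4 and are only updated via the mapping reply.  */
        uintptr_t frame = linux_lookup_or_alloc(pf.faulter, pf.vaddr, pf.write);

        if (frame != 0)
            reply_with_mapping(pf.faulter, pf.vaddr, frame);  /* map and resume */
        else
            linux_send_sigsegv(pf.faulter);                   /* invalid access */
    }
}
```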

System Call mechanism

  • Implemented using remote procedure calls, i.e., IPC between user processes and the Linux server (see the sketch after this list)
  • Three system-call interfaces
    • A modified version of the standard shared C library, which uses L4 IPC primitives to call the Linux server
      • Fast; most available Linux software is linked against the shared library
    • A correspondingly modified version of the static libc.a library
      • Some programs can be statically relinked against the modified libc.a
    • A user-level exception handler ("trampoline")
      • Necessary for binary compatibility
      • Bad performance
      • For unmodified native Linux applications there is a "trampoline"
        • The application traps to the kernel as normal
        • The kernel bounces control to a user-level exception handler
        • The handler calls the modified shared library
  • How is application binary compatibility provided for syscalls?
    • Shared libraries
      • An emulation layer between the Linux API and the Linux server
    • Trampoline
      • Syscalls in statically linked, unmodified Linux binaries are reflected back to the emulation layer
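
The sketch below illustrates the first interface: a modified C-library stub that replaces the usual trap instruction with an IPC call to the Linux server. The message layout and ipc_call_linux_server() are invented for illustration; only the general idea (system call = IPC to the server) comes from the paper.

```c
/*
 * Compile-only sketch of a modified C-library system-call stub: the usual
 * trap instruction is replaced by an IPC call to the Linux server thread.
 * The message layout and ipc_call_linux_server() are invented names.
 */
#include <stdint.h>

typedef struct {
    uint32_t sysno;     /* Linux system-call number           */
    uint32_t args[5];   /* register arguments (ebx, ecx, ...) */
} syscall_msg_t;

extern long ipc_call_linux_server(const syscall_msg_t *msg);  /* blocks for reply */

/* Replacement for the usual "int 0x80" stub on x86. */
static long l4linux_syscall(uint32_t sysno, uint32_t a1, uint32_t a2, uint32_t a3)
{
    syscall_msg_t msg = { .sysno = sysno, .args = { a1, a2, a3, 0, 0 } };
    return ipc_call_linux_server(&msg);   /* result arrives in the reply IPC */
}

/* Example: read(2) forwarded to the Linux server (syscall number 3 on i386). */
long my_read(int fd, void *buf, unsigned long count)
{
    return l4linux_syscall(3, (uint32_t)fd,
                           (uint32_t)(uintptr_t)buf, (uint32_t)count);
}
```

Statically linked, unmodified binaries still execute the original trap; the trampoline's user-level exception handler catches that trap and funnels it into the same IPC path.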

Signaling (emulated)

Scheduling (simple round-robin w/ strict priority)

Tagged TLBs

  • Translation Lookaside Buffer (TLB) caches page table lookups
  • On context switch, TLB needs to be flushed
  • A tagged TLB tags each entry with an address space label, avoiding flushes
  • A Pentium CPU can emulate a tagged TLB for small address spaces
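
Conceptually, a tagged TLB stores an address-space identifier (ASID) with every entry, so a lookup only matches entries of the currently running space and a context switch just changes the current ASID instead of flushing. The C model below is purely illustrative and does not correspond to any particular CPU's implementation.

```c
/*
 * Illustrative software model of a tagged TLB: each entry carries an ASID,
 * so a context switch only changes current_asid instead of flushing.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 64

typedef struct {
    bool      valid;
    uint16_t  asid;   /* tag: which address space owns this translation */
    uintptr_t vpn;    /* virtual page number   */
    uintptr_t pfn;    /* physical frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint16_t    current_asid;   /* updated on a context switch, no flush */

/* Returns true and fills *pfn on a hit for the currently running space. */
static bool tlb_lookup(uintptr_t vpn, uintptr_t *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].asid == current_asid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return true;
        }
    }
    return false;   /* miss: walk the page table and refill */
}

int main(void)
{
    tlb[0] = (tlb_entry_t){ .valid = true, .asid = 1, .vpn = 0x42, .pfn = 0x1000 };

    uintptr_t pfn;
    current_asid = 1;
    printf("space 1: %s\n", tlb_lookup(0x42, &pfn) ? "hit" : "miss");  /* hit  */

    current_asid = 2;   /* "context switch": just a new tag, nothing flushed */
    printf("space 2: %s\n", tlb_lookup(0x42, &pfn) ? "hit" : "miss");  /* miss */
    return 0;
}
```

The Pentium has no such tags; roughly speaking, L4 approximates the effect by packing several "small" address spaces into one hardware address space using segment registers, so switching between them avoids a full TLB flush.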

Questions

Micro Kernel Cons

  • A significant portion of the performance penalty of using a microkernel comes from the added work to reload the page table into the TLB on every context switch
  • Since L4 can run tasks in small address spaces, it effectively runs with a simulated tagged TLB
  • Thus, the TLB is not flushed on every context switch
  • Note that some TLB entries will still be evicted, but not as many

Why no longer micro-kernel?

  • Specialization, and worries about performance
  • Use a micro-kernel to try out ideas: micro-kernel first, try the file system at user level, and once it works well, push it into the kernel.

Are you convinced of the µ-kernel argument? What is the demand for specialization and extensions?

Similarities and differences between a µ-kernel and a VM hypervisor? (possible exam question)

https://microkerneldude.wordpress.com/2008/04/03/microkernels-vs-hypervisors/

The short answer is that a microkernel is a possible implementation of a hypervisor (the right implementation, IMHO), but can do much more than just providing virtual machines.

A hypervisor, also called a virtual-machine monitor, is the software that implements virtual machines. It is designed for the sole purpose of running de-privileged “guest” operating systems on top (except for the deceptive pseudo-virtualizers). As such it is (or contains) a kernel (defined as software running in the most privileged mode of the hardware).

A microkernel is a minimal base for building arbitrary systems (including virtual machines). It is characterised as containing the minimal amount of code that must run in the most privileged mode of the hardware in order to build arbitrary (yet secure) systems.

By definition (the generality requirement), a microkernel can be used to implement a hypervisor.

Can a hypervisor be used to implement a microkernel? In general not. As said above, a hypervisor is designed for a single purpose, and that is to run guest OSes. It could be used to virtualize a microkernel, but that isn’t the same (and would certainly result in sucking performance).

The reason is that a hypervisor generally lacks the minimality of a microkernel. While less powerful (in the sense that it doesn’t have the generality of a microkernel) it typically has a much larger trusted computing base (TCB) than a microkernel. It contains all the virtualization logic, and all physical device drivers needed to support the virtual machines. The Xen hypervisor itself is about 5–10 times the size (in LOC) of the OKL4 microkernel. In addition, it has the privileged special virtual machine “Dom0”, which contains a complete Linux system.

Microkernels are virtual-machine monitors done right, and more.

results matching ""

    No results matching ""