"THE" Multi-programming System

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.

Multitasking is a concept of performing multiple tasks (also known as processes) over a certain period of time by executing them concurrently.

In multiprogramming systems, the running task keeps running until it performs an operation that requires waiting for an external event (e.g., reading from a tape) or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage.

In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or in response to an external event such as a hardware interrupt. Time-sharing systems are designed to allow several programs to execute apparently simultaneously.

In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs. Real-time systems are designed to control mechanical devices, such as industrial robots, which require timely processing.

The goal of the 'THE' operating system is to smoothly process a continuous flow of user programs as a service to the University:
  • i. reduction of turn-around time for programs of short duration
  • ii. economic use of peripheral devices
  • iii. automatic control of backing store to be combined with economic use of the central processor
  • iv. economic feasibility of using the machine for those applications that need only the flexibility of a general-purpose computer, but not its capacity or processing power
Microsoft PC
  • systems are optimized for the single-user experience
  • easy to use
  • more attention paid to performance
  • none paid to resource utilization
THE vs Windows
  • multi user v.s. single user
  • THE was designed from scratch, and the authors wanted the operating system to be applicable to different machines. Windows, by contrast, assumes the hardware has enough capacity to support it.

The main point of the paper?

A layer-wise structure is easier to verify, debug, and test.

A hierarchical structure of six logical levels (five implemented layers plus the user)

  • level 0: real-time clock interrupts, scheduling
  • level 1: memory management, paging
  • level 2: message interpreter (console)
  • level 3: devices, buffering, streams
  • level 4: user-level programs
  • level 5: the user

What layers does it have?

  • Layer 0 was responsible for the multiprogramming aspects of the operating system. It decided which process was allocated to the CPU, and accounted for processes that were blocked on semaphores. It dealt with interrupts and performed the context switches when a process change was required. This is the lowest level. In modern terms, this was the scheduler.
  • Layer 1 (memory management) was concerned with allocating memory to processes. In modern terms, this was the pager.
  • Layer 2 dealt with communication between the operating system and the console.
  • Layer 3 managed all I/O between the devices attached to the computer. This included buffering information from the various devices.
  • Layer 4 consisted of user programs. There were 5 processes: in total, they handled the compilation, execution, and printing of users' programs. When finished, they passed control back to the scheduling queue, which was priority-based, favoring recently started processes and ones that had blocked because of I/O.
  • Layer 5 was the user (as Dijkstra notes, "not implemented by us").
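The scheduling policy described for Layer 4 (a priority-based ready queue favoring recently started processes and ones that unblocked after I/O) can be sketched with a heap. The numeric weights below are made up purely for illustration; the paper does not give a formula.

```python
import heapq
import itertools

# Toy model of a priority-based ready queue that favors recently
# started processes and processes that just unblocked on I/O.
# The weights are assumptions for illustration, not from the paper.
tiebreak = itertools.count()    # FIFO order among equal priorities

def enqueue(q, name, recently_started=False, io_blocked=False):
    priority = 10
    if recently_started:
        priority -= 3           # assumed boost for young/short jobs
    if io_blocked:
        priority -= 5           # assumed boost after an I/O wait
    heapq.heappush(q, (priority, next(tiebreak), name))

def dispatch(q):
    return heapq.heappop(q)[2]  # lowest number = highest priority

q = []
enqueue(q, "long_batch_job")
enqueue(q, "editor", recently_started=True)
enqueue(q, "disk_reader", io_blocked=True)
print(dispatch(q))              # disk_reader
```

Dispatching repeatedly drains the queue in order: the I/O-boosted process first, then the recently started one, then the plain batch job.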

Pros & Cons of Layers (*)

  • Pros:
    • Clean Abstraction
    • Easy to test, verify
  • Cons:
    • 1) Not easy to design the set of layers needed
    • 2) Two layers may depend on each other, e.g., the pager and the scheduler: the scheduler manages the pager, which is itself a process that manages pages, while the pager defines data structures the scheduler uses. An exception must be found to break such dependencies, so a lot of time is spent on design before implementation begins.
    • 3) Efficiency and overhead: communication repeatedly crosses layer boundaries, which takes time. Embedded systems, which run only one application, often collapse those layers.

Synchronization

  • P (wait), V (signal)
  • There are two ways in which semaphores are used:

    1. Mutual exclusion
    2. Synchronization of cooperating processes
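Both uses of semaphores can be sketched in a few lines of Python, with `acquire` playing the role of P (wait) and `release` the role of V (signal). The variable names are illustrative.

```python
import threading

# Use 1: mutual exclusion -- a binary semaphore guards the critical section.
mutex = threading.Semaphore(1)
counter = 0

def increment():
    global counter
    for _ in range(10000):
        mutex.acquire()        # P (wait): enter critical section
        counter += 1
        mutex.release()        # V (signal): leave critical section

# Use 2: synchronization -- a semaphore initialized to 0 makes the
# consumer wait until the producer signals that data is ready.
ready = threading.Semaphore(0)
data, result = [], []

def producer():
    data.append(42)
    ready.release()            # V: announce the data is available

def consumer():
    ready.acquire()            # P: block until the producer signals
    result.append(data[0])

threads = [threading.Thread(target=increment) for _ in range(2)]
threads += [threading.Thread(target=consumer), threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter, result)         # 20000 [42]
```

Without the mutex, the two increment threads could interleave and lose updates; without the zero-initialized semaphore, the consumer could read `data` before the producer has written it.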

Goal of Sync

  • Mutual exclusion: guarding access to critical sections, achieved with locks and semaphores
  • Coordination: maintaining the order in which processes access data, achieved with barriers and condition variables
    • condition variable: a queue for waiting processes, referred to as a private semaphore
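The "private semaphore" idea can be sketched as a toy condition variable: each waiter sleeps on its own semaphore, and a signal wakes the longest-waiting one. The class and names below are illustrative, not from the paper.

```python
import threading
from collections import deque

class PrivateSemaphoreCondition:
    """Toy condition variable: every waiter sleeps on its own
    'private semaphore'; signal() wakes the longest-waiting one."""

    def __init__(self, lock):
        self.lock = lock                  # mutex that callers must hold
        self.waiters = deque()            # queue of waiting processes

    def wait(self):
        private = threading.Semaphore(0)  # this waiter's private semaphore
        self.waiters.append(private)
        self.lock.release()               # give up the mutex while asleep
        private.acquire()                 # P: sleep until signalled
        self.lock.acquire()               # re-take the mutex on wakeup

    def signal(self):
        if self.waiters:
            self.waiters.popleft().release()  # V on that waiter's semaphore

lock = threading.Lock()
cond = PrivateSemaphoreCondition(lock)
items, out = [], []

def consumer():
    with lock:
        while not items:                  # re-check the condition on wakeup
            cond.wait()
        out.append(items.pop())

def producer():
    with lock:
        items.append("x")
        cond.signal()

c = threading.Thread(target=consumer); c.start()
p = threading.Thread(target=producer); p.start()
c.join(); p.join()
print(out)    # ['x']
```

The `while` loop around `wait()` matters: the condition must be re-checked after every wakeup, since the signal only says "something changed", not "your condition now holds".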

Synchronize between two machines

  • Same machine
    • semaphores
    • messages
    • pipes: used in Windows, Linux, and iOS for communication between processes
  • Different machine
    • message
    • distributed shared memory
    • SOC

Harmonious cooperation

  • How were the four goals achieved?
  • What were the two major mistakes made?
    • Assuming everything works correctly
    • Lack of debugging support
  • deadlock: a process can wait forever for an answer from a dead process; the fix is to send a dummy answer
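The dummy-answer fix can be sketched with message queues: one "process" blocks waiting for a reply that will never come, and a watchdog breaks the wait by injecting a fake one. Names and structure are illustrative.

```python
import queue
import threading

# Sketch of the deadlock above: a process waits for an answer from a
# process that has already died; a watchdog injects a dummy answer so
# the waiter unblocks instead of hanging forever.
answers = queue.Queue()

def dead_process():
    return                       # exits without ever sending its answer

def watchdog():
    answers.put("dummy answer")  # fake the missing reply

worker = threading.Thread(target=dead_process)
worker.start()
worker.join()                    # the process is now dead; no answer was sent

threading.Thread(target=watchdog).start()
reply = answers.get()            # unblocks thanks to the dummy answer
print(reply)                     # dummy answer
```

The caller must of course be able to tell a dummy answer from a real one, so it can treat the request as failed rather than succeeded.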

Summary

  • Layered OS structure
    • level 0: real-time clock interrupts, scheduling
    • level 1: memory management, paging
    • level 2: message interpreter (console)
    • level 3: devices, buffering, streams
    • level 4: user-level programs
    • level 5: the user
  • Central abstraction: cooperating sequential processes
    • Semaphores for synchronization
    • Correctness (proofs + testing)
  • not very efficient
