IPL Process




Overview of OS/360 with HASP


This write-up gives a quick overview of the process by which any OS/360 system is initialized, how storage is used (particularly in OS/360/HASP), and describes how OS/360 is modified by the use of HASP.  HASP is the short name for the Houston Automatic Spooling Priority System.  The storage layout described is from a University system using an IBM 360/67 system.


1         Initialization – Getting a system up and running

Consider a computer with no operating system currently in it.  The first necessity is to get a workable operating system in it, so that JOBs can be run.  This is not a trivial process: note that there is no program fetch resident in the machine, no I/O access method routines, and not even a current set of PSWs in low core for directing interrupt actions.

For OS/360, the initialization process is composed of two parts: IPL and NIP.  IPL (Initial Program Loader) initializes memory and some other things, and brings the nucleus (the core of the OS) into memory.  NIP (Nucleus Initialization Program) performs the remaining actions required to set up a specific nucleus to be ready to execute.

1.1      IPL – Initial Program Loader

The process of getting an OS/360 system running is called IPLing, and includes the following main steps:

  1. The operator makes sure the disk pack called SYSRES (System Residence) is mounted on a disk drive.  The load unit switches are set to the device address of the SYSRES disk pack, and the load button is pressed.  This causes the control record, the first record on the disk pack, to be read into location 0 in memory; it consists of a PSW and two CCWs.  The PSW (Program Status Word) controls how a System/360 computer functions; a CCW (Channel Command Word) defines how data is read from an I/O device.  The sequence ends with, in effect, an LPSW of the PSW at location 0, giving control to the IPL program.
  1. IPL selects which nucleus will be loaded (there may be a choice, which can be given by switches on the System/360 operator console).  The System/370 and System/390 consoles have service frames where the nucleus information is stored.
  1. IPL clears all memory above itself to zero, also obtaining the size of memory; i.e., it stores until an addressing interrupt occurs.
  1. IPL clears the floating-point registers, thus finding out whether floating point is installed.  This applies only to System/360 machines; newer machines have floating-point support as standard.
  1. IPL brings the nucleus into memory: first, it relocates the part of itself not yet executed into high memory (near 252K), so that the nucleus can be placed beginning at memory location zero.  It then simulates program fetch, loading the CSECTs of the nucleus load module into memory.  The first CSECT loaded is NIP, loaded just below IPL, followed by the I/O interrupt handler at location zero (which thus defines all of the special PSWs in low memory).  IPL then passes control to NIP.
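The memory-sizing trick in step 3 (storing until an addressing interrupt occurs) can be illustrated with a small sketch.  This is purely an analogy: the real IPL is System/360 assembler, and here Python's IndexError plays the role of the addressing interrupt.  All names are invented.

```python
def probe_memory_size(store_byte, step=2048):
    """Determine installed memory size the way IPL does: keep storing
    zeros at successively higher addresses until the machine raises an
    addressing exception, then report how far we got.

    `store_byte(addr)` stands in for a real store instruction; an
    out-of-range store raises IndexError, playing the role of the
    System/360 addressing interrupt."""
    addr = 0
    while True:
        try:
            store_byte(addr)        # clear memory as we go, like IPL
        except IndexError:          # "addressing interrupt"
            return addr             # first invalid address == memory size
        addr += step

# Toy machine with 256 KB of storage:
memory = bytearray(256 * 1024)

def store_byte(addr):
    memory[addr] = 0

size = probe_memory_size(store_byte)   # 262144, i.e. 256 KB
```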

1.2      NIP – Nucleus Initialization Program

The IPL process described above applies to all versions of OS/360.  NIP is generated in different ways, depending on the specific type of system and the choice of options desired.  Note: NIP is a CSECT which is link-edited with the nucleus, so that it can refer to sections of the nucleus via address constants and provide efficient, specific initialization services.  It includes the following steps:

  1. The CVT (Communications Vector Table) is initialized, and its address is placed at location 16, so that it can be accessed from any routine, whether part of the nucleus or not.
  1. NIP determines whether the System/360 computer has LCS (Large Core Storage) attached to it or not.  Yes, that memory was truly core storage, with little donuts hung between crossing wires!
  1. NIP checks the workability of the operator console(s), and also checks the workability of ready direct-access storage devices (DASD), using Test I/O (TIO) instructions.  It particularly checks that the SYSRES volume is mounted and contains certain datasets needed by the system.
  1. NIP performs various housekeeping actions, such as checking and setting the timer to make sure it is working correctly, initializing some pointers for storage management, and initializing the SVC table (which gives a pointer to each routine associated with a defined SVC number).  It also sets up to be able to obtain modules from the SYS1.LINKLIB dataset, which contains the heaviest-used load modules for the system, and establishes communications with the operator.
  1. For any system having one, NIP loads reentrant modules into the Link Pack.  These modules can be used during subsequent execution, and are loaded at the high end of memory.  In a system with fast memory and LCS, the Link Pack can be split, residing at both the high end of fast memory and the high end of LCS.  In virtual systems, the Link Pack can reside both below and above the 16MB line.
  1. With the addition of various other miscellaneous operations, NIP prepares a region which will contain the Master Scheduler, the program that does overall JOB scheduling and operator communication.  The system is finally ready to run JOBs.

At this point, memory layout is as follows:


High Address

        Link Pack – reentrant modules

        Master Scheduler

        Free Area – dynamic area for problem programs

        SQS (System Queue Space)

        Control blocks – CVT, TCBs, and others

Low Address

2         Running JOBs in an OS/360 System

This section describes how JOBs are run in a standard OS/360 system using MVT.  Two versions of OS/360 were developed: MFT (Multiprogramming with a Fixed number of Tasks) and MVT (Multiprogramming with a Variable number of Tasks).

2.1      Reading Input Streams 

For each existing input stream (card reader, or input on tape), the operator can issue a start READER command (S RDR).  This causes a copy of the reader/interpreter program (referred to hereafter as RDR) to read card images from the requested input device.

During its operation, a RDR reads an input stream, scans JCL cards and converts them to a standard internal text form, and also obtains cataloged procedure definitions from the procedure library (PROCLIB).  From the internal text, it builds input queue entries representing the information on the user JCL cards.  It also writes any input data cards onto disk, while placing pointers to the data into the input queue entries so that it can be found later.  The JOB's input queue entry is enqueued in priority order with other JOBs awaiting execution.

When all the cards for a JOB have been read, it has in effect been split up into the following:

  1. Input queue entries, in priority order, in a special system data set used only for work queue entries, referred to as SYSJOBQUE.
  1. Input stream data sets, placed on DASD (using normal OS/360 Direct Access Storage Device Management (DASDM) routines).  Note: DASDM often requires a fair number of accesses to disk to look for free space and to allocate the space appropriately.  The DASDM routines are quite general and powerful, but also create some overhead.
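The split above can be pictured with a toy model (illustrative only: real SYSJOBQUE entries are binary control blocks, not Python objects, and all names here are invented).  Each entry carries the JCL internal text plus pointers to the spooled data, and entries come off in priority order:

```python
import heapq

class InputQueue:
    """Toy model of SYSJOBQUE: queue entries held in priority order,
    each pointing at the spooled input data rather than containing it."""
    def __init__(self):
        self._heap = []
        self._seq = 0               # tie-breaker: FIFO within a priority

    def enqueue(self, priority, jobname, jcl_text, data_ptrs):
        # heapq is a min-heap, so negate priority (higher priority first)
        entry = {"job": jobname, "jcl": jcl_text, "data": data_ptrs}
        heapq.heappush(self._heap, (-priority, self._seq, entry))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = InputQueue()
q.enqueue(5, "PAYROLL", ["//PAYROLL JOB ..."], ["spool: cyl 101"])
q.enqueue(9, "URGENT",  ["//URGENT  JOB ..."], ["spool: cyl 102"])
first = q.dequeue()["job"]          # highest-priority JOB comes off first
```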

2.2      Initiating JOB STEPs 

The operator may start one or more initiators, each of which can initiate JOBs from one or more classes (categories) of JOBs.  Each initiator will then attempt to initiate the highest-priority JOB from the first of its classes which has a ready JOB.  If there are no JOBs awaiting execution in its allowed classes, it waits for one to become available.  Note that it essentially removes input queue entries from SYSJOBQUE.  Like every RDR, each initiator is executed as a separate task.  (Initiator may be abbreviated INIT.)
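The selection rule (highest-priority JOB from the first allowed class that has one ready) can be sketched as follows.  This is a toy model; all names are invented:

```python
def select_job(classes, queues):
    """Pick the next JOB for an initiator.

    `classes` is the initiator's ordered list of JOB classes, e.g. ["A", "B"];
    `queues` maps each class to a list of (priority, jobname) entries.
    The first class with any ready JOB wins, and within that class the
    highest-priority JOB is taken, mirroring the behavior described above."""
    for cls in classes:
        ready = queues.get(cls, [])
        if ready:
            job = max(ready, key=lambda e: e[0])
            ready.remove(job)       # like dequeuing the SYSJOBQUE entry
            return job
    return None                     # nothing ready: the initiator waits

queues = {"A": [], "B": [(3, "SMALLJOB"), (8, "BIGJOB")]}
picked = select_job(["A", "B"], queues)   # class A empty, so best of class B
```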

When an allowable JOB becomes available, the initiator obtains a region for the JOB (from the free area, equal in size to the REGION specified on the JOB card or STEP JCL card).  The region obtained is from the free area, also called the dynamic area.  The initiator then uses information from the RDR internal text to allocate DASD storage, tape drives, and other I/O devices.  It then attaches the first module of the program to be executed (thus creating the JOB STEP task), and waits until the JOB STEP completes.

When a JOB STEP is finished, the terminator (part of the initiator really, so that the whole unit is called an Initiator-Terminator) effectively cleans up, performing disposition of I/O devices (DISP parameter in JCL) and releasing the region which had been acquired for the JOB STEP.

During this process, JOB STEPs are essentially independent; i.e., they could require different sizes of REGION, and might execute in different locations.  Note that the Initiator-Terminator must also control the skipping of STEPs as directed by the JCL COND option.

During execution, SYSOUT datasets are written to DASD, to be printed/punched later.  When the last JOB STEP of a JOB completes, the INIT creates a work queue entry calling for the JOB's output to be printed/punched.

2.3      Writing System Output 

A program called a system output writer (WTR) can be started by the operator to transcribe output from DASD to printers or punches, or even to tapes to be printed/punched later.  Output can be grouped into classes, which can be written according to priority or otherwise treated differently as desired.

Comments on the process above: 

The process described above is quite flexible and general.  However, it does require a fair amount of time to set up any JOB, even a small one.  As such, it is quite satisfactory for any installation which runs JOBs requiring a fair amount of time, since then the setup time is negligible.  However, due to the use of OS DASDM for SPOOLed input and output, DASD space can become fragmented, disk head movement can become excessive, and much time can be used up allocating and de-allocating disk space.  Although OS/360 is quite reasonable in a commercial installation, or in one running a few large JOBs, it has too much overhead for University or other installations which often run many small JOBs (such as a development site).  For this reason, larger System/360 computers typically use some method to reduce the overhead of running small JOBs.  All of these methods involve "faking out" OS/360 in some aspect or other.  The method emphasized here (which happens to be the most popular one) is HASP.

3         Running an OS/360 HASP System 

In any OS/360 system, it is fairly typical to have one or more special JOBs in the system which are loaded before normal user JOBs, and which typically remain resident from one IPL to the next.  Such JOBs may control remote batch terminals or timesharing typewriter terminals (note: early 360 systems did not have CRTs), or provide any other service the installation desires.  Such JOBs are normally placed into the high-address sections of the free area.  When HASP is used, it is normally the first JOB submitted to OS/360, and it essentially takes over the system, even though it appears to OS/360 as just another JOB.

3.1      HASP Initialization 

There are two possible cases when starting HASP after an IPL.  A COLD start occurs when the system is completely empty, i.e., there are no JOBs already enqueued on disk which can be executed or printed/punched.  If there are disk packs on the system containing previously-read JOBs, the start is called a WARM start.  A WARM start normally occurs if the system was previously taken down on purpose, such as for systems programming, or if information was saved prior to a "crash".  A COLD start only occurs when the system has crashed badly, and it destroys the records of JOBs already SPOOLed onto disk; in this case, the JOBs must be read in again.  The operator can also request a COLD start via the operator reply to the HASP initialization message.

When HASP first gains control, it uses a special SVC call, which returns to HASP with storage protect key zero (able to write to any part of memory) and in supervisor state, also supplying HASP with some useful pointers to control blocks in the nucleus.  Note that this special SVC can only be called one time, since it locks itself after its first usage after an IPL.

A UCB (Unit Control Block) exists for every device connected to the computer system.  HASP now scans the UCBs, and essentially allocates to itself the following devices:

  1. All real unit-records devices (card reader, printers, and punches)
  1. All disk packs which have a volume serial label starting with SPOOL (the volser mask can be changed to any other 5 character string)

It also obtains effective control of the operator’s console(s), plus remote terminals if any. 

Finally, HASP modifies the SVC table (which contains pointers to the routines called for each specific SVC number), so that the following SVCs go to HASP, rather than to the original routines (HASP also saves the original addresses for its own later use):

            SVC 0             (EXCP – all Input/Output)
            SVC 34           (WTL – Write to Log)
            SVC 35           (WTO, WTOR – Write to the operator with/without reply) 
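Conceptually, this table patching amounts to saving the old entries and swapping in front-end routines, as in this illustrative sketch (Python standing in for the real assembler; the handler names are invented, and the SVC numbers are those listed above):

```python
# Toy SVC table: SVC number -> handler routine
def os_excp(req):  return "OS did I/O: " + req
def os_wtl(msg):   return "OS log: " + msg
def os_wto(msg):   return "OS console: " + msg

svc_table = {0: os_excp, 34: os_wtl, 35: os_wto}

# HASP saves the original entries so it can still invoke them itself...
saved = {n: svc_table[n] for n in (0, 34, 35)}

# ...then points the table at its own front-end routines.
def hasp_excp(req):
    # pseudo-device I/O would be simulated here; real I/O is passed on
    return saved[0](req)

svc_table[0]  = hasp_excp
svc_table[34] = lambda msg: saved[34](msg)   # HASP's WTL front end
svc_table[35] = lambda msg: saved[35](msg)   # HASP's WTO/WTOR front end

result = svc_table[0]("read card")   # caller sees normal EXCP behavior
```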

3.2      Running Normal User JOBs under OS/360 with HASP

  1. Input Stage – HASP continually reads cards from whatever card readers are active in the system.  It checks for JOB cards, performs various accounting checks on input JOBs, and transcribes the JOBs to disk.  In this stage, each JOB is split up into two sections: the JCL cards (with certain modifications), and the input data cards.  It enqueues the JOBs according to a priority scheme, which can be computed from many different sources of information.  These include category, CPU time, output, storage requirements, originating site of the JOB, and commands from the operator to change the priority of either single JOBs or entire groups of JOBs.  The disk allocation scheme used is quite efficient, and is described later.
  1. Execution Stage – HASP has the ability to control which JOBs may be executed; using various priority and storage requirements, it selects JOBs from its queue to be executed.  One OS RDR exists, permanently started to a card reader.  This card reader does not actually exist (i.e., it has a device address which does not correspond to a real card reader).  Since SVCs are intercepted by HASP anyway, HASP effectively selects a JOB and feeds it to the OS RDR.  The OS RDR includes an exit list, which allows it to call some routines after it has scanned each JCL card, but before the JCL card's data is actually recorded.  HASP is entered, and takes this opportunity to modify any JCL that it wishes to, for example, removing any REGION= requests on the JOB or EXEC cards.  HASP has special treatment for any system input or output data sets, as described below:

    //XXXXXXXX DD * or DATA:  The OS RDR would normally expect data to follow such a card, and would thus SPOOL it to disk itself.  HASP does not want this to occur, since it has already SPOOLed the data.  There is a large number of UCBs for pseudo card readers already in the system; HASP selects one of these UCBs which is not being used, and effectively changes the card so that it appears to refer to that pseudo device (UNIT=XXX).

    As a result, the OS RDR thinks that the data set will be read from UNIT=XXX, so it does not try to SPOOL the input.  In any case, the input no longer follows that JCL card, because HASP feeds the RDR only the JCL cards of a user JOB.  During this process, HASP connects the device address XXX to the specific input data set which had been previously SPOOLed.

    //XXXXXXXX DD SYSOUT=X:  HASP also has a large number of UCBs for nonexistent, pseudo printers/punches.  It does the same thing to this kind of card as it does to the DD * cards, except that it only allocates the pseudo devices, and will later save the output which is written to them.

    As soon as the RDR finishes reading a JOB, an initiator can immediately initiate it, since HASP chooses JOBs appropriately.  When the initiator chooses I/O devices, it finds that it can always allocate devices for unit-record I/O, since HASP has already checked to make sure a pseudo reader/printer/punch was available for each SYSIN or SYSOUT data set.

Finally, a JOB STEP of the user JOB executes.  When it wishes to read cards or print lines, it acts as though it was using a real device attached to the system, and OS/360 accepts this.  Whenever an SVC 0 is issued to request such I/O, HASP intercepts it.

HASP may be entered for any of the following reasons:

  1.  WTO, WTOR, and WTL – HASP adds its own processing as desired.

  2. I/O to DISK, Drum, tape, terminals, etc. – HASP does not interfere, but passes these on to the real I/O supervisor.

  3. I/O to real Unit-Record devices – These have probably been issued by HASP in the first place, so it passes control to the real I/O routines to let them perform the I/O.

  4. I/O to a pseudo device – These must be caused by a user program.  For input, HASP fetches the card images from disk into memory (if they are not already present), and feeds the requested card image(s) to the user program by MVCing them there (using the user protect key for safety).  For output, it blocks up output and eventually writes it to disk.  In all cases, HASP simulates the effect of having real card readers/printers/punches which differ only in operating at great speed: i.e., the effect on OS/360 is of having issued an I/O request and having it complete immediately.
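The four cases above can be expressed as a small dispatcher.  This is an illustrative sketch only: the device classification and all names are invented, not HASP's actual code.

```python
from collections import namedtuple

Device = namedtuple("Device", "kind")

def hasp_svc0(device, request, pass_to_os, simulate_pseudo):
    """Route an intercepted SVC 0 (EXCP) the way the text describes.

    `pass_to_os` stands for the real I/O supervisor, and
    `simulate_pseudo` for HASP's in-memory card/print simulation."""
    if device.kind in ("disk", "drum", "tape", "terminal"):
        return pass_to_os(device, request)       # case 2: HASP does not interfere
    if device.kind == "real_unit_record":
        return pass_to_os(device, request)       # case 3: HASP's own I/O
    if device.kind == "pseudo":
        return simulate_pseudo(device, request)  # case 4: completes at once
    raise ValueError("unknown device kind")

log = []
pass_to_os      = lambda d, r: log.append(("os", d.kind)) or "started"
simulate_pseudo = lambda d, r: log.append(("hasp", d.kind)) or "complete"

r1 = hasp_svc0(Device("tape"), None, pass_to_os, simulate_pseudo)
r2 = hasp_svc0(Device("pseudo"), None, pass_to_os, simulate_pseudo)
```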

During execution, HASP can also provide extra services, such as monitoring time used, output records, etc. 

  1. Output Stage – Print and punch – after a JOB has been executed, it enters the print queue, is printed, then enters the punch queue and is punched; only then is its disk space released, without the knowledge of OS/360, which believes the JOB disappeared after execution.  This allows JOBs to be saved across system IPLs, and permits such useful services as repeating output by operator control.

3.3      DASD Storage Management in HASP 

HASP manages its DASD storage quite efficiently, not only needing no access to DASD to allocate or de-allocate space, but also doing a good job of minimizing arm movement on moveable-head devices.  HASP requires the use of entire volumes for SPOOLing input/output.  The management of this storage works as follows:

A master cylinder bit-map is maintained in HASP.  This is a string of bytes in which each bit represents one cylinder on a SPOOL disk.  A one-bit represents a free cylinder, while a zero-bit shows that the given cylinder is allocated to some JOB.  HASP also remembers, for each disk, which cylinder was last referenced, thus always knowing the current position of the read/write heads.

Two key bit-maps exist for each JOB, one for SYSIN data and the other for SYSOUT data.  Whenever a cylinder is required for a JOB, HASP searches for a free one in the following fashion: 

  1. It first searches the master bit-map for a free cylinder at the current position of any read/write head, i.e. where it can read or write without moving a head.
  1. It then searches for a free cylinder at +1 from the current head position, then -1 from it, followed by +2, -2, etc., up to +8, -8 cylinders away from the current head position.
  1. If the above fails, it searches sequentially through all cylinders in the master bit-map.  When a cylinder is found, its bit is turned off (to zero) in the master bit-map, and turned on in the appropriate JOB bit-map.  The overall effect of this process is to minimize head movement.  When disk storage for a JOB is to be released, the de-allocation scheme is extremely fast and efficient: the JOB's bit-maps are simply ORed into the master bit-map, thus returning all of the cylinders to free storage.
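The search and release rules above can be sketched directly.  Python is used here for clarity; in real HASP the maps are genuine bit strings manipulated with machine instructions, and all names below are invented:

```python
class SpoolMap:
    """Toy model of HASP cylinder allocation: True = free cylinder."""
    def __init__(self, ncyl, head_pos=0):
        self.free = [True] * ncyl   # master "bit-map", one flag per cylinder
        self.head = head_pos        # last-referenced cylinder on this volume

    def allocate(self, job_map):
        """Find a free cylinder near the head, mark it allocated in the
        master map and owned in the JOB's map, and return its number."""
        # Offsets 0, +1, -1, +2, -2, ... +8, -8 from the head position,
        # then a full sequential sweep as a last resort.
        offsets = [0]
        for d in range(1, 9):
            offsets += [d, -d]
        candidates = [self.head + off for off in offsets]
        candidates += list(range(len(self.free)))
        for cyl in candidates:
            if 0 <= cyl < len(self.free) and self.free[cyl]:
                self.free[cyl] = False   # turn bit off in master map
                job_map[cyl] = True      # turn bit on in the JOB's map
                self.head = cyl          # heads are now positioned here
                return cyl
        return None                      # volume full

    def release(self, job_map):
        """De-allocation: OR the JOB's map back into the master map."""
        for cyl, owned in job_map.items():
            if owned:
                self.free[cyl] = True
        job_map.clear()

vol = SpoolMap(ncyl=200, head_pos=50)
sysin = {}                          # this JOB's SYSIN cylinder map
first = vol.allocate(sysin)         # allocated at the head: cylinder 50
second = vol.allocate(sysin)        # nearest neighbour: cylinder 51
vol.release(sysin)                  # both cylinders return to free storage
```

Note how release costs one pass over the JOB's own map and touches no disk at all, which is exactly why the text calls de-allocation "extremely fast and efficient."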

Credits:  This document came from a green-bar listing dating from about 1972.  It appears to be from the Pennsylvania State University.

The information on this site is the combined effort of a lot of people, please credit the authors if you use their information.
Please read the Disclaimer page for the restrictions, copyright, and other uses of the information contained on this site.
Last updated: January 25, 2004.