It is convenient to store similar information together and this is the idea behind both manual and computer files.
Data Field: The smallest unit of data is the data field. The data field consists of a group of related characters treated as a single entity.
Record: A collection of related data items treated as a single unit is called a record.
File: Records are grouped to form files. A file is a number of related records that are treated as a unit representing a particular transaction.
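The field-record-file hierarchy can be sketched in code. This is a minimal illustration, not from the text: the payroll field names and values are invented for the example.

```python
from dataclasses import dataclass

# A record groups related data fields into a single unit
# (hypothetical payroll example; field names are illustrative).
@dataclass
class EmployeeRecord:
    emp_no: int       # data field: employee number
    name: str         # data field: employee name
    basic_pay: float  # data field: basic pay

# A file is a collection of related records treated as a unit.
payroll_file = [
    EmployeeRecord(101, "A. Kumar", 42000.0),
    EmployeeRecord(102, "B. Singh", 38500.0),
]

print(len(payroll_file))  # number of records in the file
```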
Master File: Master files are perpetual files, i.e. apart from the time of their creation they are never empty. Further, they maintain information that remains constant over a relatively long period of time. When the information changes the master file may be updated. The normal methods of updating are adding, deleting or editing records in the file.
Transaction file: Transaction files are files in which data prior to the stage of processing is recorded.
The data in transaction records may be collected automatically or may be initially recorded on source documents and later converted to machine-readable format.
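The update cycle described above (adding, deleting or editing master records from a transaction file) can be sketched as follows. The record layout and the transaction codes 'A', 'D' and 'E' are assumptions made for the example, not taken from the text.

```python
# Master file modelled as a dictionary keyed by record number.
master = {101: {"name": "A. Kumar", "balance": 500.0},
          102: {"name": "B. Singh", "balance": 300.0}}

# Each transaction carries a code: 'A' = add, 'D' = delete, 'E' = edit.
transactions = [
    ("E", 101, {"balance": 650.0}),                    # edit a record
    ("D", 102, None),                                  # delete a record
    ("A", 103, {"name": "C. Rao", "balance": 100.0}),  # add a record
]

for code, key, data in transactions:
    if code == "A":
        master[key] = data
    elif code == "D":
        master.pop(key, None)
    elif code == "E":
        master[key].update(data)

print(sorted(master))  # keys remaining after the update run
```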
For information to be useful it must not only be recorded but also be easy to access and retrieve. File organisation may be:
* Serial
* Direct access
* Indexed sequential access
Serial Organisation: In serial file organisation, records are held and accessed in a predetermined sequence of keys. Records can be organised in numerical, alphabetical or chronological order.
Direct Access Organisation: Direct access files are stored on magnetic disks or other devices where each record is assigned a physical address.
Indexed Sequential Access: The computer records of an indexed sequential file are stored in the main storage portion of the file, which is divided into sections called segments. Usually, all segments are the same physical size e.g. one cylinder.
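Direct access organisation can be illustrated by mapping a record key to an "address". This is a toy sketch: real systems compute physical disk addresses, whereas here the address is simply a bucket index, with collisions handled by chaining.

```python
# Direct access: each record's key is hashed to a bucket "address".
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def address(key):
    return key % NUM_BUCKETS  # toy addressing: key mod bucket count

def store(key, record):
    buckets[address(key)].append((key, record))

def fetch(key):
    for k, rec in buckets[address(key)]:
        if k == key:
            return rec
    return None

store(1001, "record A")
store(1009, "record B")  # collides with 1001; handled by chaining
print(fetch(1009))
```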
Types of Processing Systems
Batch Processing System: In batch processing, data are gathered from time to time and collected into a group or batch before they are entered into a computer system and processed. When batch processing is used, the input data are typically recorded on source documents before being converted into a machine-readable form.
On-line Systems: An on-line system is one in which the system interacts directly with the user. As soon as the user inputs data, it is processed immediately. The system validates data at various points, and ensures that correct data is being entered.
Real-Time Processing: Real-time systems are on-line systems with tighter constraints on response time. In these systems the data is processed and results are generated fast enough to influence on-going activity.
Time Sharing: As its name implies, a time-sharing system has the ability to process several tasks apparently simultaneously. In the time-sharing mode, the computer switches from one job to the other at a rapid rate. The jobs are entered into the computer through different terminals connected to the computer by cables. After processing the first user's job, it proceeds to the second and then the third, for short bursts of time or 'time slices', before returning to the first user's job from where it was earlier suspended.
This cycle continues indefinitely: when one programme is finished it is replaced by another one.
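The round-robin cycle of time slices can be simulated in a few lines. This is a simplified sketch: the slice size and the job lengths are invented for illustration.

```python
from collections import deque

def time_share(jobs, slice_units=2):
    """jobs: {name: units of work remaining}. Returns completion order."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= slice_units             # run for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # suspend; resume later
        else:
            finished.append(name)            # job complete; replaced
    return finished

print(time_share({"job1": 5, "job2": 2, "job3": 4}))
```

Each job runs for one slice and, if unfinished, rejoins the back of the queue, so short jobs finish early while long jobs are repeatedly suspended and resumed.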
Integrity, Fallback and Recovery: With any system whether batch, on-line, or real-time, there is a danger that the system might break down. Certain procedures need to be followed to ensure that data is not lost, or the exact amount of data lost is known.
Integrity: Features of the system which make it less likely to fail, are classified as 'integrity'. This is the most vital part, since system crashes may result in the loss of data and time.
Fallback: Fallback procedures are created for use when the system fails. For example, in some airline enquiry systems, when the system fails the fallback procedure allows the terminals to keep collecting data, though without validation. This allows some work to go on even though the main computer is down.
Recovery: This is the process of bringing the computer back into full use after the system fails. It involves restoring the data to the state it was in before the breakdown.
Software: Software is the part of the computer system which enables the hardware to operate. Computer software can be divided into two major classifications:
* System software
* Application software
System software includes the computer programmes that run a computer system itself or that assist a computer in running application programmes. It also includes the documentation that describes how these programs operate. System software consists of:
* Operating system
Operating system: An Operating System (OS) is an integrated set of specialized programs which permit the continuous operation of a computer from one program to the next with minimum amount of operator intervention.
Through the OS the computer can supervise its own operations by automatically calling in the application programs, translators and other special service programs, and managing the data to produce the desired output.
Multiprogramming: Multiprogramming is the process of combining hardware and software to create a situation in which more than one program may be held in main store at any one time. It is thus possible to process several tasks simultaneously. The main objective is to minimise unused CPU time.
Multiprocessing: Multiprocessing is the execution of two or more different programs at the same time.
Typically, in multiprocessing, multiple CPUs sharing a common memory are used. Instructions from different and independent programs can be processed at the same instant by different processors. On the other hand, the processors may simultaneously execute different instructions from the same programme.
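Multiprocessing can be sketched with a pool of worker processes executing instructions from the same program at the same instant. This is an illustrative sketch using Python's standard multiprocessing module; the workload is invented.

```python
from multiprocessing import Pool

def square(n):
    # work that independent processors can execute simultaneously
    return n * n

def parallel_squares(values):
    # A pool of two worker processes shares the work of one program,
    # as in tightly coupled multiprocessing with a common memory.
    with Pool(processes=2) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3, 4]))
```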
Loosely Coupled Multiprocessing: In it a collection of relatively autonomous systems are used. Each CPU has its own main memory and input/ output channels.
Functionally Specialised Processors: Such as an I/O processor. There is a master, general-purpose CPU. Specialised processors provide services to the CPU and are controlled by it.
Tightly Coupled Multiprocessing: In it a set of processors share a common memory.
They are controlled by the operating system.
Parallel Processing: Parallel processors are tightly coupled multiprocessors that can execute a single job in parallel.
A number of desirable features of a comprehensive operating system are:
* Job control language
* Failure and recovery
* File security
* Monitoring system status
* Multi-access control
Job Control Language: During the processing of application programs, the operating system provides automatic job-to-job linkages. These linkages are handled by a job control program. In some systems macro commands (macros) may be used to supplement the command language. The macros can be either system or user defined.
Failure and Recovery: Invalid conditions and fault conditions cause interrupts to be raised, to signal the operating system.
The operating system is called in if, for example:
* An invalid instruction is encountered in a program.
* A program attempts to access storage areas reserved for another.
* An overflow occurs in an allotted storage area during arithmetic calculations.
According to its instructions, the operating system may either halt the process and signal the operator or switch control to error recovery routines provided by the user.
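The choice described above, between halting with a signal to the operator and switching control to a user-provided recovery routine, can be sketched as follows. The overflow limit and the recovery routine are hypothetical, invented for the example.

```python
LIMIT = 10**6  # assumed size of the allotted storage area

def recover_overflow(operands):
    # hypothetical user-supplied error-recovery routine:
    # substitute the largest representable value
    return LIMIT

def run_calculation(a, b, recovery=None):
    try:
        result = a * b
        if result > LIMIT:
            raise OverflowError("overflow in allotted storage area")
        return result
    except OverflowError:
        if recovery is not None:
            return recovery((a, b))  # switch control to recovery routine
        raise                        # otherwise halt and signal the operator

print(run_calculation(2000, 400))                      # within range
print(run_calculation(2000, 600000, recover_overflow)) # recovered
```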
Dumping: This is a facility whereby the contents of specified storage areas are written out as output.
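A dump of a storage area is conventionally written out in hexadecimal. Here is a minimal sketch, with a bytes buffer standing in for an area of main store; the layout (offset plus eight bytes per line) is one common convention, not prescribed by the text.

```python
storage = bytes(range(64, 80))  # stand-in for an area of main store

def dump(area, width=8):
    # write out the contents of the storage area, width bytes per line
    lines = []
    for offset in range(0, len(area), width):
        chunk = area[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        lines.append(f"{offset:04x}: {hex_part}")
    return lines

for line in dump(storage):
    print(line)
```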
File Security: The security of the system may also be monitored. Any attempt to use unauthorised passwords from on-line terminals may be recorded.
It is possible for files to be either private to a particular user or shared by a number of users under flexible controls. The operating system must provide various safeguards:
* Safety from accidental or malicious access by other users.
* Safety from accidental damage caused by the owner of the files.
* Access limited to either the owner of the files or a specified user or group of users.
* Safety from hardware or software malfunction.
These safeguards can be achieved through:
* The use of passwords.
* Allowing the owner of the files to specify which other users may access his file.
* Mode of access being specified - read only, write, append.
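The safeguards listed above can be sketched as a simple access check combining a password, owner-specified sharing, and an access mode. The file name, users and rules here are illustrative assumptions.

```python
# Hypothetical file table: owner, password, and owner-granted access
# modes for other users (read only, write, append).
files = {
    "payroll.dat": {
        "owner": "alice",
        "password": "s3cret",
        "access": {"bob": {"read"}},  # alice grants bob read-only access
    },
}

def may_access(filename, user, password, mode):
    entry = files[filename]
    if password != entry["password"]:
        return False                  # unauthorised password: refused
    if user == entry["owner"]:
        return True                   # the owner has full access
    return mode in entry["access"].get(user, set())

print(may_access("payroll.dat", "bob", "s3cret", "read"))   # granted
print(may_access("payroll.dat", "bob", "s3cret", "write"))  # refused
```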
Logging: The OS keeps a log of all system actions that relate to a particular user's job. On larger computer systems it also maintains a log of all the jobs that are run; these jobs are clocked in and out of the system. When a program fails, this log helps in locating the cause of the failure.
System Scheduling: Multiple tasks are scheduled to balance input/ output and processing requirements. This often involves overlapping input/ output and processing operations. The operating system allocates specific areas of storage to each program. When a program is completed the remaining programs are re-positioned in storage and a new program or programs are added to take up the available space.
Monitoring System Status: The OS constantly monitors the status of the computer system during processing operations.
* It may respond to user "help" commands, and supply information about its function and operation.
* It also directs the computer to send messages to the operator's terminal when I/O devices need attention, when errors occur in the job stream, or when other abnormal conditions arise.
* In a larger computer system, the computer does not wait for the operator to take appropriate action. Rather, the message is printed and control passes to the next job.
Multi-access Control: In larger computers the processing power can often be utilized more efficiently if a number of individuals are able to access them at the same time. With these multiuser systems, the OS
* Allocates limited CPU time among users.
* Separates job requests.
* Must avoid mix-ups between the jobs of different users.
Software Utilities: These are programs or routines which carry out certain procedures which are common to virtually all applications.
Utility software performs needed services such as
* Sorting records into a particular sequence for processing.
* Merging several sorted files into a single large updated file.
* Transferring data from one I/O device to another.
* Printing of files held on backing storage.
* Printing the contents of main memory.
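The first two utilities listed above, sorting records into sequence and merging several sorted files into one, can be sketched using the standard library. The files are modelled here as in-memory lists of (key, data) records; the keys are invented.

```python
import heapq

file_a = [("1003", "X"), ("1001", "Y")]  # unsorted input files
file_b = [("1002", "Z")]

sorted_a = sorted(file_a)  # sort utility: records into key sequence
sorted_b = sorted(file_b)

# merge utility: several sorted files combined into one sorted file
merged = list(heapq.merge(sorted_a, sorted_b))

print([key for key, _ in merged])
```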
Translating Programs: Translating programs transform instructions written in humanly convenient form to the machine language codes required by computers. These translating programs are loaded into the computer, where they control the translating process. Compilers and interpreters are used to translate programs to machine language codes.
Compiler: A compiler translates a program written in a high-level language to executable machine instructions. The compiler treats source-program instructions as data. Each instruction is accessed in turn and translated into one or more lines of object code in machine language.
Interpreter: Some high-level programming languages use an interpreter instead of a compiler to translate instructions into machine code. Instead of translating the source program and permanently saving the object code for future use, the interpreter loads the source program into the computer along with the data to be processed. When the program is executed, the interpreter accesses the first instruction, translates it into one or more lines of machine code and, if possible, executes it. It then accesses the next instruction and repeats the process. This continues until all the source program instructions have been translated and executed.
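The fetch-translate-execute loop of an interpreter can be sketched with a toy instruction set. The two instructions, SET and ADD, are invented for the example and stand in for real source-language statements.

```python
def interpret(source, env):
    # access one instruction at a time, translate it, and execute it
    # immediately before moving on to the next instruction
    for line in source:
        op, name, value = line.split()
        if op == "SET":
            env[name] = int(value)
        elif op == "ADD":
            env[name] += int(value)
    return env

program = ["SET x 10", "ADD x 5"]
print(interpret(program, {}))
```

Unlike a compiler, no object program is saved: each run of the source program repeats the whole translate-and-execute cycle.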
Application Programs: An application program is designed to handle a particular task required by the end-user. It handles all aspects of a routine application, including error situations and the display of menus to aid the user, making it possible for a user with very little computer expertise to process the application.