
TCS Aspire


3.1. Assembler

Introduction
System software refers to the files and programs that make up the computer's operating system.
System files consist of libraries of functions, system services, drivers for printers and other hardware,
system preferences and various configuration files.
System Software consists of the following programs:

Assemblers

Compilers

File Management Tools

System Utilities

Debuggers

The system software is installed on the computer when the operating system is installed. The
software can be updated by running programs such as "Windows Update" for Windows or
"Software Update" for Mac OS X. Application programs are used by end users, whereas system
software is not meant to be run by the end user. For example, while Web browsers like Internet
Explorer are used every day, users don't have to use an assembler program (unless the
user is a computer programmer).
System software runs at the most basic level of the computer and is hence called "low-level"
software. The user interface is generated by the system software and allows the operating system to
interact with the hardware.
System software is the interface between the computer hardware and the application software.

Assembler
An assembler is a program that converts basic computer instructions into a pattern of bits. These
bits are used by the computer's processor to perform its basic operations. Some people call these
instructions assembler language, while others use the term assembly language. This is how it
works:
Most computers come with a specified set of very basic instructions that correspond to the basic
machine operations that the computer can perform. For example, a "Load" instruction moves a
string of bits from a location in the processor's memory to a special holding place called a
register. In computer architecture, a processor register is a small amount of storage available on
the CPU whose contents can be accessed more quickly than main memory. Most modern
computer architectures operate on the principle of shifting data from main memory into registers,
operating on the data and then moving the result back into main memory. Assuming the processor
has at least eight numbered registers, the following instruction would move the value (a string of
bits of a certain length) at memory location 2000 into the holding place called register 8:
L 8,2000

The programmer can write a program using a sequence of assembler instructions.

These assembler instructions, which are also known as the source code or source
program, are then specified to the assembler program when it is started.

The assembler program takes each statement in the source program and generates a
corresponding bit stream or pattern (a series of 0s and 1s of a given length).

The output of the assembler program is called the object code or object program relative
to the input source program. The object program consisting of a sequence of 0's and 1's
is also known as machine code.

This object program can be run (or executed) whenever required.
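The steps above can be sketched as a toy assembler in Python. Everything here is invented for illustration: a hypothetical machine with a 4-bit opcode, a 4-bit register field and an 8-bit address field, not any real instruction set.

```python
# A toy assembler for a hypothetical machine: each mnemonic maps to a
# fixed opcode, and each statement assembles to one 16-bit word.
# (The instruction set and encoding are invented for illustration only.)

OPCODES = {"L": 0x1, "ST": 0x2, "ADD": 0x3}  # hypothetical opcodes

def assemble_line(line):
    """Translate one 'MNEMONIC reg,addr' statement into a 16-bit word."""
    mnemonic, operands = line.split()
    reg, addr = (int(x) for x in operands.split(","))
    # Pack: 4-bit opcode | 4-bit register | 8-bit address (toy layout).
    return (OPCODES[mnemonic] << 12) | (reg << 8) | (addr & 0xFF)

# The "source program" is a sequence of assembler instructions; the
# output object code is the corresponding sequence of bit patterns.
source = ["L 8,200", "ADD 8,4", "ST 8,201"]
object_code = [assemble_line(s) for s in source]
print([f"{w:016b}" for w in object_code])
```

The list of 16-bit words printed at the end is the "object program": a machine-readable sequence of 0s and 1s ready to be executed.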

3.2. Assembly Language


The native language of the computer is assembly language. The processor of the computer
understands machine code (consisting of 1s and 0s). To produce a machine code program, it is
first written in assembly language, and an assembler is then used to convert the assembly
language program into machine code.

An assembly language program consists of mnemonic codes, which are easy to remember because
they resemble English words. There is a mnemonic for each of the different machine code
instructions that the machine understands. A mnemonic is an abbreviation of the actual
instruction, for example, ADD for addition and SUB for subtraction. Example of an
assembly language program:
MOV EAX,2 ; set eax register to 2 (eax = 2)
SHL EAX,4 ; shift the value in eax left by 4 bits
MOV ECX,16 ; set ecx register to 16
SUB EAX,ECX ; subtract ecx from eax
An assembler converts this set of instructions into a series of 1s and 0s, also known as an
executable program, that the machine can understand.

Advantages of Assembly language

Assembly language can be optimized extremely well as it is an extremely low-level
language. Therefore assembly language is used in applications which require utmost
performance.

Assembly language can be used for communication with the machine at the hardware
level and hence it is often used for writing device drivers.

Another advantage of assembly language is the size of the resulting programs. Since
conversion from a higher level by a compiler is not required, the resulting programs can
be exceedingly small.

3.2.1. Assembly Language Programming


The assembler is a program which converts the assembly language source program into a
format that can be run on the processor. Each mnemonic is replaced by the corresponding
machine code instruction, a binary or hex value.
Example

Advantages of using mnemonics are:

Mnemonics are easier to understand than hex or binary values.

It is less likely that the programmer will make an error.

Mnemonics are easy to remember.

Assembly language statements are written one on each line. A machine code program consists
of a sequence of assembly language statements in which each statement contains a mnemonic.
A line of an assembly language program can contain the following four fields:
i. Label
ii. Opcode
iii. Operand
iv. Comments
The label field is optional. A label is an identifier or text. Labels are used in programs to reduce
reliance upon programmers remembering where data or code is located. A label is used to refer
to the following:

Memory location

A data value

The address of a program or a sub-routine or a portion of code.

The maximum length of a label differs for different types of assemblers. Some assemblers accept
labels up to 32 characters long, others only four characters. A label begins with a valid character
(A..Z) and, when declared, is suffixed by a colon. Consider the following example.
LOOP: LDAA #24H
Here, the label LOOP is equal to the address of the instruction LDAA #24H. The label can be
used as a reference in a program, as shown below
JMP LOOP
When the above instruction is executed, the processor will jump to the instruction associated
with the label LOOP, i.e. LDAA #24H. When a label is referenced later in a program, it is
referenced without the colon suffix.
An advantage of using labels is that inserting or re-arranging code statements does not require
reworking the actual machine instructions; it only requires a simple re-assembly. In hand-coded
machine code, such changes can take hours to perform.
The opcode field consists of a mnemonic. Opcode is the operation code, i.e. a machine code
instruction. An opcode may also have additional information in the form of operands. Operands are
separated from the opcode by a space.
Operands consist of additional information or data that the opcode requires. Operands are used
to specify

Constants or labels

Immediate data

Data present in another accumulator or register

An address

Examples of operands
LDAA 0100H ; two byte operand
LDAA LOOP ; label operand
LDAA #1 ; immediate operand (i.e. a constant value as operand)
The comment field is optional, and is used by the programmer to explain how the coded program
works. The comments are prefixed by a semi-colon. Comments are ignored by the assembler
when the instructions are generated from the source file.
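The four fields described above can be separated mechanically. Below is a minimal Python sketch of such a field splitter, assuming the simplified 'LABEL: OPCODE OPERAND ; comment' convention used in the examples; a real assembler's parser handles many more cases.

```python
def parse_statement(line):
    """Split one assembly statement into its four fields.
    Field layout (label and comment optional) follows the convention
    described above: 'LABEL: OPCODE OPERAND ; comment'."""
    comment = None
    if ";" in line:                      # comments are prefixed by ';'
        line, comment = line.split(";", 1)
        comment = comment.strip()
    label = None
    if ":" in line:                      # a declared label ends in ':'
        label, line = line.split(":", 1)
        label = label.strip()
    parts = line.split()
    opcode = parts[0] if parts else None
    operand = parts[1] if len(parts) > 1 else None
    return {"label": label, "opcode": opcode, "operand": operand,
            "comment": comment}

print(parse_statement("LOOP: LDAA #24H ; load immediate"))
```

Running it on the earlier example yields label `LOOP`, opcode `LDAA`, operand `#24H` and the comment text, with missing fields reported as `None`.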

3.3. Loaders and Linkers


In a computer operating system, a loader is a component that locates a program (which can be
an application or a part of the operating system itself) in offline storage (e.g. a hard disk),
loads it into main storage (also called random access memory), and gives that program control of
the computer (i.e. allows it to execute its instructions).
A program that is loaded may itself contain components that are not initially loaded into main
storage, but are loaded if and when their logic is required. In a multitasking operating system, a
program that is known as a dispatcher juggles the computer processor's time among different
tasks and calls the loader when a program associated with a task is not already in the main
storage. (Program here means a binary file that is the result of a programming language
compilation or linkage editing or some other program preparation process).
A linker, also known as a link editor, is a computer program that takes one or more object files
generated by a compiler and combines them into a single executable program.
Computer programs comprise several parts or modules, but all these parts/modules need not be
contained within a single object file. In such cases they refer to each other by means of symbols.
An object file can have three kinds of symbols:

Defined symbols, which can be called by other modules

Undefined symbols, which call symbols that are defined in other modules.

Local symbols which are used internally within the object file to facilitate relocation.

For most compilers, each object file is the result of compiling one input source code file. If a
program comprises multiple object files, then the linker combines these files into a unified
executable program by resolving the symbols as it goes along.
Linkers can take objects from a collection called a runtime library. A runtime library is a collection
of object files containing machine code for any external function used by the program. The linker
copies this machine code into the final executable output. Some linkers do not include the entire
library in the output; they include only the symbols that are referenced from other object files or
libraries. Libraries exist for diverse purposes, and the system libraries are usually linked in by
default.
The linker also arranges the objects in a program's address space. This involves relocating code
that assumes a specific base address to another base address. Since a compiler seldom knows
where an object will reside, it assumes a fixed base location (for example, zero). The relocation
of machine code may involve re-targeting of loads, stores and absolute jumps.
The executable output by the linker may need another relocation pass when it is finally loaded
into memory (just before the execution). This relocation pass is usually omitted on hardware
offering virtual memory in which every program is put into its own address space and so there is
no conflict even if all programs load at the same base address. This relocation pass may also be
omitted if the executable is a position independent executable.
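The symbol-resolution step can be sketched as follows. The object files, symbol names and library contents here are all invented; each "object file" is reduced to just its defined and undefined symbol sets.

```python
# A sketch of how a linker resolves symbols across object files.
# Each "object file" is modeled as a dict of defined and undefined
# symbols (the file and symbol names are invented for illustration).

objects = {
    "main.o": {"defined": {"main"}, "undefined": {"helper", "printf"}},
    "util.o": {"defined": {"helper"}, "undefined": set()},
}
library = {"printf"}  # symbols available from a runtime library

def link(objects, library):
    defined = set().union(*(o["defined"] for o in objects.values()))
    needed = set().union(*(o["undefined"] for o in objects.values()))
    unresolved = needed - defined
    # Pull in only the library symbols that are actually referenced,
    # like linkers that do not copy the entire library into the output.
    from_library = unresolved & library
    unresolved -= from_library
    return from_library, unresolved

lib_syms, missing = link(objects, library)
print("taken from library:", lib_syms)
print("unresolved:", missing)
```

Here `helper` is resolved between the two object files, `printf` is pulled from the library, and anything left over would be reported as an unresolved symbol.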

3.4. Compilers
A compiler is a special program that processes statements written in a particular programming
language and turns them into machine language or code that a computer's processor uses. A
programmer writes language statements in a language such as Pascal or C one line at a time
using an editor. The file created contains the source statements or source code. The programmer
then runs the appropriate compiler for the language, specifying the name of the file that contains
the source statements.
When executing (running), the compiler first parses (or analyzes) the language statements
syntactically one after the other and then builds the output code in one or more successive
stages or "passes", making sure that statements that refer to other statements are referred to
correctly in the final code. The output of the compilation is called object code or sometimes an
object module. The object code is machine code that the processor can execute, one instruction
at a time.
The Basic Structure of a Compiler

In the five stages of a compiler, the high level language is translated to a low level language
which is generally closer to that of the target computer. Each stage of the compiler fulfills a single
task and has one or more classic techniques for implementation. The following are the five
stages of a compiler:

Lexical Analyzer: Analyzes the source code, removes "white space" and comments,
formats it for easy access by creating tokens, then tags language elements with type
information and begins to fill the information in the SYMBOL TABLE. The Symbol Table is
a data structure that contains information about symbols and groups of symbols in the
program being translated.

Syntactic Analyzer: Analyzes the tokenized code for structure, groups symbols into
syntactic groups, tags groups with type information.

Semantic Analyzer: Analyzes the parsed code for meaning, fills in assumed or any
missing information and tags the groups with the meaning.

Code Generator: Linearizes the qualified code and produces the object code.

Optimizer: Checks the object code to determine whether there are more efficient means
of execution.

3.5. Interpreters
An interpreter is a program that executes instructions written in a high-level language. There are two
methods to run programs written in a high-level language. The most common method is to
compile the program; the other method is to pass the program through an interpreter.

An interpreter translates high-level instructions into an intermediate form and then executes them. In
contrast, a compiler translates high-level instructions directly into machine
language. Compiled programs generally run faster than interpreted programs. One of the
advantages of using an interpreter is that it does not need to go through the compilation stage,
during which machine instructions are generated. Compilation can be time-consuming
if the program is very long. The interpreter, on the other hand, immediately executes
high-level programs. Hence interpreters are sometimes used during the development of a
program, when the programmer wants to add small sections at a time and test them quickly. In
addition, interpreters are often used in education because they allow students to program
interactively.

Advantages of using interpreter

Execution is done in a single stage.

Compilation stage is not required.

Alteration of code is possible during runtime.

Really useful for debugging code.

Helps in interactive code development.

Interpreter vs Compiler

The primary difference between a compiler and interpreter is the way in which a program
is executed. The compiler converts the source code to machine code and then saves it
as an object code before creating an executable file for the same. The compiled program
is executed directly using the machine code or the object code. But an interpreter does
not convert the source code to an object code before execution.

An interpreter executes the source code line by line, and conversion to native code is
performed while execution is going on (at runtime). Hence the run time
required for an interpreted program will be high compared to a compiled program.

Even though the run time required for an interpreted program is higher, execution using
an interpreter has its own advantages. For example, interpreted programs can modify
themselves at runtime by adding or changing functions. A compiled program has to be
recompiled fully even for small modifications done in the program. But in the case of
an interpreter there is no such problem (only the modified section needs to be re-interpreted
- refer Figure 3.6).
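As a minimal sketch of line-by-line execution, the toy interpreter below handles an invented assignment-only language: each line is translated and run immediately, and no object code is ever saved.

```python
# A toy line-by-line interpreter for a tiny assignment language,
# illustrating how translation and execution are interleaved at
# runtime. The language itself is invented for this sketch.

def interpret(program):
    env = {}
    for line in program.splitlines():
        line = line.strip()
        if not line:
            continue
        name, expr = (s.strip() for s in line.split("=", 1))
        # Each line is translated and executed immediately; unlike a
        # compiler, no object code is ever produced or saved.
        env[name] = eval(expr, {}, env)
    return env

state = interpret("""
x = 2
y = x * 10
z = x + y
""")
print(state)  # {'x': 2, 'y': 20, 'z': 22}
```

Because each line runs as soon as it is read, changing one line only requires re-running the program from that point, with no separate compile step.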

4.1. Basic Resources Managed by Operating Systems

Introduction
An operating system is a collection of programs. It is software that supports a computer's basic
functions. It acts as an interface between applications and computer hardware, as shown below in
figure 4.1. Examples of operating systems are Windows, Linux, MacOS, etc.

The operating system also manages the computer hardware. It does so by creating a uniform set
of functions or operations that can be performed on various classes of devices (for instance, read
x bytes at address y from a hard drive of any type, where x is the number of bytes to read).
These general functions rely on a layer of drivers that provide specific means to carry out
operations on specific hardware. Drivers are usually provided by the manufacturers, but the OS
must also provide a way to load and use drivers. The OS must detect the device and select an
appropriate driver if several are available.
In the figure 4.2 below, OS interface is a set of commands or actions that an operating system
can perform and a way for a program or person to activate them.
The operating system is the core software component of a computer. It performs many functions and
acts as an interface between the computer and the outside world. A computer consists of several
components including the monitor, keyboard, mouse and other parts. The operating system provides an
interface to these parts through 'drivers'. This is why, when we install a new printer or
another piece of hardware, the system sometimes asks us to install more software called a driver. A brief on
device drivers is given in section 4.2.4, Device Management.
The operating system basically manages all internal components of a computer. It was developed to
make better use of the computer system.

4.1. Basic Resources Managed By Operating System


Following are the basic resources that are managed by operating system:

Memory space (Memory management)

CPU time (Process management)

Disk space (File system management)

Input-output device (Device management)

4.1.a. Basics Of Memory Management


Any program needs to be loaded in memory (RAM) before it can actually execute. Random
Access Memory (RAM) is a type of computer memory that can be accessed randomly, i.e. any
byte of memory can be accessed without touching or traversing through the preceding bytes.
These days, RAM is available in various storage capacities (256MB, 512MB, 1GB, 2GB,
4GB, 6GB and 8GB sizes).
Memory management deals with managing computer's primary memory. It decides the amount of
memory to be allocated to a particular program. It also keeps track of free or unallocated as well
as allocated memory. Memory management applies only to RAM.
The sole aim of memory management done by the operating system is to utilize the available
memory effectively. In this process, fragmentation should be minimized.

i. Fragmentation

Fragmentation occurs when there are many free small blocks in memory that are too small to
satisfy any request. In computer storage, fragmentation is a phenomenon in which storage space
is used inefficiently, resulting in reduced capacity as well as performance. Fragmentation also
leads to wastage of storage space. The term also refers to the wasted space itself.
There are three different types of fragmentation:

External fragmentation

Internal fragmentation

Data fragmentation

These types can be present in isolation or in conjunction. When a computer program is finished with a
partition, it can return the partition to the system. The size of a partition and the amount of time for which a
partition is held by a program vary. During its life span, a computer program can request and
free many partitions of memory.
When a program starts, there are long, contiguous free memory areas. Over time and
with use, these long contiguous regions become fragmented into smaller contiguous areas.
Eventually, it may become impossible for the program to obtain large partitions of memory.
Internal Fragmentation
Internal fragmentation is the space wasted inside an allocated memory block because of
restrictions on the allowed sizes of allocated blocks. Allocated memory can be slightly larger than
the requested memory; the size difference is memory internal to the partition that is never used.
Internal fragmentation is difficult to reclaim; a design change is the best way to remove it.
For example, in dynamic memory allocation, memory pools cut internal
fragmentation by spreading the space overhead over a larger number of objects.
External Fragmentation
External fragmentation happens when a dynamic memory allocation algorithm allocates some
memory and a small piece is left over which cannot be used effectively. The amount of usable
memory reduces drastically if too much external fragmentation occurs. The total memory space
available is enough to satisfy a request but is not contiguous, and so is wasted. Here the term
'external' indicates that the unusable storage is outside the allocated regions.
For example, consider a situation wherein a program allocates three contiguous blocks of
memory and then frees the middle one. The memory allocator can use this free block of memory
for future allocations. However, this free block cannot be used if the memory to be
allocated is larger than the free block. External fragmentation also occurs in file systems
as many files of different sizes are created, resized and deleted. The effect is even worse if
a file which is divided into many small pieces is deleted, because this leaves similarly small
regions of free space.
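A short simulation makes external fragmentation concrete. The first-fit allocator and the block sizes below are invented for illustration.

```python
# A sketch of external fragmentation using a first-fit allocator over
# a list of (start, size, free) blocks. Sizes are arbitrary units.

def allocate(blocks, size):
    """First-fit: claim the first free block large enough, splitting it."""
    for i, (start, blk_size, free_flag) in enumerate(blocks):
        if free_flag and blk_size >= size:
            blocks[i] = (start, size, False)
            if blk_size > size:  # remainder becomes a smaller free block
                blocks.insert(i + 1, (start + size, blk_size - size, True))
            return start
    return None  # no single free block is big enough

def free(blocks, start):
    blocks[:] = [(s, sz, True if s == start else f) for s, sz, f in blocks]

memory = [(0, 100, True)]   # one 100-unit region, initially all free
a = allocate(memory, 30)    # [0, 30) used
b = allocate(memory, 30)    # [30, 60) used
c = allocate(memory, 30)    # [60, 90) used
free(memory, b)             # the middle block is freed
# 40 units are free in total (30 + 10), but no contiguous 40-unit hole
# exists, so this request fails: that is external fragmentation.
print(allocate(memory, 40))  # None
```

After freeing the middle block, the total free space would satisfy the request, but because it is split across two separate holes the allocation fails.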
Data Fragmentation
Data fragmentation occurs when a collection of data in memory is broken up into many pieces
that are not close together. It is typically the result of attempting to insert a large object into
storage that has already suffered external fragmentation.

ii. Memory Management Techniques

Single Contiguous Allocation

Partitioned Memory Management Scheme (refer Glossary for scheme definition)

Paged Memory Management

Segmented Memory Management

Single Contiguous Allocation


Single Contiguous Allocation is the simplest form of memory management. In this type of
memory management, all memory (with the exception of a small portion reserved for running the
operating system) is available for a single program to run. Only one program is loaded into
memory at a time, so the rest of the memory is generally wasted. The simplest example of this
type of memory management is MS-DOS. The advantages of single contiguous memory allocation
are fast sequential and direct access, good performance and a minimal number of disk seeks.
Its disadvantage is fragmentation.

Partitioned Memory Management Scheme


This type of memory management divides primary memory into multiple memory partitions,
usually contiguous areas of memory. Memory management here consists of allocating a partition
to a job or program when it starts and deallocating it when the job or program ends. A typical
scenario of partitioned memory management is depicted in figure 4.4 below. This type of
memory management also needs some hardware support.
The figure 4.4 below shows multiple memory partitions. Each partition runs a process (refer
Glossary). Process 1, Process 2, etc. are the processes allocated to particular partitions in
memory. Once a process is completed, its partition becomes empty. A is the base register address.
The base register contains the lowest memory address a process may refer to. The length of
a partition is obtained from the bounds register, which stores the upper and lower bounds on addresses in
a time-sharing environment. A time-sharing environment refers to the concurrent use of a computer by
more than one user; the users share the computer's time. Time sharing is used
synonymously with multi-user.

Partitioned memory management is further divided into two types of partitioning schemes:

Fixed memory partitioning (Static partitioning)

Main memory is divided into a number of static partitions when the computer system starts. A
process may be loaded into a partition of equal or greater size. The advantage of this type of
partitioning is that it is simple to implement and has little operating system overhead.
The disadvantages are inefficient use of memory due to internal fragmentation, and the fact
that the maximum number of active processes is fixed.

Variable memory partitioning (Dynamic partitioning)

Partitions are created dynamically, so that each process is loaded into a partition of exactly
the same size as that process. The advantage of this type of partitioning is that there is no internal
fragmentation, and main memory is used more efficiently. The disadvantage is inefficient use of the
processor due to the need for compaction to counter external fragmentation.
For further details on types of partitioning please refer below link:
http://www.csbdu.in/econtent/Operating System/unit3.pdf
Paged Memory Management
This type of memory management divides the computer's primary memory into fixed-size units
known as page frames. The program's address space is divided into pages of the same size.
Usually, with this type of memory management, each job runs in its own address space.
The advantage of paged memory management is that there is no external fragmentation, but
there is a small amount of internal fragmentation.
Below is an illustration representing paged memory management. It depicts how logical page
addresses in memory are mapped to physical addresses. The operating system uses the base
address as a measure to find addresses. Base address means the starting address of a particular
memory block. The address generated by the CPU according to the program is called the logical
address. This address is added to the base address to form the physical address. To translate a
logical address into the corresponding physical address, we divide the logical address into a page
number and an offset. Offset refers to a value added to the base address to produce a second
address.
For example, if B represents address 200, then the expression B+10 would signify address
210; 10 in the expression is the offset. Specifying addresses using an offset is called relative
addressing because the resulting address is relative to some other point. Offset is also known as
displacement. In the figure below, p represents the page number and d the offset. We use the page
number as an index into the page table, and the entry gives the corresponding frame number. 'f' in
the page table represents the frame number. The frame number is concatenated with the offset to get
the corresponding physical address.
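The translation just described can be sketched in a few lines, assuming an illustrative 1 KB page size and an invented page table.

```python
# Logical-to-physical translation with a page table, following the
# scheme above: a logical address splits into page number p and
# offset d, and the page table maps p to a frame number f.
# The page size and table contents are illustrative only.

PAGE_SIZE = 1024                 # 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}  # page number p -> frame number f

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)  # split into p and d
    f = page_table[p]                          # index the page table
    return f * PAGE_SIZE + d                   # concatenate f with d

# Logical address 2100 lies in page 2 (offset 52), which maps to
# frame 7, so the physical address is 7*1024 + 52 = 7220.
print(translate(2100))  # 7220
```

With power-of-two page sizes, real hardware performs the same split with bit shifts and masks rather than division.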

Segmented Memory Management


Segmented memory means the division of the computer's primary memory into segments or
sections, as shown in figure 4.6. In this type of memory management, a memory reference
includes a value that identifies a segment and an offset within that
segment. We may create different segments for different program modules, or for different types
of memory usage such as code and data segments. Certain memory segments may even be
shared between programs. Segmented memory does not provide the user's program with a linear
and contiguous (or continuous) address space. A linear address space is a memory addressing
scheme in which the whole memory can be accessed using a single address that fits in a single
instruction.
Segments are areas of memory that usually correspond to a logical grouping of information, such
as a code procedure or a data array (refer Glossary). Segmentation allows better access
protection than other memory management schemes because memory references are relative to
a specific segment and the hardware will not permit the application to reference memory not defined
for that segment. It is possible to implement segmentation with or without paging.
Segmentation with paging has the following advantages:

1. Paging eliminates external fragmentation. As a result it provides efficient use of main
memory.
2. Segmentation, which is visible to the programmer, adds the strengths of paging: the
ability to handle growing data structures, support for sharing, modularity, and protection.
To combine segmentation and paging, a user's address space is broken into a number of
segments. Each of these segments in turn is broken into fixed-size pages. If the size of a segment is
less than a page in length, then the segment occupies just one page.
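The access protection described above can be sketched as a segment-table lookup with a bounds check; the segment bases and limits below are invented for illustration.

```python
# Segment-and-offset translation with a bounds check, showing the
# access protection described above: a reference outside a segment's
# limit is rejected. Segment table values are invented.

segment_table = {
    0: {"base": 1400, "limit": 1000},  # e.g. a code segment
    1: {"base": 6300, "limit": 400},   # e.g. a data segment
}

def translate_segment(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # The hardware would trap here: the reference falls outside
        # the memory defined for this segment.
        raise MemoryError(f"offset {offset} exceeds segment {segment} limit")
    return entry["base"] + offset

print(translate_segment(1, 53))  # 6353
# translate_segment(1, 500) would raise MemoryError: out of bounds
```

A segmentation-with-paging scheme would then feed the resulting address through a page table as in the previous sketch, instead of using a flat base.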

4.1.b. Process Management


The CPU has multiple processes running at a time. It always has certain programs scheduled in a
queue, waiting for their turn to execute. All these programs must be scheduled by the operating
system.
The operating system must allocate resources to processes, enable processes to exchange and
share information, enable synchronization among processes, and protect the resources of each
process from other processes. Synchronization means the coordination of events or processes
so that a system operates in unison.
Process creation involves four principal events:

System initialization or start up. When an OS is booted, typically several processes are
created.

A running process executes a process-creation system call. Most of the time, a running
process will issue system calls to create one or more new processes to help it do its job.
Creating new processes is useful when the work to be done can be easily formulated
as several related, but otherwise independent, interacting processes.

A user issues a request to create a new process. In interactive systems, users can
start a program by clicking or double-clicking icons or thumbnails, or by typing a
command at the command prompt.

Batch job initiation. Users can submit batch jobs to the system (often remotely). When
the OS decides it has the resources to run another job, a new process is created and the
next job from the input queue is run.

Several processes are created when an operating system boots. Some of them are foreground
processes while others are background processes. Foreground processes interact with users
and perform work for them. In a multiprocessing environment, the process that accepts input
from the keyboard or another input device at a particular point in time is sometimes called the
foreground process; any process that actively interacts with a user is a foreground
process. Background processes are processes which are not associated with a particular
user but instead perform some specific function. For example, email notification is
a background process. The system clock displayed at the bottom right corner of the
status bar in Windows is another example of a background process.
Some of the common reasons for process termination are:

User logs off

A service request to terminate is executed by process

Error and fault conditions

Normal completion

Time limit exceeded

Memory unavailable

I/O failure

Fatal error, etc.

Figure 4.7 below contains a great deal of information.

Let us consider a running process 'P' that issues an input-output request:

The process blocks.

Thereafter, at a certain point of time, a disk interrupt occurs and the driver
detects that P's request is satisfied.

P is unblocked, i.e. the state of process P is changed from blocked to ready.

Later, the operating system looks for a ready job to run and
picks the process/job P from the queue.

A preemptive scheduler in figure 4.7 below has the dotted line 'Preempt', whereas a
non-preemptive scheduler does not.

The number of processes changes for two arcs, namely create and terminate.

Suspend and resume are a part of medium-term scheduling:

It is done on a somewhat longer time scale.

It involves memory management as well.

It is also known as two-level scheduling.

There are 2 types of scheduling:


1. Preemptive scheduling:

In a computer, tasks are assigned priorities. At times it is necessary to run a certain task that has
high priority before another task (even if the task is in running state). Therefore, the running task
is interrupted for some time, put to either blocked or suspended state and resumed later, when
the priority task has finished its execution. This process is called preemptive scheduling.
2. Non preemptive scheduling:
In non-preemptive scheduling, a running task executes until it completes. It cannot be
interrupted: once a process enters the running state, it cannot be removed from the
scheduler until it finishes its service time.
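The difference between the two policies can be sketched in a few lines of Python. This is only a toy model for illustration, not a real operating system scheduler; the task tuples, the time unit and the rule "lower priority value = more urgent" are assumptions made here.

```python
import heapq

def preemptive_run(tasks):
    """Toy preemptive priority scheduler. At every time unit the most
    urgent ready task runs; a newly arrived, more urgent task preempts it.
    tasks: list of (arrival, priority, name, burst), lower priority value
    being more urgent. Returns which task ran in each time unit.
    (A non-preemptive scheduler would instead run each picked task to
    completion before looking at the ready queue again.)"""
    timeline, t, ready = [], 0, []
    pending = sorted(tasks)                      # ordered by arrival time
    remaining = {name: burst for _, _, name, burst in tasks}
    while pending or ready:
        while pending and pending[0][0] <= t:    # admit newly arrived tasks
            _, prio, name, _ = pending.pop(0)
            heapq.heappush(ready, (prio, name))
        if not ready:                            # idle until the next arrival
            t = pending[0][0]
            continue
        prio, name = ready[0]                    # run the most urgent task
        timeline.append(name)                    # ...for one time unit
        remaining[name] -= 1
        if remaining[name] == 0:
            heapq.heappop(ready)                 # task finished
        t += 1
    return timeline
```

For example, `preemptive_run([(0, 2, "low", 3), (1, 1, "high", 2)])` lets "low" start, then preempts it as soon as the more urgent "high" arrives at time 1, resuming "low" afterwards.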

Figure 4.7 above shows the various states of a process. Initially the process is created. Once
created, it goes to the ready state. Ready state means the process is ready to execute. It has
been loaded into the main memory and is waiting for execution on a CPU. There can be many
ready processes waiting for execution at a given point of time. A queue of processes which are
ready to execute gets created in memory. One of the processes from that queue is picked up for
execution and its state gets changed to running. A running process can be blocked. Blocked
state means a process is blocked on some event.
A process may be blocked for various reasons, such as when it has exhausted the CPU
time allocated to it or when it is waiting for an event to occur. A blocked process can
either move to the ready state or move to the suspended state. In systems that support
virtual memory, a process may be swapped out, that is, removed from main memory and
placed in virtual memory by the mid-term scheduler. This is called the suspended state of
a process. From here the process may be swapped back into the ready state; such a state
is called the ready-suspended state. Processes that are blocked may also be swapped out.
A state in which a process is both swapped out and blocked is called the blocked-suspended state.
Suspended processes can be sent back to ready state only once they are released. This cycle
continues till a process finishes its execution i.e. terminated. A process may be terminated from

the running state by completing its execution or can be killed explicitly. In either of these cases,
we say that the process is terminated.
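The legal state changes walked through above can be collected into a small transition table. This is only an illustrative sketch: the state names are taken from the text, and the transition labels are paraphrased from it.

```python
# Transitions of the process state model described above.
TRANSITIONS = {
    ("new", "ready"): "admitted after creation",
    ("ready", "running"): "picked from the ready queue",
    ("running", "ready"): "preempted",
    ("running", "blocked"): "waits on an event or I/O",
    ("blocked", "ready"): "event occurs / I/O completes",
    ("blocked", "blocked-suspended"): "swapped out",
    ("ready", "ready-suspended"): "swapped out",
    ("blocked-suspended", "ready-suspended"): "event occurs",
    ("ready-suspended", "ready"): "swapped back in (released)",
    ("running", "terminated"): "completes or is killed",
}

def move(state, target):
    """Validate a state change; raise if the model does not allow it."""
    if (state, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# The life cycle traced in the text: created, ready, running, blocked on
# I/O, unblocked, dispatched again, and finally terminated.
state = "new"
for nxt in ["ready", "running", "blocked", "ready", "running", "terminated"]:
    state = move(state, nxt)
```

Note that there is no arc from blocked or suspended straight to running: a process must always pass through the ready state first.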

4.1.c. File System Management


A computer contains numerous files. They need to be organized properly so that we can
keep track of them, and so that file retrieval is easy and fast. File system
management helps us achieve this.
The operating system uses a file system manager to carry out file system management. The
file system manager organizes and keeps track of all the files and directories on secondary storage media.

Figure 4.8 above shows a typical hierarchical file system structure. The operating system
performs the following tasks to provide efficient file management:

It identifies files by giving each a unique name.

It maintains a list to keep track of each file's exact location.

Provide fast and simple algorithms to write and read files in co-operation with device
manager.

Grant and deny access rights on files to programs and users.

Allocate and de-allocate files so that files can be processed in co-operation with process
manager.

Provide programs and users with simple commands for handling files.

On the storage medium, a file is saved in blocks or sectors. To access these files, the
file manager and the device manager work together. The device manager knows where to find
each sector on the disk, but only the file manager has a list telling in which sectors
each file is stored. This list is called the File Allocation Table (FAT). The FAT was used
under DOS for a long time, and variants of it were still used by Windows 95 and Windows 98.
Below are the different ways of allocating files:

Contiguous file allocation:

In this type of file allocation, a single set of blocks is allocated to a file at the time
of file creation. Each file is stored contiguously, one sector after another. The
advantage is that the File Allocation Table has a single entry for each file, indicating
the name, the start sector and the length. It is also easy to get at a single block,
because its address can be calculated directly. The disadvantage is that it can be
difficult to find a sufficiently large run of contiguous free blocks. Contiguous file
allocation is nowadays used only for tapes and recordable CDs.
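Why a single block is easy to reach under contiguous allocation can be seen in a short sketch. The file name and sector numbers below are invented for illustration; a real FAT stores this on disk, not in a Python dictionary.

```python
# One FAT entry per file: (start sector, length in sectors).
fat = {"report.txt": (100, 4)}   # occupies sectors 100, 101, 102, 103

def sector_of(fat, name, block_index):
    """Contiguous allocation: block k of the file is simply start + k,
    so random access needs no search at all."""
    start, length = fat[name]
    if not 0 <= block_index < length:
        raise IndexError("block outside file")
    return start + block_index
```

Allocating a new file, by contrast, would require scanning the free space for a run of consecutive sectors at least as long as the file, which is the scheme's weak point.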

Non-contiguous file allocation:

With this type of allocation, the blocks of a file can be distributed all over the storage
medium. The File Allocation Table (FAT) not only lists all files but also has an entry for
each sector the file occupies. Because all information is stored in the FAT, with no
assumption about the distribution of the file, this method of allocation is itself
sometimes known as FAT.
The advantage is that it is very easy to get at a single block, because each block has its
own entry in the FAT. It is also a very simple allocation method: no overhead is produced
and no search method for free blocks is needed. The disadvantage is that the FAT can grow
to an enormous size, which can slow down the system, and compaction is needed from time to
time. This type of file allocation is used for disk allocation in MS-DOS.
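The trade-off can be sketched as follows. Again the file names and sector numbers are made up; the point is that the table holds one entry per occupied sector, so lookups are direct but the table itself grows with the disk.

```python
# Non-contiguous allocation: the FAT keeps an ordered entry for every
# sector each file occupies, so blocks may lie anywhere on the medium.
fat = {
    "a.txt": [7, 2, 9],   # block 0 -> sector 7, block 1 -> sector 2, ...
    "b.txt": [4, 11],
}

def sector_of(fat, name, block_index):
    # Single-block access is a direct lookup -- the scheme's advantage.
    return fat[name][block_index]

def free_sectors(fat, disk_size):
    # Free-space search is trivial too: any sector not listed is free.
    used = {s for sectors in fat.values() for s in sectors}
    return [s for s in range(disk_size) if s not in used]
```

The cost is visible in the data structure itself: on a large disk the per-sector lists dominate, which is why the text notes the FAT can grow enormous.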

Chained allocation:

With chained file allocation, only the first block of a file gets an entry in the FAT.
Each sector stores data together with a pointer at its end that identifies the next
sector to be accessed (or indicates that it is the last one). The advantage again is that
the FAT has a single entry for each file, indicating the file name and the position of
the first sector, and there is no need to store the files contiguously.
The disadvantage of chained allocation is that it takes a long time to retrieve a
particular block, because its location is neither stored anywhere nor can it be
calculated. To access a particular sector, all preceding sectors have to be read in
order, one after another, each one revealing the location of the next. The Unix i-node
is a related example. i-nodes are data structures (Refer Glossary) of a file system used
to store all the important properties of each file: the owner's group ID and user ID,
the access permissions on the file, the size of the file, and more.
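The pointer-following cost can be made concrete with a toy disk, modelled here as a dictionary mapping sector number to (data, next sector); all values are invented for illustration.

```python
# Toy disk: sector -> (data, next_sector); next_sector None marks the end.
disk = {
    5: (b"AA", 9),
    9: (b"BB", 3),
    3: (b"CC", None),
}
fat = {"log.txt": 5}   # chained allocation: the FAT stores only the first sector

def read_block(disk, fat, name, block_index):
    """Reaching block k requires following k pointers one by one --
    the linear walk that makes random access slow under this scheme."""
    sector = fat[name]
    for _ in range(block_index):
        _, sector = disk[sector]
        if sector is None:
            raise IndexError("block outside file")
    data, _ = disk[sector]
    return data
```

Sequential reads are cheap, since each sector already names its successor; it is only jumping into the middle of the file that forces the walk from the start.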

Indexed allocation:

With indexed file allocation, again only the first block of a file gets an entry in the
FAT. No data is stored in the first sector; it holds only pointers to where the file is
on the storage medium. That is why the first block is known as the index block: it just
contains the pointers, or index, to the file's location on the storage medium. Here too
the FAT contains only a single entry for each file, indicating the file name and the
position of the first sector. It is easy to retrieve a single block, because the
information about its location is stored in the first block.

The disadvantage of indexed allocation is that for each file an additional sector is
needed to keep track of the file's location. Even a very small file always occupies at
least two blocks, whereas the data could easily have fit in one. This wastes some
secondary storage space. Indexed allocation is implemented in all UNIX systems. It is
reliable as well as fast, and nowadays the wasted storage space does not matter much.
The main concern of file system management is to provide a strategy that keeps the FAT
from growing too large and that makes it possible to retrieve a specific sector of a
file without wasting much storage space.
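Indexed allocation can be sketched in the same toy-disk style as before (sector numbers and file name invented): the extra hop through the index block is what buys direct access to any data block.

```python
# Indexed allocation: the FAT points at an index block; the index block
# holds no data, only the sector numbers of the file's data blocks.
disk = {
    6: [8, 2, 10],                  # index block for "notes.txt"
    8: b"AA", 2: b"BB", 10: b"CC",  # the data blocks themselves
}
fat = {"notes.txt": 6}              # one FAT entry: name -> index block

def read_block(disk, fat, name, block_index):
    index_block = disk[fat[name]]          # hop 1: read the index block
    return disk[index_block[block_index]]  # hop 2: read the data block directly
```

The two-block minimum mentioned above is visible here: even a one-sector file needs sector 6 for the index plus a sector for its data.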

4.1.d. Device Management


The operating system also deals with device management, especially of input and output
devices. In a multi-user system there are shared devices such as printers and scanners.
Managing these devices by sending them appropriate commands is a major task of the
operating system.
A software routine that knows how to deal with each device is called a driver, and the
operating system requires drivers for the peripherals attached to the computer. When a
new peripheral is added, its device driver is installed into the operating system.
The operating system also deals with the access time of these devices, helping to make
device access as fast and efficient as possible.
A major concern of the operating system in device management is to prevent deadlock situations.
Deadlock

The diagram above depicts a typical deadlock situation that serially used devices can
cause. Deadlock is a situation where each process in a set is waiting for an event that
only another process in the set can cause.
For example, in the diagram above, process Task 1 requests a mutex or lock on an object
that is already held by process Task 2. In turn, process Task 2 needs a resource to
complete its work that is already held by process Task 1. Neither process releases its
resource or mutex; each waits for the other to release one so that it can complete its
pending task. This causes an indefinite wait for both processes, a situation known as
deadlock.
Mutex is short for Mutual Exclusion Object. A mutex is a program object that allows
multiple program threads to share the same resource, but not simultaneously. File access
is a typical use of a mutex. A mutex with a unique name is created when the program
starts. A thread that needs the resource must first lock the mutex; when the data is no
longer needed, the thread unlocks it.
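The lock/unlock discipline just described looks like this in Python's threading module. This is a minimal sketch: the shared "resource" is just a counter, and the thread counts are arbitrary.

```python
import threading

counter = 0
mutex = threading.Lock()          # the mutual-exclusion object described above

def deposit(times):
    global counter
    for _ in range(times):
        with mutex:               # lock before touching the shared resource...
            counter += 1          # ...so only one thread updates it at a time
                                  # (the `with` block unlocks on exit)

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the mutex held around each update, counter is always exactly 40000;
# without it, increments from different threads could interleave and be lost.
```

Note that `with mutex:` guarantees the unlock even if the critical section raises an exception, which is why it is preferred over calling `acquire()` and `release()` by hand.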
The OS helps in dealing with deadlock situations up to a certain extent. Below are some
of the strategies for dealing with deadlock:

The Ostrich algorithm, i.e. ignoring the problem altogether. The Ostrich algorithm is a
strategy of ignoring potential problems on the basis that they may be exceedingly rare.
It assumes that ignoring the problem is more cost-effective than preventing it, since
deadlock may occur very infrequently. This approach is reasonable when the cost of
detection or prevention is not worth the time and effort spent.

Detection and Recovery.

Avoiding deadlock by careful resource allocation.

Prevention of deadlock by structurally negating one of the four necessary conditions.
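The last strategy, structurally negating a necessary condition, can be illustrated by negating circular wait: give every lock a fixed global rank and always acquire in rank order. The sketch below is a hypothetical illustration of that rule applied to the Task 1 / Task 2 scenario above; the ranking table is an assumption of this example, not a threading-library feature.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
LOCK_RANK = {id(lock_a): 0, id(lock_b): 1}   # one fixed global ordering

def acquire_in_order(*locks):
    """Take locks in global rank order, whatever order they were asked
    for in. A cycle of waiting threads can then never form, so the
    circular-wait condition for deadlock is structurally negated."""
    ordered = sorted(locks, key=lambda lk: LOCK_RANK[id(lk)])
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(locks):
    for lk in reversed(locks):
        lk.release()

done = []

def task(name, first, second):
    held = acquire_in_order(first, second)
    done.append(name)             # critical section: both resources held
    release_all(held)

# The two tasks request the locks in opposite orders -- exactly the
# pattern that deadlocks in the diagram -- yet both always complete.
t1 = threading.Thread(target=task, args=("Task 1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("Task 2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
```

Had each task simply acquired `first` then `second` directly, Task 1 holding A and Task 2 holding B could each wait forever for the other's lock.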

4.2. Resource Sharing Management


The primary tasks of the operating system, as discussed above, are to keep track of
resources such as memory, CPU utilization, storage devices and input-output devices, to
grant resource requests, and to mediate conflicting requests from different programs.

4.2.1. Multiplexing
Multiplexing is a method by which multiple analog signals or digital data streams are
combined into one signal over a shared medium. Multiplexing is used to share an expensive
resource and thus helps reduce cost.
A communication channel is used for transmission of the multiplexed signal; this channel
may be a physical transmission medium. Multiplexing works on the principle of dividing
the capacity of the high-level communication channel into several low-level logical
channels, one for each message signal or data stream to be transferred. The reverse
process, known as demultiplexing, extracts the original channels or signals on the
receiver side.
A device that performs the task of multiplexing is called a multiplexer (MUX), and a device that
performs the task of demultiplexing is known as a demultiplexer (DEMUX).
Resource management basically includes multiplexing the resources in two ways:
Time Multiplexing

When a particular resource is time multiplexed, different programs or users take turns
using that resource, one after another. Turns are decided according to a predefined
operating system algorithm. A printer is one of the best examples of time multiplexing:
on a network printer, different users get their turn one after another, based on time.
Space Multiplexing
When a particular resource is space multiplexed, the resource is divided among its users
in space rather than in time; how long each user holds their share no longer matters.
That is, each user gets a part of the resource. Hard disks and main memory are examples
of space multiplexing. These resources are usually divided equally among all the users.
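The two sharing styles can be contrasted in a few lines. This is a schematic sketch only; the user names and resource sizes are invented for illustration.

```python
from itertools import cycle, islice

def time_multiplex(users, slots):
    """Time multiplexing: one resource (say a printer), and the users
    take turns with it, one time slot after another."""
    return list(islice(cycle(users), slots))

def space_multiplex(resource_size, users):
    """Space multiplexing: a resource of `resource_size` units (say
    disk or memory) split equally; how long each share is held does
    not matter."""
    share = resource_size // len(users)
    return {user: share for user in users}
```

So `time_multiplex(["A", "B"], 5)` hands the whole printer to A and B alternately, while `space_multiplex(100, ["A", "B", "C", "D"])` gives each user a 25-unit slice to keep.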
To summarize, the operating system is one of the major programs without which it would be
almost impossible for us to use the computer the way we do today. Without an operating
system, the computer would be nothing but a machine executing raw binary and
machine-language data.

1.1. About TATA Group


TATA Group

India's largest conglomerate

100 operating companies in 7 business sectors

Passionate commitment to developing the communities in which we operate

Tata Group History

The foundation of the TATA group was laid by Jamsetji Nusserwanji Tata in 1868, exactly
100 years before TCS was founded.

In 1938, JRD Tata was appointed chairman and led the TATA Group for the next 53
years. During his time, the TATA Group regularly expanded into new spheres of business.

The more prominent of these ventures were Tata Chemicals, Tata Motors, Tata
Industries, Voltas, Tata Tea, Tata Consultancy Services and Titan Industries.

In 1991, Ratan Tata took over as chairman of the TATA Group. Under his
stewardship, Tata Tea acquired Tetley, Tata Motors acquired Jaguar Land Rover and Tata
Steel acquired Corus, acquisitions which turned the TATA Group from a largely
India-centric company into a global business. Ratan Tata retired from all executive
responsibility in the TATA Group in December 2012 and was succeeded by Cyrus Mistry, the
present Chairman.

To learn more about the Tata Group's 150+ years of history, please click on Tata
Group History.

1.2. About TCS


Company Overview
Tata Consultancy Services is an IT services, consulting and business solutions organization that
delivers real results to global business, ensuring a level of certainty no other firm can match.

Figure 1.3: Company Overview


TCS
offers a consulting-led, integrated portfolio of IT, BPO, infrastructure, engineering and assurance
services. This is delivered through its unique Global Network Delivery Model, recognized as the
benchmark of excellence in software development. A part of the Tata Group, India's largest
industrial conglomerate, TCS has over 3,35,620 of the world's best trained consultants in 55
countries. The Company generated consolidated revenues of US $ 15.45 billion for year ended
31 March, 2015 and is listed on the National Stock Exchange and Bombay Stock Exchange in
India.
Experience Certainty
Tata Consultancy Services helps customer experience certainty by reliably delivering business
results, providing leadership to drive transformation and partnering for success.

For further details, click here


The TCS Advantage

Customer-centric Engagement Model

Your dedicated team will have domain and technology capabilities resulting in specialized
services / solutions

Our engagement models are uniquely flexible, enabling design that fits the size and scale
of your operations

You have access to partnership gain-share and risk-share models focused on your
success

Your inputs and our expertise are merged through our Centers of Excellence (COEs) to
deliver leading solutions

Global Network Delivery Model


Our unique delivery model offers multiple levers of time zone, language, skills and local business
knowledge to deliver high quality solutions across the globe, 24x7

TCS Delivery Centers

Full Services Portfolio

You benefit from TCS' combining traditional IT and remote infrastructure services with
knowledge-based services

You derive single-source business value

You realize accelerated agility and TCO reduction through our services integration model

You gain more predictable IT spends from utility-based operating models

TCS Innovation Labs

Comprehensive 360° interconnected research ecosystem with 19 labs worldwide

Collaborate with a wide network of partners, institutions and venture capitalists on


forward-looking solutions

Your business innovation is fueled by our dedicated labs on advanced and emerging
technology trends and scientific research

Co-Innovation Network (COIN)™

COIN™ is a rich and diverse network that drives innovation in an open community

Providing extended capabilities in areas such as:

Image Processing

Biometrics

Enterprise Security

RFID Enabled Asset Tracking

Analytics

Dynamic Pricing

Customer Interaction Optimization

Smart Card Management

SaaS & others

For further details, click here.


Corporate Facts
Click here to know about TCS corporate facts.
Heritage and values

Established in 1968

Largest IT services firm in Asia

World's first organization to achieve an enterprise-wide Maturity Level 5 on both
CMMI® and P-CMM®, using SCAMPI℠, the most rigorous assessment methodology.

TCS Mission

To help customers achieve their business objectives by providing innovative, best-in-class consulting, IT solutions and services.

To make it a joy for all stakeholders to work with us.

TCS Values

Leading change

Integrity

Respect for the individual

Excellence

Learning and sharing.

TCS Executive Profile

Chairman: Cyrus Mistry

Chief Executive Officer (CEO) and Managing Director (MD): N. Chandrasekaran

Chief Financial Officer (CFO): Rajesh Gopinathan

Head Global Human Resource: Ajoyendra Mukherjee

Chief Technology Officer (CTO): K Ananth Krishnan

TCS Process Excellence

ISO 27001: Helps the organization strengthen its information security policies and processes

ISO 14001: Helps the organization strengthen its environment policies and processes

OHSAS 18001: Helps the organization strengthen its health and safety policies and processes

CMM Level 5: Helps the organization manage and optimize its processes

PCMM: Helps the organization continuously improve the management and development of its human resources

ISO 9000 family: Helps the organization strengthen its quality management system

Services
TCS helps clients optimise business processes for maximum efficiency and galvanise their IT
infrastructure to be both resilient and robust. TCS offers the following solutions:

Assurance Services

BI & Performance Management

Business Process Outsourcing

Connected Marketing Solutions

Consulting

Engineering & Industrial Services

Enterprise Solutions

iON Small and Medium Business

IT Infrastructure Services

IT Services

Mobility Solutions and Services

Platform BPO Solutions

Software

TCS BaNCS

TCS Technology Products

For further details, click here.


Industries
TCS has the depth and breadth of experience and expertise that businesses need to achieve
business goals and succeed amidst fierce competition. TCS helps clients from various industries
solve complex problems, mitigate risks, and become operationally excellent. Some of the
industries it serves are:

Banking and Financial Services

Energy, Resources and Utilities

Government

Healthcare

High Tech

Insurance

Life Sciences

Manufacturing

Media and Information Services

Retail and Consumer Products

Telecom

Travel, Transportation & Hospitality

For further details, click here.


Case Studies

Click here to know about case studies.


Awards & Recognition (2012-13):

In 2012, TCS bagged many awards across different sectors. Some of the achievements are
listed below, but the list is endless.
I. BT 500 ranks TCS as the most valuable company of 2012
TCS was ranked as the most valuable company of 2012 in the recently released BT 500 list.
This is the first time that our company has emerged on top of this rankings list,
released each year by Business Today magazine. The issue dated November 11, 2012 credits
CEO N. Chandra for much of this success. Click here to read CEO Chandra's interview.
II. N. Chandra wins Best CEO of the Year award at Forbes India Leadership Awards
CEO & MD, N. Chandra won the Best CEO of the Year award at the Forbes India Leadership
Awards 2012. He was presented this award at a function held in Mumbai on 28 September 2012.
Our CEO won the honour for being able to balance aggression needed to achieve stretched
goals, with conservatism and for building a solid team for next generation of managers.
III. CEO & MD, N Chandra, won CNBC's Asian Business Leader of the Year
N. Chandra won the Asian Business Leader of the Year award on 16 November 2012. Our CEO
received this recognition during CNBC's 11th edition of the Asia Business Leaders Awards
(ABLA) function held at Bangkok, Thailand. Click here to know more.
IV. CEO & MD, N Chandra, won Pathfinder CEO of 2012 by National HRD Network (NHRDN)
TCS CEO & MD, N. Chandra was presented with the Pathfinder CEO award by National HRD
Network (NHRDN) during its 16th National Conference held from 29 November to 1 December
2012 at Hyderabad in India. NHRDN recognises individuals and organisations who have made
noteworthy contributions in the area of Human Resource Development in the Corporate sector,
Academic sphere and broader Business and Social arena.
V. TCS received the Forbes Asia 'Fabulous 50' Award
TCS was presented the Forbes Asia 'Fabulous 50' Award at an award ceremony held in Macau,
China on 4 December 2012. This came about after Forbes Asia, a leading pan-Asia business
magazine, listed TCS in its prestigious and influential annual 'Asia's Fab 50' list of
the most compelling companies in Asia earlier this year.
VI.TCS Ex-CFO, S Mahalingam, won CFO of the Year award at CFO Innovation Asia Awards on
28 November 2012
TCS Finance swept the two main categories at the CFO Innovation Asia Awards on 28
November 2012 in Singapore, with TCS CFO S Mahalingam winning the marquee 'CFO of the
Year' award, and TCS Finance team recognised as 'Finance Team of the Year'. The event was
attended by over 120 Asia-based CFOs and top-tier financial executives and featured over
15 award categories contested by the biggest names in the finance arena, including banks (HSBC, Citi),

accountancy firms (KPMG, PwC, Deloitte) and service providers (Accenture, Infosys). Click
here to know more.
News and Events


Click here to know about latest TCS information and events
References
For further reading, please visit the link given below:
Tata group profile

3.2. Effective Listening Skills

God gave us two ears and one mouth so that we can hear twice as much as we say!

Effective communication plays an important role in our day to day life. The success of our
communication is dependent on our ability to effectively convey our thoughts, ideas, and
emotions. In this process, believe it or not, active listening plays an important role in improving
our relationships with those around us and reducing arguments and misunderstandings.
Let us think of situations in real life where you have felt that people were not
listening to you. Did that not create emotions such as confusion, worry and frustration
in you? Don't you think that every time you choose not to listen, you create a similar
situation for the speaker?

In our day to day life we come across many situations where ineffective listening leads to
misunderstanding between individuals. This can happen because we are busy trying to frame our
response to the other person or, our own subconscious thought process is interfering with our
listening process.

For example, while asking for directions in a new locality, you would listen very keenly
because you do not want to lose your way in a strange place.
However, there are also situations in which we may not listen actively. A case in point:
while dancing to your favorite song, you are more focused on the tune and rhythm of the
music than on the actual lyrics.
With this we can come to a very obvious conclusion:
"Hearing is a physical ability while listening is a skill!"

We listen more when we:

Want to obtain information

Need to understand

Want to learn

Choose to listen

Barriers to effective listening


Reflecting on the activity that you did earlier, it is clear that various factors affect
listening. These can be internal or external. Internal factors include aspects such as
the mindset of the listener, assumptions, prejudices/bias, physical wellbeing, etc.
External factors include the noise level in the surroundings, the environment, physical
barriers, etc.

How to listen well?


Now that we have understood the barriers to listening and why we should still listen,
let us get to know how to listen.
Create genuine interest.

Get involved in what is being said and connect with the speaker.

Maintain eye contact and lean towards the speaker. It shows your interest and attention
towards what the speaker is saying.

Interact with the speaker.

Pay attention to each point that is being delivered. Good listeners tune out
distractions and focus on the speaker and the message.

Occasionally nod your head to indicate that you are listening.

Maintain eye contact with the speaker.

Lean forward in your chair. You could also turn in your chair to focus on the speaker and
avoid any disturbances.

Consider it a learning opportunity.

Every interaction with any speaker is an opportunity for you to learn. Creative people are
always on the lookout for new information.

Stay free from prejudices and assumptions.

It is important to keep an open mind and listen without filters. Filters tend to obscure
messages. Prejudices and assumptions influence our willingness and ability to hear.

Be objective in your approach and look beyond your barriers of listening. Wait till you
have listened to the whole message.

Participate in the communication process.

Communication is a two way process which needs the active participation of both the
speaker and the listener. A good listener gets the best out of the speaker. Get beyond the

manner of delivery to the underlying message. Do not judge the message by the style of
delivery.
Carry a good posture.

Posture conveys a great deal about the attitude of the listener. Avoid restless, distracted
movements. Sit in a comfortable position.

Empathize!

Try to look at things from the speaker's perspective by keeping an open mind to what is
being said.

Hold back any judgments, jumping to conclusions, the urge to speak, or the "I know all
this" attitude. You will be amazed at how well you understand the speaker once you stop
criticizing!

Be receptive to non-verbal cues.

Do not limit your listening to verbal cues alone but, listen to non-verbal cues as well. This
would include being sensitive to the non-verbal cues such as body language, gesture,
expressions, tone of voice etc. This would help you to interpret the message more
effectively.

Benefits of good listening skills


Having spoken this long about listening, let us understand what the benefits of
listening are, or in other words, why we should listen.
Listening helps to:

Ensure proper understanding - the listener is attuned to both the verbal and non-verbal
cues of the speaker.

Reduce misunderstanding - when the listener takes a conscious decision to avoid


distraction, prejudices and bias, there is more transparency and effectiveness in the
message that is conveyed.

Improve relationship - an attentive listener is able to create a good rapport with the
speaker. This can be done by having a keen interest in what is being said and mirroring a
positive body language.

Conclusion

Developing good listening skills takes time and effort. We have to make a conscious effort to
overcome our barriers and filters while listening. While listening, we need to listen with the
intention to understand, reflect on both the message and the non verbal cues and paraphrase to
ensure understanding.
Did you know that we actually spend a lot of time listening? Out of the total time we
spend communicating, statistics say we spend 9% of the time writing, 16% reading, 30%
speaking and 45% listening! That's all the more reason why we need to pay attention to
and work on our listening skills. So that's all on listening for now. I hope whatever we
have discussed has been useful and will improve your listening skills. Thank you!
