TCS Aspire
Assembler
Introduction
System software refers to the files and programs that make up the computer's operating system.
System files consist of libraries of functions, system services, drivers for printers, other hardware,
system preferences and various configuration files.
System Software consists of the following programs:
Assemblers
Compilers
System Utilities
Debuggers
The system software is installed on the computer when the operating system is installed. The
software can be updated by running programs such as "Windows Update" for Windows or
"Software Update" for Mac OS X. Application programs are used by end users, whereas system
software is not meant to be run by the end user. For example, while Web browsers like Internet
Explorer are used every day, most users never have to run an assembler program (unless the
user is a computer programmer).
System software runs at the most basic level of the computer and is hence called "low-level"
software. The system software generates the user interface and allows the operating system to
interact with the hardware.
System software is the interface between the computer hardware and the application software.
Assembler
An assembler is a program that converts basic computer instructions into a pattern of bits. These
bits are used by the computer's processor to perform its basic operations. Some people call these
instructions assembler language, while others use the term assembly language. This is how it
works:
Most computers come with a specified set of very basic instructions that correspond to the basic
machine operations the computer can perform. For example, a "Load" instruction moves a string
of bits from a location in the processor's memory to a special holding place called a register. In
computer architecture, a processor register is a small amount of storage available on the CPU
whose contents can be accessed more quickly than other storage. Most modern computer
architectures operate on the principle of shifting data from main memory into registers, operating
on the data, and then moving the result back into main memory. Assuming the processor has at
least eight numbered registers, the following instruction would move the value (a string of bits of a
certain length) at memory location 2000 into the holding place called register 8:
L 8,2000
These assembler instructions, which are also known as the source code or source
program, are then given to the assembler program when it is started.
The assembler program takes each statement in the source program and generates a
corresponding bit stream or pattern (a series of 0s and 1s of a given length).
The output of the assembler program is called the object code or object program relative
to the input source program. The object program consisting of a sequence of 0's and 1's
is also known as machine code.
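The translation just described can be sketched as a toy assembler. The opcode numbers and the 28-bit layout below are invented purely for illustration; real instruction encodings are machine-specific.

```python
# A toy assembler: maps each mnemonic to an invented numeric opcode and
# packs the operands into a fixed-width bit pattern. The encoding here
# (8-bit opcode, 4-bit register, 16-bit address) is purely illustrative.
OPCODES = {"L": 0x01, "ST": 0x02, "A": 0x03}  # load, store, add (hypothetical)

def assemble(line):
    """Translate one statement like 'L 8,2000' into a 28-bit pattern."""
    mnemonic, operands = line.split()
    reg, addr = (int(x) for x in operands.split(","))
    word = (OPCODES[mnemonic] << 20) | (reg << 16) | addr
    return format(word, "028b")  # the bit string the processor would see

print(assemble("L 8,2000"))
```

Feeding the 'L 8,2000' statement from the text through this sketch produces one 28-bit machine word, which is exactly the source-statement-to-bit-pattern step the assembler performs for every line of the source program.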
An assembly language program consists of mnemonic codes, which are easy to remember as
they resemble words in the English language. There is a mnemonic code for each of the different
machine code instructions that the machine understands. A mnemonic is an abbreviation of the
actual instruction: a programming code that is easy to remember because it resembles the
original word, for example, ADD for addition and SUB for subtraction. An example of an assembly
language program:
MOV EAX,2 ; set eax register to 2 (eax = 2)
SHL EAX,4 ; shift the value in eax left by 4 bits
MOV ECX,16 ; set ecx register to 16
SUB EAX,ECX ; subtracts ecx from eax
An assembler converts this set of instructions into a series of 1s and 0s, also known as an
executable program, that the machine can understand.
Assembly language can be used to communicate with the machine at the hardware
level and is hence often used for writing device drivers.
Another advantage of assembly language is the size of the resulting programs. Since
conversion from a higher-level language by a compiler is not required, the resulting
programs can be exceedingly small.
Assembly language statements are written one per line. An assembly language program consists
of a sequence of such statements, each containing a mnemonic. A line of an assembly language
program can contain the following four fields:
i. Label
ii. Opcode
iii. Operand
iv. Comments
The label field is optional. A label is an identifier or a text. Labels are used in programs to reduce
reliance upon programmers remembering where data or code is located. A label is used to refer
to the following:
Memory location
A data value
The maximum length of a label differs for different assemblers. Some assemblers accept labels
up to 32 characters long, others only four characters. When a label is declared, it is suffixed by a
colon and begins with a valid character (A..Z). Consider the following example.
LOOP: LDAA #24H
Here, the label LOOP is equal to the address of the instruction LDAA #24H. The label can be
used as a reference in a program, as shown below
JMP LOOP
When the above instruction is executed, the processor will execute the instruction associated
with the label LOOP, i.e. LDAA #24H. When a label is referenced later in a program, it is
referenced without the colon suffix.
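Resolving a label to an address is typically done in two passes, which can be sketched as follows. The base address 0x100 and the one-word-per-statement layout are assumptions for illustration.

```python
# A sketch of two-pass label resolution. Pass 1 records the address of
# each labelled statement; pass 2 replaces label references with those
# addresses. Each statement is assumed to occupy one address unit.
def resolve_labels(lines, base=0x100):
    symtab, code = {}, []
    for line in lines:                      # pass 1: collect label addresses
        label, sep, rest = line.partition(":")
        if sep:                             # a label prefixes this statement
            symtab[label.strip()] = base + len(code)
            line = rest.strip()
        if line:
            code.append(line)
    resolved = []
    for instr in code:                      # pass 2: substitute addresses
        op, _, arg = instr.partition(" ")
        resolved.append(f"{op} {symtab.get(arg, arg)}".strip())
    return resolved

print(resolve_labels(["LOOP: LDAA #24H", "JMP LOOP"]))
```

Here 'JMP LOOP' becomes a jump to the numeric address recorded for LOOP in pass 1, which is why inserting or re-arranging statements only requires re-assembly rather than reworking addresses by hand.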
An advantage of using labels is that inserting or re-arranging code statements does not require
reworking the actual machine instructions; it only requires a simple re-assembly. In hand-coding,
such changes can take hours to perform.
The opcode field consists of a mnemonic. Opcode is the operation code, i.e., a machine code
instruction. An opcode may also carry additional information in the form of operands. Operands
are separated from the opcode by a space.
Operands consist of additional information or data that the opcode requires. Operands are used
to specify:
Constants or labels
Immediate data
An address
Examples of operands:
LDAA 0100H ; two byte operand
LDAA LOOP ; label operand
LDAA #1 ; immediate operand (i.e. constant value as operand)
The comment field is optional, and is used by the programmer to explain how the coded program
works. The comments are prefixed by a semi-colon. Comments are ignored by the assembler
when the instructions are generated from the source file.
Besides machine code, an object file contains a symbol table with entries such as:
Undefined symbols, which refer to other modules where the symbols are defined.
Local symbols, which are used internally within the object file to facilitate relocation.
For most compilers, each object file is the result of compiling one input source code file. If a
program comprises multiple object files, then the linker combines these files into a unified
executable program by resolving the symbols as it goes along.
Linkers can take objects from a collection called a runtime library. A runtime library is a collection
of object files containing machine code for the external functions used by a program. The linker
copies this machine code into the final executable output. Some linkers do not include the entire
library in the output; they include only the symbols that are referenced from other object files or
libraries. Libraries exist for diverse purposes, and the system libraries are usually linked in by
default.
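Symbol resolution across object files can be sketched in a few lines. The file names, symbol names, and the dictionary layout below are invented for illustration and do not reflect any real object-file format.

```python
# A sketch of symbol resolution at link time: each object file exports the
# symbols it defines and imports the ones it leaves undefined; the linker
# matches every import against some other file's export.
objects = {
    "main.o": {"defines": ["main"], "undefined": ["helper"]},
    "util.o": {"defines": ["helper"], "undefined": []},
}

def link(objs):
    exported = {s for o in objs.values() for s in o["defines"]}
    unresolved = {s for o in objs.values()
                  for s in o["undefined"] if s not in exported}
    if unresolved:
        raise RuntimeError(f"undefined symbols: {sorted(unresolved)}")
    return sorted(exported)            # symbols bound into the executable

print(link(objects))
```

If 'util.o' were missing, the linker sketch would fail with an undefined-symbol error, mirroring the familiar "undefined reference" message real linkers print when an import has no matching export.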
The linker also arranges the objects in a program's address space. This involves relocating code
that assumes a specific base address to another base address. Since a compiler seldom knows
where an object will reside, it assumes a fixed base location (for example, zero). The relocation
of machine code may involve re-targeting of loads, stores and absolute jumps.
The executable output by the linker may need another relocation pass when it is finally loaded
into memory (just before the execution). This relocation pass is usually omitted on hardware
offering virtual memory in which every program is put into its own address space and so there is
no conflict even if all programs load at the same base address. This relocation pass may also be
omitted if the executable is a position independent executable.
3.4. Compilers
A compiler is a special program that processes statements written in a particular programming
language and turns them into machine language or code that a computer's processor uses. A
programmer writes language statements in a language such as Pascal or C one line at a time
using an editor. The file created contains the source statements or source code. The programmer
then runs the appropriate compiler for the language, specifying the name of the file that contains
the source statements.
When executing (running), the compiler first parses (or analyzes) the language statements
syntactically one after the other, then builds the output code in one or more successive stages or
"passes", making sure that statements that refer to other statements are referenced correctly in
the final code. The output of the compilation is called object code or sometimes an object
module. The object code is machine code that the processor can execute, one instruction at a
time.
The Basic Structure of a Compiler
In the five stages of a compiler, the high level language is translated to a low level language
which is generally closer to that of the target computer. Each stage of the compiler fulfills a single
task and has one or more classic techniques for implementation. The following are the five
stages of a compiler:
Lexical Analyzer: Analyzes the source code, removes "white space" and comments,
formats it for easy access by creating tokens, then tags language elements with type
information and begins to fill the information in the SYMBOL TABLE. The Symbol Table is
a data structure that contains information about symbols and groups of symbols in the
program being translated.
Syntactic Analyzer: Analyzes the tokenized code for structure, groups symbols into
syntactic groups, tags groups with type information.
Semantic Analyzer: Analyzes the parsed code for meaning, fills in assumed or any
missing information and tags the groups with the meaning.
Code Generator: Linearizes the qualified code and produces the object code.
Optimizer: Checks the object code to determine whether there are more efficient means
of execution.
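The first of these stages can be sketched with a small tokenizer. The token set, the comment syntax, and the sample statement below are invented, not those of any particular compiler.

```python
import re

# A minimal sketch of the lexical-analysis stage: strip comments and
# white space, then emit (type, value) tokens tagged with type
# information, as a real lexer would before filling the symbol table.
TOKEN_SPEC = [
    ("SKIP",   r"\s+|//[^\n]*"),   # white space and comments are discarded
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=;]"),
]
PATTERN = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(source):
    tokens = []
    for m in PATTERN.finditer(source):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 42; // answer"))
```

The resulting (type, value) pairs are what the syntactic analyzer would then group into larger structures in the next stage.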
3.5. Interpreters
An interpreter is a program that executes instructions written in a high-level language. There are
two methods to run programs written in a high-level language: the more common method is to
compile the program; the other is to pass the program through an interpreter.
An interpreter translates high-level instructions into an intermediate form and then executes it. In
contrast, a compiler translates high-level instructions directly into machine language. Compiled
programs generally run faster than interpreted programs. One advantage of using an interpreter
is that it does not need to go through the compilation stage during which machine instructions are
generated; compilation can be time-consuming if the program is very long. The interpreter, on the
other hand, can begin executing high-level programs immediately. Hence interpreters are
sometimes used during the development of a program, when the programmer wants to add small
sections at a time and test them quickly. In addition, interpreters are often used in education
because they allow students to program interactively.
Interpreter vs Compiler
The primary difference between a compiler and interpreter is the way in which a program
is executed. The compiler converts the source code to machine code and then saves it
as an object code before creating an executable file for the same. The compiled program
is executed directly using the machine code or the object code. But an interpreter does
not convert the source code to an object code before execution.
An interpreter executes the source code, line by line and conversion to native code is
performed line by line while execution is going on (at runtime). Hence the run time
required for an interpreted program will be high compared to a compiled program.
Even though the run time required for an interpreted program is higher, execution using
an interpreter has its own advantages. For example, interpreted programs can modify
themselves at runtime by adding or changing functions. A compiled program has to be
recompiled fully even for small modifications done to the program, but in the case of
an interpreter there is no such problem: only the modified section needs to be re-interpreted
(refer Figure 3.6).
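Line-by-line execution can be illustrated with a minimal interpreter for an invented three-statement toy language (SET/ADD/PRINT). Each statement is translated and executed immediately, and no object code is ever produced.

```python
# A minimal sketch of line-by-line interpretation. The toy language is
# invented for illustration: SET assigns a constant to a variable,
# ADD adds one variable into another, PRINT records a variable's value.
def interpret(program):
    env, output = {}, []
    for line in program.splitlines():
        op, *args = line.split()
        if op == "SET":                      # SET x 5
            env[args[0]] = int(args[1])
        elif op == "ADD":                    # ADD x y  ->  x = x + y
            env[args[0]] += env[args[1]]
        elif op == "PRINT":
            output.append(env[args[0]])
    return output

print(interpret("SET x 2\nSET y 3\nADD x y\nPRINT x"))
```

Changing one line of the toy program only changes what is executed the next time that line is reached, which is the interpreter advantage the paragraph above describes.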
Operating system also does the task of managing computer hardware. Operating system
manages hardware by creating a uniform set of functions or operations that can be performed on
various classes of devices (for instance read x bytes at address y from hard drive of any type,
where x is the number of bytes read). These general functions rely on a layer of drivers that
provide specific means to carry out operations on specific hardware. Drivers usually are provided
by the manufacturers, but the OS must also provide a way to load and use drivers. OS must
detect the device and select an appropriate driver if several of them are available.
In the figure 4.2 below, OS interface is a set of commands or actions that an operating system
can perform and a way for a program or person to activate them.
Operating system is the core software component of a computer. It performs many functions and
acts as an interface between computer and the outside world. A computer consists of several
components including monitor, keyboard, mouse and other parts. Operating system provides an
interface to these parts through 'drivers'. This is why, when we install a new printer or another
piece of hardware, the system sometimes asks us to install more software, called a driver. A brief
on device drivers is given in section 4.2.4, Device Management.
The operating system basically manages all internal components of a computer. It was developed
to make better use of the computer system.
i. Fragmentation
Fragmentation occurs when there are many free small blocks in memory that are too small to
satisfy any request. In computer storage, fragmentation is a phenomenon in which storage space
is used inefficiently, resulting in reduced capacity as well as performance. Fragmentation also
leads to wastage of storage space. The term also refers to the wasted space itself.
There are three different types of fragmentation:
External fragmentation
Internal fragmentation
Data fragmentation
Fixed Partitioning
Main memory is divided into a number of static partitions when a computer system starts. A
process may be loaded into a partition of equal or greater size. The advantage of this type of
partitioning is that it is simple to implement and involves little operating system overhead. The
disadvantages are inefficient use of memory due to internal fragmentation, and a fixed maximum
number of active processes.
Dynamic Partitioning
Such partitions are created dynamically, so that each process is loaded into a partition of exactly
the same size as that process. The advantage of this type of partitioning is that there is no internal
fragmentation, and main memory is used more efficiently. The disadvantage is inefficient use of
the processor due to the need for compaction to counter external fragmentation.
For further details on types of partitioning please refer below link:
http://www.csbdu.in/econtent/Operating System/unit3.pdf
Paged Memory Management
This type of memory management divides computer's primary memory into fixed-size units
known as page frames. The program's address space is divided into pages of the same size.
Usually with this type of memory management, each job runs in its own address space.
The advantage of paged memory management is that there is no external fragmentation, though
there is a small amount of internal fragmentation.
Below is an illustration representing paged memory management. It depicts how logical page
addresses in memory can be mapped to physical addresses. The operating system uses the base
address as a measure to find addresses. The base address is the starting address of a particular
memory block. The CPU generates addresses according to the program being run; such an
address is called a logical address. This address is added to the base address to form the
physical address. To translate a logical address into the corresponding physical address, we
divide the logical address into a page number and an offset. An offset is a value added to a base
address to produce a second address.
For example, if B represents address 200, then the expression B+10 signifies address 210; the
10 in the expression is the offset. Specifying addresses using an offset is called relative
addressing, because the resulting address is relative to some other point. The offset is also
known as the displacement. In the figure below, p represents the page number and d the offset.
The page number is used as an index into the page table, and the entry gives the corresponding
frame number, represented by 'f' in the page table. The frame number is concatenated with the
offset to obtain the corresponding physical address.
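The page-number/offset translation described above can be sketched in a few lines. The page size of 1024 bytes and the page-table contents are assumed values for illustration.

```python
PAGE_SIZE = 1024  # bytes per page/frame; an assumed value for illustration

# page_table[p] = f : logical page p resides in physical frame f
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into (page number p, offset d) and map it
    to a physical address via the page table, as described above."""
    p, d = divmod(logical_address, PAGE_SIZE)   # page number, offset
    f = page_table[p]                           # frame number 'f' from table
    return f * PAGE_SIZE + d                    # concatenate frame and offset

print(translate(1034))  # page 1, offset 10 -> frame 2 -> 2*1024 + 10 = 2058
```

With a power-of-two page size, the divmod is just a bit shift and a mask in hardware, which is why pages and frames are sized this way in practice.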
System initialization or start up. When an OS is booted, typically several processes are
created.
A running process issues a process creation system call. Most of the time, a running
process will issue system calls to create one or more new processes to help it do its job.
Creating new processes is useful when the work to be done can be easily formulated
as several related, but otherwise largely independent, processes.
Request to create a new process is issued by user. In interactive systems, users can
start a program by clicking or double clicking the icons or thumbnails or even by typing a
command on command prompt.
Batch job initiation. Users can submit batch jobs to the system (most of the times,
remotely). When OS knows that it has the resources to run another job, a new process is
created and the next job is run from the input queue.
Several processes are created when an operating system boots. Some of them are foreground
processes while others are the background processes. Foreground processes interact with a
user and perform work for users. In a multiprocessing environment, the process that accepts
input from the keyboard or other input device at a particular point in time is sometimes called the
foreground process; any process that actively interacts with a user is a foreground process.
Background processes are not associated with a particular user, but instead perform some
specific function. Email notification is one example of a background process; the system clock
displayed at the bottom right corner of the status bar in Windows is another.
Some of the common reasons for process termination are:
Normal completion
Memory unavailable
I/O failure
Thereafter, at a certain point of time, a disk interrupt occurs and the driver
detects that P's request is satisfied.
Later, at some point of time, the operating system looks for a ready job to run and
picks the process/job P from the queue.
A preemptive scheduler in figure 4.7 below has the dotted line 'Preempt', whereas a
non-preemptive scheduler doesn't.
The number of processes changes only for two arcs, namely create and terminate.
1. Preemptive scheduling:
In a computer, tasks are assigned priorities. At times it is necessary to run a certain high-priority
task before another task, even if that other task is in the running state. Therefore, the running task
is interrupted for some time, put into either the blocked or the suspended state, and resumed
later, when the priority task has finished its execution. This process is called preemptive
scheduling.
2. Non preemptive scheduling:
In non-preemptive scheduling, a running task executes till it completes fully. It cannot be
interrupted. That means when a process enters the running state, it cannot be deleted from the
scheduler until it finishes its service time.
The figure 4.7 above shows various states of a process. Initially the process is created. Once
created, it goes to the ready state. Ready state means the process is ready to execute. It has
been loaded into the main memory and is waiting for execution on a CPU. There can be many
ready processes waiting for execution at a given point of time. A queue of processes which are
ready to execute gets created in memory. One of the processes from that queue is picked up for
execution and its state gets changed to running. A running process can be blocked. Blocked
state means a process is blocked on some event.
A process may be blocked for various reasons, such as when it has exhausted the CPU time
allocated to it, or when it is waiting for an event to occur. A blocked process can either move to
the ready state or move to the suspended state. In systems that support virtual
memory, a process may be swapped out, that is, it would be removed from main memory and
would be placed in virtual memory by the mid-term scheduler. This is called suspended state of a
process. From here the process may be swapped back in to the ready state. Such state is called
ready suspended state. Processes that are blocked may also be swapped out. Such a state,
where a process is both swapped out and blocked, is called the blocked suspended state.
Suspended processes can be sent back to the ready state only once they are released. This
cycle continues until a process finishes its execution, i.e. is terminated. A process may leave the
running state by completing its execution or by being killed explicitly. In either case, we say that
the process is terminated.
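The life cycle described above can be sketched as a transition table. The state and event names follow the text, but the table itself is an assumption for illustration, not a specification of any particular operating system.

```python
# A sketch of the process life cycle of figure 4.7 as a transition table:
# (current state, event) -> next state. Illegal transitions raise KeyError.
TRANSITIONS = {
    ("created", "admit"):                "ready",
    ("ready", "dispatch"):               "running",
    ("running", "preempt"):              "ready",
    ("running", "block"):                "blocked",
    ("blocked", "event_done"):           "ready",
    ("blocked", "swap_out"):             "blocked_suspended",
    ("ready", "swap_out"):               "ready_suspended",
    ("ready_suspended", "swap_in"):      "ready",
    ("blocked_suspended", "event_done"): "ready_suspended",
    ("running", "exit"):                 "terminated",
}

def run(events, state="created"):
    for e in events:
        state = TRANSITIONS[(state, e)]   # follow one arc per event
    return state

print(run(["admit", "dispatch", "block", "event_done", "dispatch", "exit"]))
```

The sample event sequence walks one process through ready, running, and blocked before it finally terminates, matching the cycle the paragraph describes.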
Figure 4.8 above shows a typical hierarchical file system structure. Operating system keeps track
of following tasks for providing efficient file management:
Provide fast and simple algorithms to write and read files in co-operation with device
manager.
Allocate and de-allocate files so that files can be processed in co-operation with process
manager.
Provide programs and users with simple commands for handling files.
On the storage medium, a file is saved in blocks or sectors. To access these files, the file
manager and the device manager work together. The device manager knows where to find each
sector on disk, but only the file manager has a list telling in which sectors each file is stored. This
list is called the File Allocation Table (FAT). FAT was in use under DOS for a long time, and
some of its variants are still used by Win95 and Win98.
Below are the different ways of allocating files:
Contiguous allocation:
In this type of file allocation, at the time of file creation, a single set of blocks is allocated to a file.
Each file is stored contiguously in sectors, one sector after another. The advantage is that the
File Allocation Table has a single entry for each file, indicating the start sector, the length, and the
name. Moreover, it is also easy to get a single block, because its address can be easily
calculated. The disadvantage is that it can be difficult to find a sufficiently large run of contiguous
blocks. Contiguous file allocation is nowadays only used for tapes and recordable CDs.
Block-wise allocation:
With this type of allocation, the blocks of a file can be distributed all over the storage medium.
The File Allocation Table (FAT) lists not only all files, but also has an entry for each sector the file
occupies. As all information is stored in the FAT, and no assumption is made about the
distribution of the file, this method of allocation is at times also known simply as FAT.
The advantage is that it is very easy to get a single block, because each block has its own entry
in the FAT. It is also a very simple allocation method: no overhead is produced and no search
method for free blocks is needed. The disadvantage is that the FAT can grow to an enormous
size, which can slow down the system, and compaction would be needed from time to time. This
type of file allocation is used for disk allocation in MS-DOS.
Chained allocation:
With chained file allocation, only the first block of a file gets an entry in the FAT. A block consists
of sectors. The first sector holds data as well as a pointer at its end that points to the next sector
to be accessed (or indicates that it was the last). That is, each sector (unless it is the last) has a
pointer at its end to the next sector that should be accessed. The advantage again is that the
FAT has a single entry for each file, indicating the position of the first sector and the file name.
There is no need to store the files contiguously.
The disadvantage of chained allocation is that it takes much time to retrieve a single block,
because its location is neither stored anywhere nor can it be calculated. If we want to access a
particular sector, all preceding sectors have to be read in order, one after another; each read
reveals the location of the next block. The Unix i-node is an example of this type of allocation.
i-nodes are data structures (refer Glossary) of a file system used to store all the important
properties of each file: the owner's group id and user id, the access permissions on the file, the
size of the file, and more. Unix is an operating system whose file system uses i-nodes as its data
structure.
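The chain traversal described above can be sketched as follows. The file name, the sector numbers, and the end-of-chain marker (-1) are invented for illustration.

```python
# A sketch of chained allocation: the FAT holds only the first sector of
# each file; every sector stores a pointer to the next one, so finding
# sector N requires reading all N-1 sectors before it.
fat = {"report.txt": 4}                 # file name -> first sector
next_sector = {4: 9, 9: 2, 2: -1}       # per-sector "next" pointers (-1 = end)

def sectors_of(name):
    """Follow the pointer chain to list every sector a file occupies."""
    chain, s = [], fat[name]
    while s != -1:
        chain.append(s)
        s = next_sector[s]              # each read reveals the next sector
    return chain

print(sectors_of("report.txt"))
```

Note that reaching the last sector (2) required visiting 4 and 9 first, which is precisely the retrieval cost the paragraph above points out.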
Indexed allocation:
With indexed file allocation, too, only the first block of a file gets an entry in the FAT. No data is
stored in the first sector; it only holds pointers to where the file is on the storage medium. That is
why the first block is known as the index block: it just contains the pointers, or index, to the file's
location on the storage medium. Here too the FAT contains a single entry for each file, indicating
the file name and the position of the first sector. It is easy to retrieve a single block, because the
information about its location is stored in the first block.
The disadvantage of indexed allocation is that for each file an additional sector is needed to keep
track of the location of the file. A very small file thus always occupies at least two blocks, whereas
the data could easily have fit in one block. This results in wasted secondary storage space.
Indexed allocation is implemented in all UNIX systems. It is reliable as well as fast. Also,
nowadays, the wastage of storage space does not matter much.
The main concern of file system management is to provide a strategy that keeps the FAT from
growing too large and that makes it possible to retrieve a particular sector of a file without
wasting much storage space.
The above diagram depicts a typical deadlock situation that serially used devices can cause. A
deadlock is a situation where each process in a set is waiting for an event that only another
process in the set can cause.
For example, in the above diagram, process Task 1 requests a mutex or lock on an object which
is already held by process Task 2. In turn, process Task 2 needs a resource to complete its work
which is already held by process Task 1. Neither process releases its resources or mutex; each
waits for the other to release a mutex so that it can complete its pending task. This situation
causes an indefinite wait for both processes, leading to a situation known as deadlock.
Mutex is short for Mutual Exclusion Object. A mutex is a logical unit: a program object that allows
multiple program threads to share the same resource, but not simultaneously. Serializing access
to a file is a typical use of a mutex. A mutex with a unique name is created when the program
starts. A mutex must be locked by a thread that needs the resource; when the resource is no
longer needed, the mutex is unlocked.
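A minimal sketch of mutex usage, here with Python's threading.Lock standing in for the mutual exclusion object; the shared counter and the thread counts are invented for illustration.

```python
import threading

# A sketch of mutual exclusion with a lock: only one thread at a time may
# update the shared counter, so no increments are lost.
counter = 0
mutex = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with mutex:            # lock before touching the shared resource...
            counter += 1       # ...and unlock automatically afterwards

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment happened under the mutex
```

Without the lock, two threads could read the same old value of the counter and each write back old value + 1, silently losing one increment; holding the mutex around the update rules that interleaving out.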
The OS helps in dealing with deadlock situations up to a certain extent. Below are some of the
strategies for dealing with deadlock:
The ostrich algorithm, i.e. ignoring the problem altogether. The ostrich algorithm is a
strategy of ignoring potential problems on the basis that they may be exceedingly rare. It
assumes that ignoring the problem is more cost-effective than allowing the problem to
occur and then attempting to prevent it. Deadlock may occur very infrequently, so this
strategy can be followed if the cost of detection or prevention is not worth the time and
money spent.
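Beyond ignoring the problem, a commonly taught alternative is detecting deadlock by looking for a cycle in the wait-for graph. A minimal sketch follows, using the two-task circular wait from the figure as input; the task names and graph representation are illustrative.

```python
# A sketch of deadlock detection on a wait-for graph: an edge A -> B means
# process A is waiting for a resource held by process B. A cycle in this
# graph is exactly the circular-wait situation described above.
def has_deadlock(wait_for):
    visited, on_path = set(), set()

    def dfs(p):
        if p in on_path:                 # back at a process on the current
            return True                  # chain: circular wait detected
        if p in visited:
            return False
        visited.add(p)
        on_path.add(p)
        cyclic = any(dfs(q) for q in wait_for.get(p, ()))
        on_path.discard(p)
        return cyclic

    return any(dfs(p) for p in wait_for)

# Task 1 waits on Task 2's mutex and vice versa, as in the figure:
print(has_deadlock({"Task1": ["Task2"], "Task2": ["Task1"]}))
print(has_deadlock({"Task1": ["Task2"], "Task2": []}))
```

The first graph contains the Task1/Task2 cycle and is reported as deadlocked; the second has no cycle, since Task2 is not waiting on anyone and will eventually release its resource.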
4.2.1. Multiplexing
Multiplexing is a method by which multiple analog data signals/digital data streams are combined
into one signal over a shared medium. Multiplexing is used to share an expensive resource and
thus help reduce the cost.
The communication channel used for transmission of a multiplexed signal may be a physical
transmission medium. Multiplexing works on the principle of division of signals: it divides the
capacity of a high-level communication channel into several low-level logical channels, one for
each message signal or data stream to be transferred. The reverse process, known as
demultiplexing, extracts the original channels or signals on the receiver side.
A device that performs the task of multiplexing is called a multiplexer (MUX), and a device that
performs the task of demultiplexing is known as a demultiplexer (DEMUX).
Resource management basically includes multiplexing the resources in two ways:
Time Multiplexing
When a particular resource is time multiplexed, different programs or users take turns using that
resource, one after another. Turns for resource usage are decided by a predefined operating
system algorithm. A printer is one of the best examples of time multiplexing: on a network printer,
different users get their turn to use the printer one after another, based on time.
Space Multiplexing
When a particular resource is space multiplexed, the resource is divided up in the available
space: each user gets a part of the resource, and the time period for which a particular part is
used no longer matters. The hard disk and main memory are examples of space multiplexing.
These resources are most of the time divided between all the users.
To summarize, the operating system is one of the major programs without which it would be
almost impossible for us to use the computer the way we do today. Without an operating system,
a computer would be nothing but a machine executing raw binary and machine language data.
The foundation of the TATA group was laid by Jamsetji Nusserwanji Tata in 1868, exactly
100 years before TCS was founded.
In 1938, JRD Tata was appointed as the chairman, and he led the TATA Group for the next 53
years. During his time, the TATA Group expanded regularly into new spheres of business. The
more prominent of these ventures were Tata Chemicals, Tata Motors, Tata Industries, Voltas,
Tata Tea, Tata Consultancy Services and Titan Industries.
In 1991, Ratan Tata took over as chairman of the TATA Group. Under his stewardship, Tata Tea
acquired Tetley, Tata Motors acquired Jaguar Land Rover and Tata Steel acquired Corus,
acquisitions which turned the TATA Group from a largely India-centric company into a global
business. Ratan Tata retired from all executive responsibility in the TATA Group in December
2012 and was succeeded by Cyrus Mistry, the present Chairman.
To learn more about the Tata Group's 150+ years of history, please click on Tata Group History.
Your dedicated team will have domain and technology capabilities resulting in specialized
services / solutions
Our engagement models are uniquely flexible, enabling design that fits the size and scale
of your operations
You have access to partnership gain-share and risk-share models focused on your
success
Your inputs and our expertise are merged through our Centers of Excellence (COEs) to
deliver leading solutions
You benefit from TCS' combining traditional IT and remote infrastructure services with
knowledge-based services
You realize accelerated agility and TCO reduction through our services integration model
Your business innovation is fueled by our dedicated labs on advanced and emerging
technology trends and scientific research
Co-Innovation Network (COIN)™
COIN™ is a rich and diverse network that drives innovation in an open community
Image Processing
Biometrics
Enterprise Security
Analytics
Dynamic Pricing
Established in 1968
TCS Mission
To help customers achieve their business objectives by providing innovative, best-in-class consulting, IT solutions and services.
TCS Values
Leading change
Integrity
Excellence
OHSAS 18001: Helps an organization strengthen its health and safety policies and
processes
ISO 9000 family: Helps an organization strengthen its quality management system
Services
TCS helps clients optimise business processes for maximum efficiency and galvanise their IT
infrastructure to be both resilient and robust. TCS offers the following solutions:
Assurance Services
Consulting
Enterprise Solutions
IT Infrastructure Services
IT Services
Software
TCS BaNCS
Government Healthcare
High Tech
Insurance
Life Sciences
Manufacturing
Telecom
In 2012, TCS bagged many awards across different sectors. Some of these achievements are
listed below, though the list is endless.
I. BT 500 ranks TCS as the most valuable company of 2012
TCS was ranked as the most valuable company of 2012 in the recently released BT 500 list. This
is the first time that our company has emerged on top of the rankings list released each year by
the Business Today magazine. The issue dated November 11, 2012 credits CEO N. Chandra for
much of this success.
II. N. Chandra wins Best CEO of the Year award at Forbes India Leadership Awards
CEO & MD, N. Chandra, won the Best CEO of the Year award at the Forbes India Leadership
Awards 2012. He was presented this award at a function held in Mumbai on 28 September 2012.
Our CEO won the honour for balancing the aggression needed to achieve stretch goals with
conservatism, and for building a solid team of next-generation managers.
III. CEO & MD, N. Chandra, won CNBC's Asian Business Leader of the Year
N. Chandra won the Asian Business Leader of the Year award on 16 November 2012. Our CEO
received this recognition at the 11th edition of CNBC's Asia Business Leaders Awards (ABLA),
held in Bangkok, Thailand.
IV. CEO & MD, N Chandra, won Pathfinder CEO of 2012 by National HRD Network (NHRDN)
TCS CEO & MD, N. Chandra was presented with the Pathfinder CEO award by National HRD
Network (NHRDN) during its 16th National Conference held from 29 November to 1 December
2012 at Hyderabad in India. NHRDN recognises individuals and organisations who have made
noteworthy contributions in the area of Human Resource Development in the Corporate sector,
Academic sphere and broader Business and Social arena.
V. TCS received the Forbes Asia 'Fabulous 50' Award
TCS was presented the Forbes Asia 'Fabulous 50' Award at an award ceremony held in Macau,
China on 4 December 2012. This came about after Forbes Asia, a leading pan-Asia business
magazine, listed TCS in its prestigious and influential annual 'Asia's Fab 50' list of the most
compelling companies in Asia, earlier this year.
VI. TCS Ex-CFO, S. Mahalingam, won the CFO of the Year award at the CFO Innovation Asia Awards on
28 November 2012
TCS Finance swept the two main categories at the CFO Innovation Asia Awards on 28
November 2012 in Singapore, with TCS CFO S. Mahalingam winning the marquee 'CFO of the
Year' award and the TCS Finance team being recognised as 'Finance Team of the Year'. The
event was attended by over 120 Asia-based CFOs and top-tier financial executives and featured
over 15 award categories, pitting the biggest names in the finance arena against one another,
including banks (HSBC, Citi), accountancy firms (KPMG, PwC, Deloitte) and service providers
(Accenture, Infosys).
Effective communication plays an important role in our day to day life. The success of our
communication is dependent on our ability to effectively convey our thoughts, ideas, and
emotions. In this process, believe it or not, active listening plays an important role in improving
our relationships with those around us and reducing arguments and misunderstandings.
Let us think of situations in real life where you have felt that people were not listening to you. Did it
not create emotions such as confusion, worry and frustration in you? Don't you think that
every time you choose not to listen, you create a similar situation for the speaker?
In our day to day life we come across many situations where ineffective listening leads to
misunderstanding between individuals. This can happen because we are busy trying to frame our
response to the other person, or because our own subconscious thought process is interfering
with our listening process.
For example, while asking for direction in a new locality, you would listen very keenly because
you do not want to lose your way in a strange place.
However, there are also situations in which we may not listen actively. A case in point: while
dancing to your favorite song, you are more focused on the tune and rhythm of the music than on
the actual lyrics.
With this we can come to a very obvious conclusion:
"Hearing is a physical ability while listening is a skill!"
Need to understand
Want to learn
Choose to listen
Get involved in what is being said and connect with the speaker.
Maintain eye contact and lean towards the speaker. It shows your interest and attention
towards what the speaker is saying.
Pay attention to each point that is being delivered. Good listeners tune out
distractions and focus on the speaker and the message.
Lean forward in your chair. You could also turn in your chair to focus on the speaker and
avoid any disturbances.
Every interaction with any speaker is an opportunity for you to learn. Creative people are
always on the lookout for new information.
It is important to keep an open mind and listen without filters. Filters tend to obscure
messages. Prejudices and assumptions influence our willingness and ability to hear.
Be objective in your approach and look beyond your barriers of listening. Wait till you
have listened to the whole message.
Communication is a two way process which needs the active participation of both the
speaker and the listener. A good listener gets the best out of the speaker. Get beyond the
manner of delivery to the underlying message. Do not judge the message by the style of
delivery.
Maintain a good posture.
Posture conveys a great deal about the attitude of the listener. Avoid restless, distracted
movements. Sit in a comfortable position.
Empathize!
Try to look at things from the speaker's perspective by keeping an open mind to what is
being said.
Hold back any judgments, the urge to jump to conclusions, the urge to speak, or the "I know all
this" attitude. You will be amazed at how well you understand the speaker once you stop
criticizing!
Do not limit your listening to verbal cues alone but, listen to non-verbal cues as well. This
would include being sensitive to the non-verbal cues such as body language, gesture,
expressions, tone of voice etc. This would help you to interpret the message more
effectively.
Ensure proper understanding - the listener is attuned to both the verbal and non-verbal
cues of the speaker.
Improve relationship - an attentive listener is able to create a good rapport with the
speaker. This can be done by having a keen interest in what is being said and mirroring a
positive body language.
Conclusion
Developing good listening skills takes time and effort. We have to make a conscious effort to
overcome our barriers and filters while listening. While listening, we need to listen with the
intention to understand, reflect on both the message and the non verbal cues and paraphrase to
ensure understanding.
Did you know that we actually spend a lot of time listening? Out of the total time we spend
communicating, statistics say we spend 9% of the time writing, 16% of the time reading, 30% of
the time speaking and 45% of the time listening! That's all the more reason why we need to pay
attention to and work on our listening skills. So that's all on listening for now. I hope whatever we
have discussed has been useful and will improve your listening skills! Thank you!