SFM Notes
Mergers: Introduction
A merger is a financial tool used to enhance long-term profitability by expanding a
company's operations. Mergers occur with the mutual consent of the merging companies,
as distinct from acquisitions, which can take the form of a hostile takeover.
Business laws in the US vary across states, and hence companies have limited options
to protect themselves from hostile takeovers. One way a company can protect itself from
hostile takeovers is by adopting a shareholder rights plan, alternatively known as a
"poison pill". If we trace back through history, we observe that very few mergers have
actually added to the share value of the acquiring company. Corporate mergers may
promote monopolistic practices by reducing costs, taxes, etc., and such activities may go
against public welfare. Hence mergers are regulated and supervised by the government;
for instance, in the US any merger requires the prior approval of the Federal Trade
Commission and the Department of Justice. In the US, regulation of mergers began with
the Sherman Act in 1890.
In the pure sense of the term, a merger happens when two firms, often of about the same
size, agree to go forward as a single new company rather than remain separately owned
and operated. This kind of action is more precisely referred to as a "merger of equals."
Both companies' stocks are surrendered and new company stock is issued in its place. For
example, both Daimler-Benz and Chrysler ceased to exist when the two firms merged,
and a new company, DaimlerChrysler, was created.
A merger is the combining of two or more companies, generally by offering the
stockholders of one company securities in the acquiring company in exchange for the
surrender of their stock.
“A merger is a combination of two or more corporations in which only one corporation
survives and the merged corporations go out of business”.
Benefits of Mergers:-
• Staff reductions-As every employee knows, mergers tend to mean job losses.
Consider all the money saved from reducing the number of staff members from
accounting, marketing and other departments.
Acquisitions
Acquisitions or takeovers occur between a bidding company and a target company. They
may be either hostile or friendly. A reverse takeover occurs when the target firm is
larger than the bidding firm. In the course of an acquisition, the bidder may purchase the
shares or the assets of the target company.
Business firms opt for mergers and acquisitions mostly to consolidate a fragmented
market and to increase their operational efficiency, which gives them a competitive edge.
Nations across the globe have promulgated mergers and acquisitions laws to monitor the
functioning of the business units therein. An estimate made in 2007 put the number of
competition laws worldwide at 106; these laws contain merger control provisions.
While most mergers and acquisitions increase the operational efficiency of business
firms, some can also lead to a build-up of monopoly power. The anti-competitive effects
are achieved either through coordinated effects or through unilateral effects. Sometimes
mergers and acquisitions tend to create a collusive market structure.
However, free and fair competition is seen to maximize the consumers' interests both in
terms of quantity and price.
Objectives of Mergers and Acquisitions
A merger refers to the process of combining two companies, whereby a new company is
formed. An acquisition refers to the process whereby one company simply purchases
another; in this case no new company is formed. The benefits of mergers and
acquisitions are numerous.
Mergers and acquisitions generally succeed in generating cost efficiency through the
implementation of economies of scale. They may also lead to tax gains and can even
lead to revenue enhancement through market share gains.
Bird's-Eye View of the Benefits Accruing from Mergers and Acquisitions
The principal benefits from mergers and acquisitions can be listed as increased value
generation, increase in cost efficiency and increase in market share.
Mergers and acquisitions often lead to an increased value generation for the company. It
is expected that the shareholder value of a firm after mergers or acquisitions would be
greater than the sum of the shareholder values of the parent companies.
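In symbols (the notation V_A, V_B, V_AB is assumed here for illustration, not taken
from the text): if V_A and V_B are the stand-alone shareholder values of the two parent
companies and V_AB is the value of the combined firm, the expected synergy is

    Synergy = V_AB - (V_A + V_B) > 0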
An increase in market share is one of the plausible benefits of mergers and acquisitions.
In case a financially strong company acquires a relatively distressed one, the resultant
organization can experience a substantial increase in market share. The new firm is
usually more cost-efficient and competitive as compared to its financially weak parent
organization.
It can be noted that mergers and acquisitions prove to be useful in the following
situations:
Firstly, when a business firm wishes to make its presence felt in a new market. Secondly,
when a business organization wants to avail itself of some administrative benefits.
Thirdly, when a business firm is in the process of introducing new products, typically
developed by its R&D wing.
A) Expansion:-
i) Merger and acquisitions
ii) Tender offers
iii) Joint Venture
B) Contraction:-
i) Spin-offs
ii) Split-offs
iii) Divestitures
iv) Equity carve-out
C) Corporate Control:-
i) Premium Buyback
ii) Standstill Agreements
iii) Anti-takeover Amendments
iv) Proxy contests
D) Changes in Ownership Structures:-
i) Exchange offers
ii) Share repurchases
iii) Going private
iv) Leveraged buyout
A-Tender Offer
The acquirer pursues a takeover (without the consent of the acquiree) by making a tender
offer directly to the shareholders of the target company to sell their shares. This offer is
made for cash.
Joint Venture
This is an agreement between two or more companies where there will be an agreed
contribution and participation of the respective companies.
B-Spin Off
This takes place when part of a company's undertaking is transferred to a newly formed
or an existing company. Some or all of the shares of the first company are also
transferred to the new company. The remainder of the first company's undertaking
continues to be vested in it, and the holding of the main company's shareholders is
reduced to that extent.
Split Off
This occurs when equity shares of a subsidiary company are distributed to some of the
parent company’s shareholders in exchange for their holdings in parent company.
Split Up
This is a division of a company into two or more parts through a transfer of stock, and
the parent company ceases to exist.
Divestiture
This is the sale, for cash or for securities, of a segment of a company to a third party,
i.e., an outsider.
Equity Carve-Out
This is a type of divestiture, different from a spin-off. It resembles the IPO of some
portion of the equity stock of a wholly owned subsidiary by the parent company. Some of
the subsidiary's shares are offered for sale to the general public, increasing cash inflow
without loss of control.
C-Corporate Control:
Corporate control involves obtaining control over the management of the firm. The
various techniques of obtaining corporate control are as follows:
Premium Buyback- The repurchase, at a premium over the market price, of the block of
shares held by a potential acquirer, in order to ward off the takeover attempt.
Standstill Agreement- A voluntary contract, often made alongside a premium buyback, in
which the potential acquirer agrees not to acquire further shares of the target for a
specified period.
D-Changes in Ownership Structures:
Exchange Offers: These give one or more classes of security holders the right or option
to exchange part or all of their holdings for a different class of securities of the firm.
Share Buyback/Repurchases: Section 77A (of the Indian Companies Act) allows
companies to buy back their own shares as well as other specified securities (so the
acquirer does not get the chance to buy those shares).
Going Private: This refers to the transformation of a public corporation into a privately
held firm. It involves the purchase of the entire equity interest in a previously public
corporation by a small group of investors.
Leveraged Buyouts: This is the acquisition of a company financed largely by borrowed
funds; when the acquirers are the company's own management personnel it is also known
as a management buyout. Management may raise capital from the market or from
institutions to acquire the company on the strength of its assets.
Types of Mergers:-
Horizontal Mergers
Vertical Mergers
Conglomerate Mergers
Horizontal Mergers:-
In a horizontal merger, one firm acquires another firm that produces and sells an identical
or similar product in the same geographic area and thereby eliminates competition
between the two firms.
Horizontal mergers are those mergers where companies manufacturing similar kinds of
commodities or running similar types of businesses merge with each other. The principal
objective behind this type of merger is to achieve economies of scale and eliminate
competition between the merged firms.
Nevertheless, horizontal mergers cannot ensure a market for the product or a steady,
uninterrupted supply of raw materials. Horizontal mergers can sometimes result in
monopoly and the concentration of economic power in the hands of a small number of
commercial entities.
Vertical Mergers:-
Vertical mergers refer to a situation where a product manufacturer merges with the
supplier of its inputs or raw materials. It can also be a merger between a product
manufacturer and the product's distributor.
Vertical mergers may violate the competitive spirit of markets. It can be used to block
competitors from accessing the raw material source or the distribution channel. Hence, it
is also known as "vertical foreclosure". It may create a sort of bottleneck problem.
This type of merger is undertaken to secure the following benefits:
o Technological economies
o Elimination of transaction costs
o Improved planning for inventory and production
o Avoidance of transportation costs
o Reduced working capital investment
o Reduction of inventory/stock
Conglomerate Mergers:-
By definition, a conglomerate merger is a type of merger whereby the two companies
that merge with each other are involved in different sorts of businesses. The importance
of conglomerate mergers lies in the fact that they help the merging companies perform
better than before.
i) Financial Conglomerates:-
These play a role in the operating decisions of, and provide staff expertise and staff
services to, the operating entities.
Difference between Market & Product Extension:-
• The difference between a market extension merger and a product extension merger
lies in the fact that the latter is meant to add to the existing variety of products and
services offered by the respective merging companies, while in the case of the
former the two merging companies are dealing in similar products.
• In the case of a market extension merger the two merging companies are operating
in different markets, whereas in a product extension merger the two merging
companies are operating in the same market.
There are several reasons as to why a company may go for a conglomerate merger.
Among the more common reasons are adding to the share of the market that is owned by
the company and indulging in cross selling. The companies also look to add to their
overall synergy and productivity by adopting the method of conglomerate mergers.
There are several advantages of the conglomerate mergers. One of the major benefits is
that conglomerate mergers assist the companies to diversify. As a result of conglomerate
mergers the merging companies can also bring down the levels of their exposure to risks.
• Industry roll-ups- consolidation of fragmented industries.
• Shift from overcapacity areas to areas with more favorable sales-to-capacity ratios.
• Combined company can better meet customers’ demands for a wide range of
services; strengthens distribution systems.
Mergers and Acquisitions Laws in India
Indian competition law grants a maximum time period of 210 days for the determination
of the combination, which comprises acquisitions, mergers, amalgamations and the like.
One needs to take note of the fact that this stated time frame is clearly distinct from the
minimum compulsory wait period for applicants.
As per the law, the compulsory period of waiting for applicants can either be 210 days
starting from the day of notice filing or receipt of the Commission's order, whichever
occurs earlier.
The threshold limits for firms entering business combinations are substantially high
under the Indian law. The threshold limits are set either in terms of asset value or in
terms of the firm's turnover. Indian threshold limits are greater than those of the EU, and
twice as high as those of the UK.
The Indian law also provides for the modern-day phenomenon of cross-border mergers
and acquisitions. As per the law, a domestic nexus is a prerequisite for the notification of
this type of combination.
It can be noted that the Competition Act, 2002 has undergone a recent amendment,
which replaced the voluntary notification regime with a mandatory regime. Of the 106
countries that possess competition laws, only 9 are thought to have a voluntary
notification regime. Voluntary notification regimes are generally associated with business
uncertainties.
The Indian Income Tax Act provides tax concessions for mergers/demergers between
two Indian companies. These mergers/demergers need to satisfy the conditions of section
2(19AA) and section 2(1B) of the Indian Income Tax Act, as per the applicable
situation.
In the case of an Indian merger, when a transfer of shares occurs, the companies are
entitled to a specific exemption from capital gains tax under the Indian Income Tax Act.
These companies can be either of Indian origin or foreign.
A different set of rules is, however, applicable to 'foreign company mergers': situations
where an Indian company owns the new company formed out of the merger of two
foreign companies.
It can be noted that for foreign company mergers the share allotment in the merged
foreign company in place of shares surrendered by the amalgamating foreign company
would be termed as a transfer, which would be taxable under the Indian tax law.
Also, as per the conditions set under section 5(1), the Indian Income Tax Act states that
global income accruing to an Indian company is included under the head 'scope of
income' for that company.
International mergers and acquisitions are growing day by day. These mergers and
acquisitions refer to those mergers and acquisitions that are taking place beyond the
boundaries of a particular country. International mergers and acquisitions are also termed
as global mergers and acquisitions or cross-border mergers and acquisitions.
Globalization and worldwide financial reforms have collectively contributed towards the
development of international mergers and acquisitions to a substantial extent.
International mergers and acquisitions are taking place in different forms, for example
horizontal mergers, vertical mergers, conglomerate mergers, congeneric mergers, reverse
mergers, dilutive mergers, accretive mergers and others.
International mergers and acquisitions are performed for the purpose of obtaining some
strategic benefits in the markets of a particular country. With the help of international
mergers and acquisitions, multinational corporations can enjoy a number of advantages,
which include economies of scale and market dominance.
International mergers and acquisitions play an important role behind the growth of a
company. These deals or transactions help a large number of companies penetrate into
new markets fast and attain economies of scale. They also stimulate foreign direct
investment or FDI.
The reputed international mergers and acquisitions agencies also provide educational
programs and training in order to develop the expertise of the merger and acquisition
professionals working in the global mergers and acquisitions sector.
The rules and regulations regarding international mergers and acquisitions keep on
changing constantly and it is mandatory that the parties to international mergers and
acquisitions get themselves updated with the various amendments. Numerous investment
bank professionals, consultants and attorneys are there to offer valuable and
knowledgeable recommendations to the merger and acquisition clients.
Usually, the following methods are implemented for funding international mergers and
acquisitions:
Following are the instances of the major international mergers and acquisitions:
• The merger of British Petroleum (BP) with Amoco (erstwhile Standard Oil of
Indiana)
• The acquisition of Mannesmann AG by Vodafone Airtouch PLC
• The merger of Exxon with Mobil (The name of the company formed as a result
of the merger is ExxonMobil)
• The acquisition of AirTouch Communications by the Vodafone Group
• The acquisition of Compaq by Hewlett-Packard
• The acquisition of Shell Transport & Trading Company by Royal Dutch
Petroleum Company
• The merger of Bank One Corporation with JPMorgan Chase & Company
The following elements influence the international mergers and acquisitions from
many aspects:
• Corporate governance
• Company acts
• The capacity of average workers
• Expectation of the consumers
• Political features of a country
• Tradition and culture of a country
Module II: Strategic Financial Management
* Strategic cost management
* Activity-based costing
* Financial management in knowledge-intensive companies
* Financial innovations and financial engineering
* Management leverage
* Leveraged buyouts: operations, norms for financing leveraged buyouts
* Share repurchase
* Corporate control mechanisms
Introduction:
Many of the terms are not new: cost reduction, target costing, total cost management,
cost avoidance. These efforts have been pursued in several organizations. But how many
purchasing and supply organizations have adopted these tactics for short-term gain, and
how many have taken a strategic approach that spans several links in the supply chain?
More and more will be taking the strategic approach, focusing on strategic cost
management. It has nowadays become a buzzword among corporate houses.
Corporate houses are now searching out for ways to manage their huge conglomerates.
The downsizing and reengineering initiatives so prevalent in the early '90s have largely
proved financially short-sighted. With hindsight, we now know that almost half of
downsizing companies reported lower profits the year following their cutbacks. Cost-
cutters' stock prices grew more slowly than those of companies which successfully grew
both their top and bottom lines. Less than one in five cost-cutters were subsequently able
to put their companies back on a profitable growth track. Pressures on costs come from
many external quarters, including shifting customer priorities, the emergence of new
competitors and channels, and increasingly inquisitive financial markets.
Strategic cost management can be defined as "scrutinizing every process within your
organization, knocking down departmental barriers, understanding your suppliers'
business, and helping improve their processes".
There are three basic business areas where strategic cost management can be applied.
Strategy:
A strategy, in general terms, refers to a plan of action that will shape the direction of the
organization's success. Companies of late have realized the importance of a clear
articulation of strategy and of its effective implementation. Before formulating any
strategy, management should consider whether the business model is still relevant or
needs to be changed, and whether the objectives of the business will be accomplished
through the laid-out strategy.
Operations:
By setting priorities according to their significance, tasks can be carried out effectively
and efficiently.
Organization:
The company should check time and again whether it is allocating its limited resources
to the businesses which generate more value for the entire organization. Resources as
such are the limiting factors for any organization; that is why the company should focus
on the structure of the business and decide well in advance whether it should own all
resources or not.
The Strategic cost management framework provides a clear plan of attack for addressing
costs and decisions that affect them. Following are the three core components of this
framework.
Core Functions:
Core functions elaborate on the nature of the business. They answer the very obvious
question: what type of business are we in? At this stage the company has to clearly
identify its courses of action with respect to strategy planning, research and
development, and product development.
This step places more emphasis on value addition through various activities such as
marketing, sales, manufacturing, quality assurance and control, sourcing, procurement
and logistics, engineering and maintenance, and customer service and technical support.
Excellence in these activities can create a competitive advantage for the company if it
harnesses its resources more intelligently than its competitors.
Support Functions:
As the name suggests, to support the core activities of the business some secondary
activities are to be carried out, which include IT, finance and accounting, HR
management and general administration. These activities facilitate the performance of
the core activities so that the goals of the business can be accomplished successfully
without wasting limited resources. They also help in synchronizing the different tasks
which are to be carried out simultaneously.
The SCM Programme includes the following five steps:
1. Focus:
The focus stage starts with reviewing the different strategies of the company. Reviewing
the strategies will lead to clear identification of performance gaps, and this will help to
bridge the gaps by improving targets already set beforehand. Modifying the targets will
lead to a developed plan of attack, which will foster better internal communication
within the organization.
3. Fact Finding:
This stage includes tasks such as data gathering, conducting interviews, developing
benchmarks, and conducting customer surveys.
4. Analysis:
Analysis of activities plays a crucial role in ascertaining the costs of the company. It can
be done with various strategic cost management analytical tools, viz. cost driver analysis,
activity-based costing, selective business process reengineering, etc. An action plan for a
proposed change should address the what, who, when and how aspects of the activities.
5. Implementation:
Each individual organization needs to review their various supply needs and supply
chains and determine what enablers are of prime importance to their situations. We will
discuss an approach to that problem later in the paper. In this section we will discuss a
number of generally applicable enablers, some of which are likely to be present in many
supply situations. The enablers are grouped by the three phases present in most cost
management approaches: analysis and planning, implementation, and ongoing
management and control. Some apply to more than one phase and are so listed but
discussed only at the first listing.
• Top management support and sponsorship - Without this, forget the whole idea
of cost management. However, to get this support, top management must
understand the value of supply chain management to the bottom line. If
management seems reluctant to recognize this from internal efforts alone,
cooperative efforts with suppliers and/or customers may help to convince them.
• Information systems - To capture spending by commodity or service, supplier,
and geographical area. Information can be used to: identify opportunities for
synergy with other supply chain members in areas such as leveraging spend,
pooling knowledge, and acquiring/providing/sharing technology; identify areas
where transfer of best practices will reduce costs; optimize the location and use of
resources, such as inventories, in the supply chain; and help to identify total cost
drivers.
• Identity of total cost drivers - What are all of the elements that make up the total
cost in a given supply chain? Total cost drivers may vary by geographical areas
and may include items such as logistics, transportation, inventory, lead time, lack
of infrastructure, lack of qualified or trained personnel, lack of qualified suppliers,
and production impact of particular products or services. Additional drivers that
may be present in a global analysis could include tariffs and duties, currency
exchange rates, hostile political or geographical environments.
• Cost models - If models of major costs in the supply chain are not available, they
may need to be developed. Cost models may have to be adjusted by country or
region in global supply chain situations. Some techniques for modeling costs
include learning curve analysis, experience effect analysis, price productivity
analysis, implied set-up cost analysis, should-cost analysis, comparative process
analysis, and cost breakdown analysis. Some approaches to cost and price
modeling and analysis are presented in Chapter 19 of the current edition of The
Purchasing Handbook.
• A strategic cost management plan - There must be known cost management
objectives and a plan as to how you are going to achieve them. One approach to
prepare such a plan is to use a three-step approach that includes: classifying
purchases, matching cost analysis tools with the purchase classifications, and
focusing on strategic cost management techniques to achieve cost management
results.
• Effective cross-functional teams - Vital to the success of any cost management
effort because of the varied departments and functions that are affected and need
to be involved to implement cost management initiatives. All parties either
affected by the costs in question or involved in generating those costs need to be
involved in the applicable cost management teams.
• Known Business strategies - To develop purchasing cost management strategies,
overall business strategies must be known. The maximum effect of strategic cost
management in the supply chain can only be achieved when supply chain
strategies are aligned with overall business strategies. Obviously, to achieve
alignment, overall business strategies must be known to the supply chain team.
• Alignment of supply strategies with business strategies - To be most effective,
purchasing cost management strategies must be aligned with overall business
strategies. This enabler is key to successful strategic cost management and also to
obtaining full support of top management. Do not make the mistake of
concentrating cost management efforts in an area that management considers
unimportant, or in an area of the business that is not strategically important to the
company and/or may be up for divestiture.
• Total cost approach to procurement - Frequently the most significant cost
reductions in a supply chain do not result from lower prices. Price is important but
it is not the only cost. Costs other than price may have more reduction potential or
may be easier to reduce than price itself. Do not overlook any cost element.
Sometimes costs that are indirectly linked to the use of products or services may
contain large reduction potential as a result of changes in the purchased products
or services.
• Balanced approach to sourcing - It is inefficient to either purchase everything
through alliances or purchase everything on a transactional basis. All purchases
must be analyzed and categorized according to criteria such as total spend, long-
term vs. short-term need, strategic importance, and supply base capabilities. From
such an analysis individual purchase categories can be identified as candidates for
strategic alliances, small-value purchase techniques, and transactional approaches.
• Performance measurements - Without measurements you don't know where you
are, where you came from, or where you are going. Performance measurements
should be established for all aspects of a strategic cost management plan that are
critical to its success. Therefore the first step in establishing measurements is to
identify critical success factors and then develop indicators to measure how well
they are being achieved. The results of measurement can be used to report
success, to identify problem areas, and as the basis for taking corrective action.
• Redefinition of procurement business processes - Necessary to accommodate
balanced sourcing and efficient methods for handling transactional purchasing
activities. Adoption of a continuous process improvement approach to supply
chain operations will cause continual redefinition of business processes in the
supply chain.
• Maximize the leverage effect of purchasing - Use the information available
from data systems to determine global spend by product, supplier, and
geographical area to identify leverage opportunities. Leverage benefits can
include price, quality, service, availability, knowledge, and other factors.
Activity-based costing
Activity-based costing (ABC) assigns an organization's costs to its products and services
according to the activities they consume. In this way an organization can precisely
estimate the cost of its individual products and services for the purposes of identifying
and eliminating those which are unprofitable and lowering the prices of those which are
overpriced.
Historical development
Traditionally, cost accountants had arbitrarily added a broad percentage of expenses onto
the direct costs to allow for the indirect costs.
However as the percentages of indirect or overhead costs had risen, this technique
became increasingly inaccurate because the indirect costs were not caused equally by all
the products. For example, one product might take more time in one expensive machine
than another product, but since the amount of direct labor and materials might be the
same, the additional cost for the use of the machine would not be recognized when the
same broad 'on-cost' percentage is added to all products. Consequently, when multiple
products share common costs, there is a danger of one product subsidizing another.
The concepts of ABC were developed in the manufacturing sector of the United States
during the 1970s and 1980s. During this time, the Consortium for Advanced
Management-International, now known simply as CAM-I, provided a formative role for
studying and formalizing the principles that have become more formally known as
Activity-Based Costing.
Robin Cooper and Robert S. Kaplan, proponents of the Balanced Scorecard, brought
notice to these concepts in a number of articles published in Harvard Business Review
beginning in 1988. Cooper and Kaplan described ABC as an approach to solve the
problems of traditional cost management systems. These traditional costing systems are
often unable to determine accurately the actual costs of production and of the costs of
related services. Consequently, managers were making decisions based on inaccurate
data, especially where there are multiple products.
Instead of using broad arbitrary percentages to allocate costs, ABC seeks to identify
cause and effect relationships to objectively assign costs. Once costs of the activities have
been identified, the cost of each activity is attributed to each product to the extent that the
product uses the activity. In this way ABC often identifies areas of high overhead costs
per unit and so directs attention to finding ways to reduce the costs or to charge more for
costly products.
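As a rough sketch of this attribution logic in Python (the products, activities and figures
below are hypothetical, not drawn from the text):

    # Minimal activity-based costing sketch: overhead is assigned to products
    # in proportion to each product's consumption of activity drivers.
    # All figures are illustrative.
    activity_costs = {"machine_setups": 20_000, "machine_hours": 60_000}  # cost pools

    # Driver units consumed per product
    usage = {
        "product_A": {"machine_setups": 10, "machine_hours": 200},
        "product_B": {"machine_setups": 40, "machine_hours": 300},
    }

    # Cost per driver unit for each activity
    total_driver_units = {
        act: sum(u[act] for u in usage.values()) for act in activity_costs
    }
    rate = {act: activity_costs[act] / total_driver_units[act] for act in activity_costs}

    # Attribute activity costs to products in proportion to usage
    for product, drivers in usage.items():
        overhead = sum(rate[act] * qty for act, qty in drivers.items())
        print(product, round(overhead, 2))  # A: 28000.0, B: 52000.0

Product B carries the bulk of the overhead because it consumes most of the setups and
machine hours; a broad percentage on-cost would have hidden this.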
Activity-based costing was first clearly defined in 1987 by Robert S. Kaplan and W.
Bruns as a chapter in their book Accounting and Management: A Field Study Perspective.
Like manufacturing industries, financial institutions also have diverse products and
customers which can cause cross-product cross-customer subsidies. Since personnel
expenses represent the largest single component of non-interest expense in financial
institutions, these costs must also be attributed more accurately to products and
customers. Activity based costing, even though originally developed for manufacturing,
may even be a more useful tool for doing this.
Limitations
Even in activity-based costing, some overhead costs are difficult to assign to products and
customers, such as the chief executive's salary. These costs are termed 'business
sustaining' and are not assigned to products and customers because there is no meaningful
method. This lump of unallocated overhead costs must nevertheless be met by
contributions from each of the products, but it is not as large as the overhead costs before
ABC is employed.
Although some may argue that costs untraceable to activities should be "arbitrarily
allocated" to products, it is important to realize that the only purpose of ABC is to
provide information to management. Therefore, there is no reason to assign any cost in an
arbitrary manner.
Financial engineering
Leveraged buyout
Companies of all sizes and industries have been the target of leveraged buyout
transactions, although, because of the importance of debt and of the ability of the
acquired firm to make regular loan payments after the completion of a leveraged buyout,
some features of potential target firms make for more attractive leveraged buyout
candidates.
Rationale
1. The use of debt increases (leverages) the financial return to the private equity
sponsor. Under the Modigliani-Miller theorem the total return of an asset to its
owners, all else being equal and within strict restrictive assumptions, is unaffected
by the structure of its financing. As the debt in an LBO has a relatively fixed,
albeit high, cost of capital, any returns in excess of this cost of capital flow
through to the equity.
2. The tax shield of the acquisition debt, according to the Modigliani-Miller theorem
with taxes, increases the value of the firm. This enables the private equity sponsor
to pay a higher price than would otherwise be possible. Because income flowing
through to equity is taxed, while interest payments to debt are not, the capitalized
value of cash flowing to debt is greater than the same cash stream flowing to
equity.
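The second point can be stated compactly using the Modigliani-Miller proposition with
corporate taxes (the notation here is assumed for illustration: V_L is the levered firm
value, V_U the unlevered value, t_c the corporate tax rate, and D the debt):

    V_L = V_U + t_c * D

so the acquisition debt contributes a tax shield worth t_c * D, which is part of what
allows the sponsor to justify a higher price.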
Germany is currently introducing new tax laws that tax part of the cash flow before the
deduction of debt interest. The motivation for the change is to discourage leveraged
buyouts by reducing the effectiveness of the tax shield.
Historically, many LBOs in the 1980s and 1990s focused on reducing wasteful
expenditures by corporate managers whose interests were not aligned with shareholders.
After a major corporate restructuring, which may involve selling off portions of the
company and severe staff reductions, the entity would likely be producing a higher
income stream. Because this type of management arbitrage and easy restructuring has
largely been accomplished, LBOs today focus more on growth and complicated financial
engineering to achieve their returns. Most leveraged buyout firms look to achieve an
internal rate of return in excess of 20%.
Management buyouts
A special case of such acquisition is a management buyout (MBO), which occurs when a
company's managers buy or acquire a large part of the company. The goal of an MBO
may be to strengthen the managers' interest in the success of the company. In most cases,
the management will then make the company private. MBOs have assumed an important
role in corporate restructurings beside mergers and acquisitions. Key considerations in an
MBO are fairness to shareholders, price, the future business plan, and legal and tax
issues. One recent criticism of MBOs is that they create a conflict of interest—an
incentive is created for managers to mismanage (or not manage as efficiently) a company,
thereby depressing its stock price, and profiting handsomely by implementing effective
management after the successful MBO, as Paul Newman's character attempted in the
Coen brothers' film The Hudsucker Proxy.
It is fairly easy for a top executive to reduce the price of his/her company's stock - due to
information asymmetry. The executive can accelerate accounting of expected expenses,
delay accounting of expected revenue, engage in off balance sheet transactions to make
the company's profitability appear temporarily poorer, or simply promote and report
severely conservative (e.g. pessimistic) estimates of future earnings. Such seemingly
adverse earnings news will be likely to (at least temporarily) reduce the share price. (This is
again due to information asymmetries since it is more common for top executives to do
everything they can to window dress their company's earnings forecasts).
A reduced share price makes a company an easier takeover target. When the company
gets bought out (or taken private) - at a dramatically lower price - the takeover artist gains
a windfall from the former top executive's actions to surreptitiously reduce share price.
This can represent tens of billions of dollars (questionably) transferred from previous
shareholders to the takeover artist. The former top executive is then rewarded with a
golden parachute for presiding over the firesale, which can sometimes amount to
hundreds of millions of dollars for one or two years of work. (This is nevertheless an excellent
bargain for the takeover artist, who will tend to benefit from developing a reputation of
being very generous to parting top executives).
Similar issues occur when a publicly held asset or non-profit organization undergoes
privatization. Top executives often reap tremendous monetary benefits when a
government owned or non-profit entity is sold to private hands. Just as in the example
above, they can facilitate this process by making the entity appear to be in financial
crisis - this reduces the sale price (to the profit of the purchaser), and makes non-profits
and governments more likely to sell. Ironically, it can also contribute to a public
perception that private entities are more efficiently run, reinforcing the political will to
sell off public assets.
Again, due to asymmetric information, policy makers and the general public see a
government owned firm that was a financial 'disaster' - miraculously turned around by the
private sector (and typically resold) within a few years.
Nevertheless, the incentive to artificially reduce the share price of a firm is higher for
management buyouts, than for other forms of takeovers or LBOs.
Share repurchase
In some countries, including the United States and the United Kingdom, corporations can
buy back their own stock in a share repurchase, also known as a stock repurchase or
share buyback. There has been a meteoric rise in the use of share repurchases in the U.S.
in the past twenty years, from $5 billion in 1980 to $349 billion in 2005. A share
repurchase distributes cash to existing shareholders in exchange for a fraction of the
firm's outstanding equity. That is, cash is exchanged for a reduction in the number of
shares outstanding. The firm either retires the shares or keeps them as treasury stock,
available for re-issuance. Under U.S. corporate law there are five primary methods of
stock repurchase: open market, private negotiations, repurchase put rights, and two
variants of self-tender repurchase, a fixed price tender offer and a Dutch auction.
Companies making profits typically have two uses for those profits. Firstly, some part of
profits is usually repaid to shareholders in the form of dividends. The remainder, termed
retained earnings, is kept inside the company and used for investing in the future of
the company. If companies can reinvest most of their retained earnings profitably, then
they may do so. However, sometimes companies may find that some or all of their
retained earnings cannot be reinvested to produce acceptable returns.
Share repurchases are one possible use of leftover retained profits. When a company
repurchases its own shares, it reduces the number of shares held by the public. The
reduction of the float, or publicly traded shares, means that even if profits remain the
same, the earnings per share increase. So, repurchasing shares, particularly when a
company's share price is perceived as undervalued or depressed, may result in a strong
return on investment.
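A minimal sketch of this float-reduction arithmetic (all figures hypothetical):

    # EPS rises after a buyback even when total profit is unchanged,
    # because the same earnings are spread over fewer shares.
    earnings = 1_000_000          # annual profit, unchanged by the buyback
    shares_before = 500_000
    buyback = 50_000              # shares repurchased and retired

    eps_before = earnings / shares_before                # 2.00
    eps_after = earnings / (shares_before - buyback)     # ~2.22
    print(eps_before, round(eps_after, 2))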
One reason why companies may prefer to retain a substantial portion of earnings rather
than distribute them to shareholders, even if they are not able to reinvest them all
profitably, is that it is considered very embarrassing for a company to be forced to cut its
dividend. Normally, investors react more adversely to a dividend cut than to the
postponement or even abandonment of a share buyback program. So, rather than pay out
larger dividends during periods of excess profitability and then have to reduce them
during leaner times, companies prefer to pay out a conservative portion of their earnings,
perhaps half, with the aim of maintaining an acceptable level of dividend cover.
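For reference, dividend cover is the standard ratio behind this last phrase (a textbook
definition, not from the notes):

    Dividend cover = Earnings per share / Dividend per share

so paying out roughly half of earnings corresponds to a cover of about 2.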
Another reason why executives in particular may prefer share buybacks is that executive
compensation is often tied to executives' ability to meet earnings per share targets. In
companies where there are few opportunities for organic growth, share repurchases may
represent one of the few ways of improving earnings per share in order to meet targets.
Therefore, safeguards should be in place to ensure that increasing earnings per share in
this way will not affect executive or managerial rewards, even though this is not always
the case. Furthermore, an increase in earnings per share does not equate to an increase in
shareholder value. This investment ratio is influenced by accounting policy choices and
fails to take into account the cost of capital and future cash flows, which are the
determinants of shareholder value.
Share repurchases also allow companies to covertly distribute their earnings to investors
without subjecting them to double taxation. This only holds true in jurisdictions which
do not operate imputation tax credit systems. For example, suppose a company were to
pay $100,000 in dividends on one million shares, i.e. a 10¢ dividend per share; investors
may incur tax upon this disbursement. This means that instead of receiving 10¢ of
already taxed earnings per share, they receive 8.5¢ (0.10 × (1 − 0.15)) at a 15% tax rate,
with 1.5¢ going to the government. An investor with 10 shares will receive 85¢. As the
company has to pay out this money, the share price drops accordingly, from $10 to
$9.90, so the investor with 10 shares now has $99 in shares plus the 85¢ dividend, or
$99.85.
Compare this with spending $100,000 buying back shares. This removes 10,000 shares
from the market, leaving 990,000 shares worth $10 each ($10,000,000 − $100,000 =
$9,900,000; $9,900,000 / 990,000 = $10), meaning our investor with 10 shares still has
$100, and the government receives no tax revenue. Ultimately there is no net change in
investor wealth, assuming a fully equity-financed business.
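The two scenarios can be checked with a few lines of arithmetic, using the same figures
as in the text:

    # Dividend case vs. buyback case for an investor holding 10 shares:
    # 1,000,000 shares at $10, $100,000 paid out, 15% dividend tax.
    shares_out, price, payout, tax = 1_000_000, 10.0, 100_000, 0.15

    # Dividend: 10c per share, taxed at 15%; price drops by the cash paid out.
    div_per_share = payout / shares_out                        # 0.10
    net_div = div_per_share * (1 - tax)                        # 0.085
    price_ex_div = (shares_out * price - payout) / shares_out  # 9.90
    wealth_dividend = 10 * price_ex_div + 10 * net_div         # 99.00 + 0.85

    # Buyback: $100,000 retires 10,000 shares at $10; price is unchanged.
    shares_left = shares_out - payout / price                   # 990,000
    price_after = (shares_out * price - payout) / shares_left   # 10.00
    wealth_buyback = 10 * price_after                           # 100.00

    print(wealth_dividend, wealth_buyback)  # 99.85 vs 100.0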
The most common share repurchase method in the United States is the open-market stock
repurchase, representing almost 95% of all repurchases. A firm may or may not announce
that it will repurchase some shares in the open market from time to time as market
conditions dictate and maintains the option of deciding whether, when, and how much to
repurchase. Open market repurchases can span months or even years. There are, however,
daily buy-back limits which restrict the amount of stock that can be bought over a
particular time interval.
Prior to 1981, all tender offer repurchases were executed using a fixed price tender offer.
This offer specifies in advance a single purchase price, the number of shares sought, and
the duration of the offer, with public disclosure required. The offer may be made
conditional upon receiving tenders of a minimum number of shares, and it may permit
withdrawal of tendered shares prior to the offer's expiration date. Shareholders decide
whether or not to participate, and if so, the number of shares to tender to the firm at the
specified price. Frequently, officers and directors are precluded from participating in the
tender offer. If the number of shares tendered exceeds the number sought, then the
company purchases the shares on a pro rata basis from all who tendered at the purchase
price. If the number of shares tendered is below the
number sought, the company may choose to extend the offer’s expiration date.
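A small sketch of the pro-rata allocation in an oversubscribed offer (the holders and
quantities are hypothetical):

    # Pro-rata allocation in an oversubscribed fixed price tender offer:
    # each tendering holder sells the same fraction of what they tendered.
    shares_sought = 100_000
    tendered = {"holder_1": 80_000, "holder_2": 60_000, "holder_3": 10_000}

    total = sum(tendered.values())              # 150,000 > shares sought
    factor = min(1.0, shares_sought / total)    # two-thirds of each tender accepted

    for holder, qty in tendered.items():
        print(holder, int(qty * factor))        # 53333, 40000, 6666 (truncated)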
Types of buy-backs
Selective buy-backs
In broad terms, a selective buy-back is one in which identical offers are not made to
every shareholder, for example, if offers are made to only some of the shareholders in the
company. The scheme must first be approved by all shareholders, or by a special
resolution (requiring a 75% majority) of the members in which no vote is cast by selling
shareholders or their associates. Selling shareholders may not vote in favour of a special
resolution to approve a selective buy-back. The notice to shareholders convening the
meeting to vote on a selective buy-back must include a statement setting out all material
information that is relevant to the proposal, although it is not necessary for the company
to provide information already disclosed to the shareholders, if that would be
unreasonable.
A company may also buy back shares held by or for employees or salaried directors of
the company or a related company. This type of buy-back, referred to as an employee
share scheme buy-back, requires an ordinary resolution.
A listed company may also buy back its shares in on-market trading on the stock
exchange, following the passing of an ordinary resolution if over the 10/12 limit (more
than 10% of the shares within a 12-month period). The stock exchange's rules apply to
on-market buy-backs.
A listed company may also buy unmarketable parcels of shares from shareholders (called
a minimum holding buy-back). This does not require a resolution but the purchased
shares must still be cancelled.
Module III: Value-Based Management
When to use it
Use Value Analysis to analyze and understand the detail of specific situations.
It is particularly suited to physical and mechanical problems, but can also be used in other
areas.
Quick vs. Long
Logical vs. Psychological
Individual vs. Group
How to use it
Identify the item to be analyzed and the customers for whom it is produced.
List the basic functions (the things for which the customer is paying). Note that there are
usually very few basic functions.
Identify the secondary functions by asking ‘How is this achieved?’ or ‘What other
functions support the basic functions?’.
Find the components of the item being analyzed that are used to provide the key
functions. Again, the question ‘How’ can come in very useful here.
Measure the cost of each component as accurately as possible, including all material and
production costs.
Seek improvements
Eliminate or reduce the cost of components that add little value, especially high-cost
components.
Enhance the value added by components that contribute significantly to functions that are
particularly important to customers.
Example
In analyzing a pen, the following table is used to connect components with the functions
to which they contribute and hence identify areas of focus.
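The original table is not reproduced in these notes. A hypothetical reconstruction of its
layout (components, functions, and purely illustrative costs) would be:

Component      Function(s) contributed           Cost
Ink cartridge  deliver ink                       $0.45
Ball tip       deliver ink, regulate flow        $0.30
Cap            prevent drying, protect point     $0.15
Body           hold components, enable grip      $0.30

High-cost components that contribute little to the basic functions become the areas of
focus.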
How it works
Value Analysis (and its design partner, Value Engineering) is used to increase the value
of products or services to all concerned by considering the function of individual items
and the benefit of this function and balancing this against the costs incurred in delivering
it. The task then becomes to increase the value or decrease the cost.
THE CONCEPT OF VALUE
The value of a product will be interpreted in different ways by different customers. Its
common characteristic is a high level of performance, capability, emotional appeal, style,
etc. relative to its cost. This can also be expressed as maximizing the function of a
product relative to its cost:

    Value = Function / Cost
Value is not a matter of minimizing cost. In some cases the value of a product can be
increased by increasing its function (performance or capability) and cost as long as the
added function increases more than its added cost. The concept of functional worth can
be important. Functional worth is the lowest cost to provide a given function. However,
there are less tangible "selling" functions involved in a product to make it of value to a
customer.
Lawrence Miles conceived of Value Analysis (VA) in 1945, based on the application of
function analysis to the component parts of a product. Component cost reduction was an
effective and popular way to improve "value" when direct labor and material costs
determined the success of a product. The value analysis technique supported cost
reduction activities by relating the cost of components to their function contributions.
Value analysis defines a "basic function" as anything that makes the product work or sell.
A function that is defined as "basic" cannot change. Secondary functions, also called
"supporting functions", described the manner in which the basic function(s) were
implemented. Secondary functions could be modified or eliminated to reduce product
cost.
As VA progressed to larger and more complex products and systems, emphasis shifted to
"upstream" product development activities where VA can be more effectively applied to
a product before it reaches the production phase. However, as products have become
more complex and sophisticated, the technique needed to be adapted to the "systems"
approach that is involved in many products today. As a result, value analysis evolved into
the "Function Analysis System Technique" (FAST) which is discussed later.
Identifying the function in the broadest possible terms provides the greatest potential for
divergent thinking because it gives the greatest freedom for creatively developing
alternatives. A function should be identified as to what is to be accomplished by a
solution and not how it is to be accomplished. How the function is identified determines
the scope, or range of solutions that can be considered.
Functions designated as "basic" represent the operative function of the item or product
and must be maintained and protected. Determining the basic function of single
components can be relatively simple. By definition, then, functions designated as "basic"
will not change, but the way those functions are implemented is open to innovative
speculation.
As important as the basic function is to the success of any product, the cost to perform
that function is inversely proportional to its importance. This is not an absolute rule, but
rather an observation of the consumer products market. Few people purchase consumer
products based on performance or the lowest cost of basic functions alone. When
purchasing a product it is assumed that the basic function is operative. The customer's
attention is then directed to those visible secondary support functions, or product features,
which determine the worth of the product. From a product design point of view, products
that are perceived to have high value first address the basic function's performance and
stress the achievement of all of the performance attributes. Once the basic functions are
satisfied, the designers then address the secondary functions necessary to attract
customers. Secondary functions are incorporated in the product as features to support and
enhance the basic function and help sell the product. The elimination of secondary
functions that are not very important to the customer will reduce product cost and
increase value without detracting from the worth of the product.
The cost contribution of the basic function does not, by itself, establish the value of the
product. Few products are sold on the basis of their basic function alone. If this were so,
the market for "no name" brands would be more popular than it is today. Although the
cost contribution of the basic function is relatively small, its loss will cause the loss of the
market value of the product.
One objective of value analysis or function analysis, to improve value by reducing the
cost-function relationship of a product, is achieved by eliminating or combining as many
secondary functions as possible.
VALUE ANALYSIS PROCESS
The first step in the value analysis process is to define the problem and its scope. Once
this is done, the functions of the product and its items are derived. These functions are
classified into "basic" and "secondary" functions. A Cost Function Matrix or Value
Analysis Matrix is prepared to identify the cost of providing each function by associating
the function with a mechanism or component part of a product. Product functions with a
high cost-function ratio are identified as opportunities for further investigation and
improvement. Improvement opportunities are then brainstormed, analyzed, and selected.
The objective of the Function Cost Matrix approach is to draw the attention of the
analysts away from the cost of components and focus their attention on the cost
contribution of the functions. The Function Cost Matrix displays the components of the
product, and the cost of those components, along the left vertical side of the graph. The
top horizontal legend contains the functions performed by those components. Each
component is then examined to determine how many functions that component performs,
and the cost contributions of those functions.
Detailed cost estimates become more important following function analysis, when
evaluating value improvement proposals. The total cost and percent contribution of the
functions of the item under study will guide the team, or analyst, in selecting which
functions to select for value improvement analysis.
A variation of the Function-Cost Matrix is the Value Analysis Matrix. This matrix was
derived from the Quality Function Deployment (QFD) methodology. It is more powerful
in two ways. First, it associates functions back to customer needs or requirements. In
doing this, it carries forward an importance rating to associate with these functions based
on the original customer needs or requirements. Functions are then related to
mechanisms, the same as with the Function-Cost Matrix. Mechanisms are related to
functions as either strongly, moderately or weakly supporting the given function. This
relationship is noted with the standard QFD relationship symbols. The associated
weighting factor is multiplied by the customer or function importance, and each column's
values are added.
These totals are normalized to calculate each mechanism's relative weight in satisfying
the designated functions. This is where the second difference with the Function-Cost
Matrix arises. This mechanism weight can then be used as the basis to allocate the overall
item or product cost. The mechanism target costs can be compared with the actual or
estimated costs to see where costs are out of line with the value of that mechanism as
derived from customer requirements and function analysis.
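A minimal sketch of the weighting arithmetic just described, using the common QFD
9/3/1 scale (the functions, mechanisms, importance ratings and costs are hypothetical):

    # Value Analysis Matrix sketch: mechanisms are scored against functions
    # (9 = strong, 3 = moderate, 1 = weak support), weighted by function
    # importance, normalized, and used to allocate target cost.
    importance = {"deliver_ink": 5, "prevent_drying": 3}   # from customer needs

    # Relationship of each mechanism to each function (0 = no relationship)
    matrix = {
        "ink_cartridge": {"deliver_ink": 9, "prevent_drying": 0},
        "ball_tip":      {"deliver_ink": 9, "prevent_drying": 1},
        "cap":           {"deliver_ink": 0, "prevent_drying": 9},
    }

    raw = {
        mech: sum(importance[f] * score for f, score in scores.items())
        for mech, scores in matrix.items()
    }
    total = sum(raw.values())
    weights = {mech: value / total for mech, value in raw.items()}  # normalized

    product_cost = 1.20  # total cost of the pen, hypothetical
    for mech, w in weights.items():
        # target cost allocated by each mechanism's relative weight
        print(mech, round(w, 3), round(w * product_cost, 3))

Comparing these allocated target costs against actual component costs shows where cost
is out of line with the value the mechanism delivers.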
Function Analysis System Technique is an evolution of the value analysis process created
by Charles Bytheway. FAST permits people with different technical backgrounds to
effectively communicate and resolve issues that require multi-disciplined considerations.
FAST builds upon VA by linking the simply expressed, verb-noun functions to describe
complex systems.
FAST is not an end product or result, but rather a beginning. It describes the item or
system under study and causes the team to think through the functions that the item or
system performs, forming the basis for a wide variety of subsequent approaches and
analysis techniques. FAST contributes significantly to perhaps the most important phase
of value engineering: function analysis. FAST is a creative stimulus to explore innovative
avenues for performing functions.
The FAST diagram or model is an excellent communications vehicle. Using the verb-
noun rules in function analysis creates a common language, crossing all disciplines and
technologies. It allows multi-disciplined team members to contribute equally and
communicate with one another while addressing the problem objectively without bias or
preconceived conclusions. With FAST, there is no right or wrong model or result. The
problem should be structured until the product development team members are satisfied
that the real problem is identified. After agreeing on the problem statement, the single
most important output of the multi-disciplined team engaged in developing a FAST
model is consensus. Since the team has been charged with the responsibility of resolving
the assigned problem, it is their interpretation of the FAST model that reflects the
problem statement that's important. The team members must discuss and reconfigure the
FAST model until consensus is reached and all participating team members are satisfied
that their concerns are expressed in the model. Once consensus has been achieved, the
FAST model is complete and the team can move on to the next creative phase.
FAST differs from value analysis in the use of intuitive logic to determine and test
function dependencies and the graphical display of the system in a function dependency
diagram or model. Another major difference is in analyzing a system as a complete unit,
rather than analyzing the components of a system. When studying systems it becomes
apparent that functions do not operate in a random or independent fashion. A system
exists because functions form dependency links with other functions, just as components
form a dependency link with other components to make the system work. The importance
of the FAST approach is that it graphically displays function dependencies and creates a
process to study function links while exploring options to develop improved systems.
There are normally two types of FAST diagrams, the technical FAST diagram and the
customer FAST diagram. A technical FAST diagram is used to understand the technical
aspects of a specific portion of a total product. A customer FAST diagram focuses on the
aspects of a product that the customer cares about and does not delve into the
technicalities, mechanics or physics of the product. A customer FAST diagram is usually
applied to a total product.
The FAST model has a horizontal directional orientation described as the HOW-WHY
dimension. This dimension is described in this manner because HOW and WHY
questions are asked to structure the logic of the system's functions. Starting with a
function, we ask HOW that function is performed to develop a more specific approach.
This line of questioning and thinking is read from left to right. To abstract the problem to
a higher level, we ask WHY is that function performed. This line of logic is read from
right to left.
There is essential logic associated with the FAST HOW-WHY directional orientation.
First, when undertaking any task it is best to start with the goals of the task, then explore
methods to achieve the goals. When addressing any function on the FAST model with the
question WHY, the function to its left expresses the goal of that function. The question
HOW, is answered by the function on the right, and is a method to perform that function
being addressed. A systems diagram starts at the beginning of the system and ends with
its goal. A FAST model, reading from left to right, starts with the goal, and ends at the
beginning of the "system" that will achieve that goal.
Second, changing a function on the HOW-WHY path affects all of the functions to the
right of that function. This is a domino effect that only goes one way, from left to right.
Starting with any place on the FAST model, if a function is changed the goals are still
valid (functions to the left), but the method to accomplish that function, and all other
functions on the right, are affected.
Finally, building the model in the HOW direction, or function justification, will focus the
team's attention on each function element of the model. By contrast, reversing the FAST
model and building it in its system orientation will cause the team to leap over individual
functions and focus on the system, leaving function "gaps" in the system. A good rule to
remember in constructing a FAST Model is to build in the HOW direction and test the
logic in the WHY direction.
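As an illustration of this questioning, the sketch below encodes a short HOW-WHY logic path as a Python list. The function names are assumptions, based on the overhead-projector example often used in FAST training, not drawn from this text:

# Minimal sketch of a FAST HOW-WHY logic path (hypothetical functions).
# Reading left-to-right answers HOW; reading right-to-left answers WHY.

path = ["convey information", "project image", "generate light", "convert energy"]

def how(fn):
    """HOW is this function performed? -> the function to its right."""
    i = path.index(fn)
    return path[i + 1] if i + 1 < len(path) else None

def why(fn):
    """WHY is this function performed? -> the function to its left (its goal)."""
    i = path.index(fn)
    return path[i - 1] if i > 0 else None

print(how("project image"))   # generate light
print(why("generate light"))  # project image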
The vertical orientation of the FAST model is described as the WHEN direction. This is
not part of the intuitive logic process, but it supplements intuitive thinking. WHEN is not
a time orientation, but indicates cause and effect.
Scope lines represent the boundaries of the study and are shown as two vertical lines on
the FAST model. The scope lines bound the "scope of the study", or that aspect of the
problem with which the study team is concerned. The left scope line determines the basic
function(s) of the study. The basic functions will always be the first function(s) to the
immediate right of the left scope line. The right scope line identifies the beginning of the
study and separates the input function(s) from the scope of the study.
The objective or goal of the study is called the "Highest Order Function", located to the
left of the basic function(s) and outside of the left scope line. Any function to the left of
another function is a "higher order function". Functions to the right and outside of the
right scope line represent the input side that "turn on" or initiate the subject under study
and are known as lowest order functions. Any function to the right of another function is
a "lower order" function and represents a method selected to carry out the function being
addressed.
Those function(s) to the immediate right of the left scope line represent the purpose or
mission of the product or process under study and are called Basic Function(s). Once
determined, the basic function will not change. If the basic function fails, the product or
process will lose its market value.
All functions to the right of the basic function(s) portray the conceptual approach selected
to satisfy the basic function. The concept describes the method being considered, or
elected, to achieve the basic function(s). The concept can represent either the current
conditions (as is) or proposed approach (to be). As a general rule, it is best to create a "to
be" rather than an "as is" FAST Model, even if the assignment is to improve an existing
product. This approach will give the product development team members an opportunity
to compare the "ideal" to the "current" and help resolve how to implement the
differences. Working from an "as is" model will restrict the team's attention to
incremental improvement opportunities. An "as is" model is useful for tracing the
symptoms of a problem to its root cause, and exploring ways to resolve the problem,
because of the dependent relationship of functions that form the FAST model.
Any function on the HOW-WHY logic path is a logic path function. If the functions
along the WHY direction lead into the basic function(s), then they are located on the
major logic path. If the WHY path does not lead directly to the basic function, it is a
minor logic path. Changing a function on the major logic path will alter or destroy the
way the basic function is performed. Changing a function on a minor logic path will
disturb an independent (supporting) function that enhances the basic function. Supporting
functions are usually secondary and exist to achieve the performance levels specified in
the objectives or specifications of the basic functions or because a particular approach
was chosen to implement the basic function(s).
The next step in the process is to dimension the FAST model or to associate information
to its functions. FAST dimensions include, but are not limited to: responsibility, budgets,
allocated target costs, estimated costs, actual costs, subsystem groupings, placing
inspection and test points, manufacturing processes, positioning design reviews, and
others. There are many ways to dimension a FAST model. The two popular ways are
called Clustering Functions and the Sensitivity Matrix.
Clustering functions involves drawing boundaries with dotted lines around groups of
functions to configure sub-systems. Clustering functions is a good way to illustrate cost
reduction targets and assign design-to-cost targets to new design concepts. For cost
reduction, a team would develop an "as is" product FAST model, cluster the functions
into subsystems, allocate product cost by clustered functions, and assign target costs.
During the process of creating the model, customer sensitivity functions can be identified
as well as opportunities for significant cost improvements in design and production.
Following the completion of the model, the subsystems can be divided among product
development teams assigned to achieve the target cost reductions. The teams can then
select cost sensitive sub-systems and expand them by moving that segment of the model
to a lower level of abstraction. This exposes the detail components of that assembly and
their function/cost contributions.
Performing value analysis or producing the FAST model and analyzing functions with
the value analysis matrix are only the first steps in the process. The real work begins with
brainstorming, developing and analyzing potential improvements in the product. These
subsequent steps are supported by:
• The QFD Concept Selection Matrix is a powerful tool to evaluate various concept
and design alternatives based on a set of weighted criteria that ultimately tie back
to customer needs.
• Benchmarking competitors and other similar products helps to see new ways
functions can be performed and breaks down some of the not-invented-here
paradigms.
• Product cost and life cycle cost models support the estimating of cost for the
Function-Cost and Value Analysis Matrices and aid in the evaluation of various
product concepts.
• Technology evaluation leads us to new ways that basic functions can be
performed in a better or less costly way. Concept development should involve
people with knowledge of new technology development and an open mind to
identify how this technology might relate to product functions that need to be
performed. Methods such as the theory of inventive problem solving or TRIZ are
useful in this regard.
• Design for Manufacturability/Assembly principles provide guidance on how to
better design components and assemblies so that they are more manufacturable
and, as a result, lower in cost.
Value Analysis or Function Analysis provides the methods to identify the problem and to
begin to define the functions that need to be performed. As we proceed in developing a
FAST model, implicit in this process is developing a concept of operation for the product
which is represented by all of the lower order functions in a FAST diagram.
All of these steps may be iterative as a preferred concept evolves and gets more fully
developed. In addition, there should be a thorough evaluation of whether all functions are
needed or if there is a different way of accomplishing a function as the concept is
developed to a lower level of abstraction. When a Function Cost or Value Analysis
Matrix is prepared, functions that are out of balance with their worth are identified,
further challenging the team to explore different approaches.
Target costing
Target costing is a pricing method used by firms. It is defined as "a cost management
tool for reducing the overall cost of a product over its entire life-cycle with the help of
production, engineering, research and design". A target cost is the maximum amount of
cost that can be incurred on a product while still allowing the firm to earn the required
profit margin from that product at a particular selling price.
In the traditional cost-plus pricing method materials, labor and overhead costs are
measured and a desired profit is added to determine the selling price.
Many decisions made at the design stage, such as the choice of materials, the number of
components, and the production process to be used, have a large impact on the cost of a
product.
In a traditional costing system it is presumed that a product has already been developed,
has been costed, and is ready to be marketed as soon as a price is set. In many cases, the
sequence of events is just the reverse. That is, the company already knows what price
should be charged, and the problem is to develop a product that can be marketed
profitably at the desired price. Even in this situation, where the normal sequence of
events is reversed, cost is still a crucial factor. The company can use an approach called
target costing.
Target costing is the process of determining the maximum allowable cost for a new
product and then developing a prototype that can be profitably made for that maximum
target cost figure. A number of companies, primarily in Japan, use target costing,
including Compaq, Culp, Cummins Engine, Daihatsu Motors, DaimlerChrysler, Ford,
Isuzu Motors, ITT, NEC, and Toyota.
The target cost for a product is calculated by starting with the product's anticipated
selling price and then deducting the desired profit. The following equation expresses this
concept:

Target cost = Anticipated selling price - Desired profit
The product development team is then given the responsibility of designing the product
so that it can be made for no more than the target cost.
Following set of activities further explains the concept of target costing technique:
Handy Appliance Company feels that there is a market niche for a hand mixer with
certain new features. Surveying the features and prices of hand mixers already in the
market, the marketing department believes that a price of $30 would be about right for
the new mixer. At that price, marketing estimates that 40,000 of new mixers could be
sold annually. To design, develop, and produce these new mixers, an investment of
$2,000,000 would be required. The company desires a 15% return on investment (ROI).
Given these data, the target cost to manufacture, sell, distribute, and service one mixer is
$22.50 as calculated below:
Projected sales (40,000 mixers x $30 per mixer): $1,200,000
Less desired profit (15% x $2,000,000): $300,000
Target cost for 40,000 mixers: $900,000
Target cost per mixer ($900,000 / 40,000 mixers): $22.50
This $22.50 target cost would be broken down into target costs for the various functions:
manufacturing, marketing, distribution, after-sales service, and so on. Each functional
area would be responsible for keeping its actual costs within target.
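The target-cost arithmetic above is easy to verify; here is a minimal Python sketch using the figures from the example:

# Sketch of the Handy Appliance target-cost calculation from the text.
price_per_unit = 30.00
units_per_year = 40_000
investment = 2_000_000
required_roi = 0.15

projected_sales = price_per_unit * units_per_year          # $1,200,000
desired_profit = required_roi * investment                 # $300,000
target_cost_total = projected_sales - desired_profit       # $900,000
target_cost_per_unit = target_cost_total / units_per_year  # $22.50
print(target_cost_per_unit)  # 22.5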
Target costing also has limitations:
1. Effective implementation and use requires the development of detailed cost data.
2. Its implementation requires a willingness to cooperate across functions.
3. It requires many meetings for coordination.
4. It may reduce the quality of products due to the use of cheap components which
may be of inferior quality.
Balanced scorecard
Characteristics
The core characteristic of the Balanced Scorecard and its derivatives is the presentation
of a mixture of financial and non-financial measures each compared to a 'target' value
within a single concise report. The report is not meant to be a replacement for traditional
financial or operational reports but a succinct summary that captures the information
most relevant to those reading it. It is the methods by which this 'most relevant'
information is determined (i.e. the design processes used to select the content) that most
differentiates the various versions of the tool in circulation.
The first versions of Balanced Scorecard asserted that relevance should derive from the
corporate strategy, and proposed design methods that focused on choosing measures and
targets associated with the main activities required to implement the strategy. As the
initial audience for this was the readership of the Harvard Business Review, the proposal
was translated into a form that made sense to a typical reader of that journal - one
relevant to a mid-sized US business. Accordingly, initial designs were encouraged to
measure three categories of non-financial measure in addition to financial outputs - those
of "Customer," "Internal Business Processes" and "Learning and Growth." Clearly these
categories were not so relevant to non-profits or units within complex organizations
(which might have high degrees of internal specialization), and much of the early
literature on Balanced Scorecard focused on suggestions of alternative 'perspectives' that
might have more relevance to these groups.
Modern Balanced Scorecard thinking has evolved considerably since the initial ideas
proposed in the late 1980s and early 1990s, and the modern performance management
tools including Balanced Scorecard are significantly improved - being more flexible (to
suit a wider range of organisational types) and more effective (as design methods have
evolved to make them easier to design and use).
History
The first balanced scorecard was created by Art Schneiderman (an independent
consultant on the management of processes) in 1987 at Analog Devices, a mid-sized
semi-conductor company. Art Schneiderman participated in an unrelated research study
in 1990 led by Dr. Robert S. Kaplan in conjunction with US management consultancy
Nolan-Norton, and during this study described his work on Balanced Scorecard.
Subsequently, Kaplan and David P. Norton included anonymous details of this use of
balanced scorecard in their 1992 article on Balanced Scorecard. Kaplan & Norton's
article was not the only paper on the topic published in early 1992, but the 1992 Kaplan &
Norton paper was a popular success, and was quickly followed by a second in 1993. In
1996, they published the book The Balanced Scorecard. These articles and the first book
spread knowledge of the concept of Balanced Scorecard widely, but perhaps wrongly
have led to Kaplan & Norton being seen as the creators of the Balanced Scorecard
concept.
While the "balanced scorecard" concept and terminology was coined by Art
Schneiderman, the roots of performance management as an activity run deep in
management literature and practice. Management historians such as Alfred Chandler
suggest the origins of performance management can be seen in the emergence of the
complex organisation - most notably during the 19th century in the USA. More recent
influences may include the pioneering work of General Electric on performance
measurement reporting in the 1950s and the work of French process engineers (who
created the tableau de bord – literally, a "dashboard" of performance measures) in the
early part of the 20th century. The tool also draws strongly on the ideas of the 'resource
based view of the firm' proposed by Edith Penrose. However it should be noted that none
of these influences is explicitly linked to original descriptions of Balanced Scorecard by
Schneiderman, Maisel, or Kaplan & Norton.
Kaplan & Norton's first book, The Balanced Scorecard, remains the most popular; it
reflects the earliest incarnations of Balanced Scorecard - effectively restating the concept
as described in the second Harvard Business Review article. Their
second book, The Strategy Focused Organization, echoed work by others (particularly in
Scandinavia) on the value of visually documenting the links between measures by
proposing the "Strategic Linkage Model" or strategy map. Since then Balanced Scorecard
books have become more common - in early 2010 Amazon was listing several hundred
titles in English which had Balanced Scorecard in the title.
Design
The original thinking behind Balanced Scorecard was for it to be focused on information
relating to the implementation of a strategy, and perhaps unsurprisingly over time there
has been a blurring of the boundaries between conventional strategic planning and control
activities and those required to design a Balanced Scorecard. This is illustrated well by
the four steps required to design a Balanced Scorecard included in Kaplan & Norton's
writing on the subject in the late 1990s, where they assert four steps as being part of the
Balanced Scorecard design process:
1. Translating the vision into operational goals;
2. Communicating the vision and linking it to individual performance;
3. Business planning and index setting;
4. Feedback and learning, and adjusting the strategy accordingly.
These steps go well beyond the simple task of identifying a small number of financial and
non-financial measures, but illustrate the requirement for whatever design process is used
to fit within broader thinking about how the resulting Balanced Scorecard will integrate
with the wider business management process. This is also illustrated by books and
articles referring to balanced scorecards confusing the design process elements and the
balanced scorecard itself. In particular, it is common for people to refer to a “strategic
linkage model” or “strategy map” as being a balanced scorecard.
Although it helps focus managers' attention on strategic issues and the management of the
implementation of strategy, it is important to remember that the balanced scorecard itself
has no role in the formation of strategy. In fact, balanced scorecards can comfortably co-
exist with strategic planning systems and other tools.
The Balanced Scorecard has always attracted criticism from a variety of sources. Most
has come from the academic community, who dislike the empirical nature of the
framework: Kaplan & Norton notoriously failed to include any citation of prior art in
their initial papers on the topic. Some of this criticism focuses on technical flaws in the
methods and design of the original Balanced Scorecard proposed by Kaplan & Norton,
and has over time driven the evolution of the device through its various Generations.
Other academics have simply focused on the lack of citation support. But a general
weakness of this type of criticism is that it typically uses the 1st Generation Balanced
Scorecard as its object: many of the flaws identified are addressed in other works
published since the original Kaplan & Norton works in the early 1990s.
Another criticism, usually from pundits and consultants, is that the balanced scorecard
does not provide a bottom line score or a unified view with clear recommendations: it is
simply a list of metrics. These critics usually include in their criticism suggestions about
how the 'unanswered' question postulated could be answered. Typically however, the
unanswered question relates to things outside the scope of Balanced Scorecard itself
(such as developing strategies).
There are few empirical studies linking the use of Balanced Scorecards to better decision
making or improved financial performance of companies, but some work has been done
in these areas. However, broad surveys of usage have difficulties in this respect, due
to the wide variations in definition of 'what a Balanced Scorecard is' noted above (making
it hard to work out in a survey if you are comparing like with like). Single organization
case studies suffer from the 'lack of a control' issue common to any study of
organizational change - you don't know what the organization would have achieved if the
change had not been made, so it is difficult to attribute changes observed over time to a
single intervention (such as introducing a Balanced Scorecard). However, such studies as
have been done have typically found Balanced Scorecard to be useful.
The 1st Generation design method proposed by Kaplan & Norton was based on the use of
three non-financial topic areas as prompts to aid the identification of non-financial
measures in addition to one looking at financial measures. The four "perspectives" proposed were:
• Financial;
• Customer;
• Internal Processes;
• Innovation and Learning.
The "financial perspective" encourages the identification of a few relevant high-level
financial measures. In particular, designers were encouraged to choose measures that
helped inform the answer to the question "How do we look to shareholders?"
The "customer perspective" encourages the identification of measures that answer the
question "How do customers see us?"
The "internal business perspective" encourages the identification of measures that answer
the question "What must we excel at?"
The "innovation and learning perspective" encourages the identification of measures that
answer the question "Can we continue to improve and create value?".
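Purely as an illustration, a 1st Generation scorecard built on these four perspectives can be represented as a simple data structure; every measure, target and actual value below is hypothetical:

# Minimal sketch of a four-perspective scorecard as a data structure:
# each perspective holds (measure, target, actual) triples. All figures
# are hypothetical.

scorecard = {
    "Financial":               [("return on capital (%)", 12.0, 10.5)],
    "Customer":                [("on-time delivery (%)", 95.0, 91.0)],
    "Internal Processes":      [("defect rate (ppm)", 500.0, 730.0)],
    "Innovation and Learning": [("employees trained (%)", 80.0, 85.0)],
}

for perspective, measures in scorecard.items():
    for name, target, actual in measures:
        print(f"{perspective}: {name} -> target {target}, actual {actual}")

This captures the core characteristic noted earlier: a mixture of financial and non-financial measures, each compared to a target value, in one concise report.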
As these 'prompt questions' highlight, Kaplan & Norton were thinking about a
medium-sized commercial organisation in the USA when choosing these topic areas. The
prompts are less helpful to other kinds of organisations, and much of the literature on
Balanced Scorecard since has focused on alternative headings and questions suited to
such groups.
Just-in-time
Prompt notice that stock is being depleted, so that personnel can order new stock, is
critical to the inventory reduction at the center of JIT. This saves warehouse space and
costs. However,
the complete mechanism for making this work is often misunderstood.
For instance, its effective application cannot be independent of other key components of a
lean manufacturing system or it can "...end up with the opposite of the desired result.". In
recent years manufacturers have continued to try to hone forecasting methods (such as
applying a trailing 13-week average as a better predictor for JIT planning); however, some
research demonstrates that basing JIT on the presumption of stability is inherently flawed.
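The trailing 13-week average mentioned above is simple to compute; the sketch below uses hypothetical weekly demand figures:

# Minimal sketch of a trailing 13-week average as a demand predictor for
# JIT planning. The weekly demand series is hypothetical.

weekly_demand = [120, 95, 130, 110, 105, 140, 125, 115, 100, 135, 120, 110, 130, 145]

def trailing_average(series, window=13):
    """Forecast the next period as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

print(round(trailing_average(weekly_demand), 1))  # forecast for next week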
Philosophy
The philosophy of JIT is simple: inventory is waste. JIT inventory systems expose hidden
causes of inventory keeping, and are therefore not a simple solution for a company to
adopt. The company must follow an array of new methods to manage the consequences
of the change. The ideas in this way of working come from many different disciplines
including statistics, industrial engineering, production management, and behavioral
science. The JIT inventory philosophy defines how inventory is viewed and how it relates
to management.
Inventory is seen as incurring costs, or waste, instead of adding and storing value,
contrary to traditional accounting. This does not mean to say JIT is implemented without
awareness that removing inventory exposes pre-existing manufacturing issues. This way
of working encourages businesses to eliminate inventory that does not compensate for
manufacturing process issues, and to constantly improve those processes to require less
inventory. Secondly, allowing any stock habituates management to stock keeping.
Management may be tempted to keep stock to hide production problems. These problems
include backups at work centers, machine reliability, process variability, lack of
flexibility of employees and equipment, and inadequate capacity.
In short, the just-in-time inventory system focus is having “the right material, at the right
time, at the right place, and in the exact amount”, without the safety net of inventory. The
JIT system has broad implications for implementers.
JIT reduces inventory in a firm. However, a firm may simply be outsourcing its input
inventory to suppliers, even if those suppliers don't use JIT (Naj 1993). Newman (1993)
investigated this effect and found that suppliers in Japan charged JIT customers, on
average, a 5% price premium.
Environmental concerns
During the birth of JIT, multiple daily deliveries were often made by bicycle. Increased
scale has required a move to vans and lorries (trucks). Cusumano (1994) highlighted the
potential and actual problems this causes with regard to gridlock and burning of fossil
fuels. This violates three JIT waste guidelines.
Based on a diagram modeled after the one used by Hewlett-Packard’s Boise plant to
accomplish its JIT program.
3) S Stabilize Schedule
- S Level Schedule
- W establish freeze windows
- UC Underutilize Capacity
4) K Kanban Pull System
- D Demand pull
- B Backflush
- L Reduce lot sizes
5) V Work with vendors
- L Reduce lead time
- D Frequent deliveries
- U Project usage requirements
- Q Quality Expectations
6) I Further reduce inventory in other areas
- S Stores
- T Transit
- C Implement carousels to reduce motion waste
- C Implement Conveyor belts to reduce motion waste
Benefits
• Reduced setup time. Cutting setup time allows the company to reduce or eliminate
inventory for "changeover" time. The tool used here is SMED (single-minute
exchange of dies).
• The flow of goods from warehouse to shelves improves. Small or individual piece
lot sizes reduce lot delay inventories, which simplifies inventory flow and its
management.
• Employees with multiple skills are used more efficiently. Having employees
trained to work on different parts of the process allows companies to move
workers where they are needed.
• Production scheduling and work hour consistency synchronized with demand. If
there is no demand for a product at the time, it is not made. This saves the
company money, either by not having to pay workers overtime or by having them
focus on other work or participate in training.
• Increased emphasis on supplier relationships. A company without inventory does
not want a supply system problem that creates a part shortage. This makes
supplier relationships extremely important.
• Supplies come in at regular intervals throughout the production day. Supply is
synchronized with production demand and the optimal amount of inventory is on
hand at any time. When parts move directly from the truck to the point of
assembly, the need for storage facilities is reduced.
Firm Value = ( I + NPV of Assets in Place ) + Sum (j = 1 to N) of NPV of Future Project j
where there are expected to be N projects yielding surplus value (or excess returns) in the
future and I is the capital invested in assets in place (which might or might not be equal to
the book value of these assets).
• Define ROC = EBIT (1-t) / Initial Investment: The earnings before interest and
taxes are assumed to measure true earnings on the project and should not be
contaminated by capital charges (such as leases) or expenditures whose benefits
accrue to future projects (such as R & D).
In other words,
Firm Value = Capital Invested in Assets in Place + PV of EVA from Assets in Place +
Sum of PV of EVA from new projects
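A minimal numerical sketch of this identity follows; all inputs are hypothetical, and the EVA from assets in place is treated as a level perpetuity purely for simplicity:

# Minimal sketch of the EVA-based firm value identity above. All inputs are
# hypothetical. EVA each year = (ROC - cost of capital) x capital invested,
# discounted at the cost of capital.

capital_invested = 1_000.0   # I: capital invested in assets in place
roc = 0.12                   # return on capital, EBIT(1-t) / investment
wacc = 0.10                  # cost of capital

# EVA from assets in place, assumed level in perpetuity for simplicity.
eva = (roc - wacc) * capital_invested              # 20.0 per year
pv_eva_assets = eva / wacc                         # perpetuity value: 200.0

# EVA from expected future projects (here: one project in years 3 to 5).
future_project_evas = {3: 15.0, 4: 15.0, 5: 15.0}  # year -> EVA
pv_eva_future = sum(v / (1 + wacc) ** t for t, v in future_project_evas.items())

firm_value = capital_invested + pv_eva_assets + pv_eva_future
print(round(firm_value, 1))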
Advantages of EVA
1. EVA is closely related to NPV. It is closest in spirit to corporate finance theory that
argues that the value of the firm will increase if you take positive NPV projects.
2. It avoids the problems associated with approaches that focus on percentage spreads -
between ROE and Cost of Equity and ROC and Cost of Capital. These approaches may
lead firms with high ROE and ROC to turn away good projects to avoid lowering their
percentage spreads.
3. It makes top managers responsible for a measure that they have more control over - the
return on capital and the cost of capital are affected by their decisions - rather than one
that they feel they cannot control as well - the market price per share.
4. It is influenced by all of the decisions that managers have to make within a firm - the
investment decisions and dividend decisions affect the return on capital (the dividend
decisions affect it indirectly through the cash balance) and the financing decision affects
the cost of capital.
Module IV: Advanced Cost and Management Techniques
Some people don't differentiate between cost control and cost reduction but I tend to
consider cost control to be a reactive measure to stem cost growth to stay within
budget (e.g. overspending in an account) rather than a proactive effort to actually reduce
costs from baseline.
In reality, some people do not differentiate between cost reduction and cost control,
which is why clear examples can be hard to find. That said, the best approach is to look
at the definitions of the two words - reduction and control.
Those words have different meanings.
I can't guarantee that every business professor will accept my definitions but, as a cost
reduction consultant with over 30 years in business, my opinion is based on hands-on
experience. Please read "Instant Profits: Making Your Business Pay" for the practical
aspects of cost reduction and cost control in real business.
Cost reduction:
A planned, proactive effort to bring costs below the current baseline without impairing
the quality or suitability of the product or service.
Cost control:
An organized and intentional effort to limit the growth of costs within specific accounts.
The management practice of putting lock limits on accounts is, in my opinion, an
example of cost control. Mandating the reduction of consumption of a supply or utility is
an example of cost control.
Kaizen Technique
Kaizen (Japanese for "improvement" or "change for the better") refers to a philosophy or
practices that focus upon continuous improvement of processes in manufacturing,
engineering, supporting business processes, and management. It has been applied in
healthcare, government, banking, and many other industries. When used in the business
sense and applied to the workplace, kaizen refers to activities that continually improve all
functions, and involves all employees from the CEO to the assembly line workers. It also
applies to processes, such as purchasing and logistics, that cross organizational
boundaries into the supply chain. By improving standardized activities and processes,
kaizen aims to eliminate waste (see lean manufacturing). Kaizen was first implemented in
several Japanese businesses after the Second World War, influenced in part by American
business and quality management teachers who visited the country. It has since spread
throughout the world.
Introduction
Kaizen is a daily activity, the purpose of which goes beyond simple productivity
improvement. It is also a process that, when done correctly, humanizes the workplace,
eliminates overly hard work ("muri"), and teaches people how to perform experiments on
their work using the scientific method and how to learn to spot and eliminate waste in
business processes. In all, the process suggests a humanized approach to workers and to
increasing productivity: "The idea is to nurture the company's human resources as much
as it is to praise and encourage participation in kaizen activities." Successful
implementation requires "the participation of workers in the improvement."
People at all levels of an organization can participate in kaizen, from the CEO down, as
well as external stakeholders when applicable. The format for kaizen can be individual,
suggestion system, small group, or large group. At Toyota, it is usually a local
improvement within a workstation or local area and involves a small group in improving
their own work environment and productivity. This group is often guided through the
kaizen process by a line supervisor; sometimes this is the line supervisor's key role.
Kaizen on a broad, cross-departmental scale generates total quality management and
frees human effort by improving productivity through machines and computing power.
While kaizen (at Toyota) usually delivers small improvements, the culture of continual
aligned small improvements and standardization yields large results in the form of
compound productivity improvement. This philosophy differs from the "command and
control" improvement programs of the mid-twentieth century. Kaizen methodology
includes making changes and monitoring results, then adjusting. Large-scale pre-planning
and extensive project scheduling are replaced by smaller experiments, which can be
rapidly adapted as new improvements are suggested.
In modern usage, a focused kaizen that is designed to address a particular issue over the
course of a week is referred to as a "kaizen blitz" or "kaizen event". These are limited in
scope, and issues that arise from them are typically used in later blitzes.
History
After World War II, to help restore Japan, American occupation forces brought in
American experts to help with the rebuilding of Japanese industry. The Civil
Communications Section (CCS) developed a Management Training Program that taught
statistical control methods as part of the overall material. This course was developed and
taught by Homer Sarasohn and Charles Protzman in 1949 and 1950. Sarasohn
recommended W. Edwards Deming for further training in statistical methods. The Economic
and Scientific Section (ESS) group was also tasked with improving Japanese
management skills and Edgar McVoy was instrumental in bringing Lowell Mellen to Japan
to properly install the TWI programs in 1951. Prior to the arrival of Mellen in 1951, the
ESS group had a training film done to introduce the three TWI "J" programs (Job
Instruction, Job Methods and Job Relations)- the film was titled "Improvement in 4
Steps" (Kaizen eno Yon Dankai). This is the original introduction of "Kaizen" to Japan.
For pioneering, introducing, and implementing Kaizen in Japan, the Emperor of Japan
awarded the Second Order Medal of the Sacred Treasure to Dr. Deming in 1960.
Consequently, the Union of Japanese Scientists and Engineers (JUSE) instituted the
annual Deming Prizes for achievements in quality and dependability of products in Japan.
Reference: US National Archives - SCAP collection
Implementation
The Toyota Production System is known for kaizen, where all line personnel are expected
to stop their moving production line in case of any abnormality and, along with their
supervisor, suggest an improvement to resolve the abnormality which may initiate a
kaizen.
The cycle of kaizen activity can be defined as:
• Standardize an operation
• Measure the standardized operation (find cycle time and amount of in-process
inventory)
• Gauge measurements against requirements
• Innovate to meet requirements and increase productivity
• Standardize the new, improved operations
• Continue cycle ad infinitum
The five main elements of kaizen are:
• Teamwork
• Personal discipline
• Improved morale
• Quality circles
• Suggestions for improvement
Benchmarking
Benchmarking is the process of comparing one's business processes and performance
metrics to industry bests and/or best practices from other industries. Dimensions typically
measured are quality, time, and cost. Improvements from learning mean doing things
better, faster, and cheaper.
Benchmarking involves management identifying the best firms in their industry, or any
other industry where similar processes exist, and comparing the results and processes of
those studied (the "targets") to one's own results and processes to learn how well the
targets perform and, more importantly, how they do it.
The term benchmarking was first used by cobblers to measure people's feet for shoes.
They would place someone's foot on a "bench" and mark it out to make the pattern for the
shoes. Benchmarking is most often used to measure performance using a specific indicator
(cost per unit of measure, productivity per unit of measure, cycle time of x per unit of
measure or defects per unit of measure) resulting in a metric of performance that is then
compared to others.
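For illustration, an indicator-based comparison of this kind can be computed as follows (all figures hypothetical):

# Minimal sketch of indicator-based benchmarking: compare a unit-cost
# metric against a best-in-class comparator. All figures are hypothetical.

our_cost, our_units = 480_000.0, 12_000     # e.g. annual cost, units shipped
best_cost, best_units = 900_000.0, 30_000   # best-in-class comparator

our_cpu = our_cost / our_units              # 40.0 per unit
best_cpu = best_cost / best_units           # 30.0 per unit
gap_pct = (our_cpu - best_cpu) / best_cpu * 100
print(f"cost per unit: ours {our_cpu:.2f}, best {best_cpu:.2f}, gap {gap_pct:.0f}%")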
Also referred to as "best practice benchmarking" or "process benchmarking", it is a
process used in management and particularly strategic management, in which
organizations evaluate various aspects of their processes in relation to best practice
companies' processes, usually within a peer group defined for the purposes of
comparison. This then allows organizations to develop plans on how to make
improvements or adapt specific best practices, usually with the aim of increasing some
aspect of performance. Benchmarking may be a one-off event, but is often treated as a
continuous process in which organizations continually seek to improve their practices.
Procedure
There is no single benchmarking process that has been universally adopted. The wide
appeal and acceptance of benchmarking has led to various benchmarking methodologies
emerging. The seminal book on benchmarking is Boxwell's Benchmarking for
Competitive Advantage published by McGraw-Hill in 1994. It has withstood the test of
time and is still a relevant read. The first book on benchmarking, written and published
by Kaiser Associates, is a practical guide and offers a 7-step approach. Robert Camp
(who wrote one of the earliest books on benchmarking in 1989) developed a 12-stage
approach to benchmarking.
The twelve stages are:
1. Select subject
2. Define the process
3. Identify potential partners
4. Identify data sources
5. Collect data and select partners
6. Determine the gap
7. Establish process differences
8. Target future performance
9. Communicate
10. Adjust goal
11. Implement
12. Review/recalibrate.
Cost of benchmarking
• Visit Costs - This includes hotel rooms, travel costs, meals, a token gift, and lost
labor time.
• Time Costs - Members of the benchmarking team will be investing time in
researching problems, finding exceptional companies to study, visits, and
implementation. This will take them away from their regular tasks for part of each
day so additional staff might be required.
• Benchmarking Database Costs - Organizations that institutionalize
benchmarking into their daily procedures find it is useful to create and maintain a
database of best practices and the companies associated with each best practice.
The cost of benchmarking can be substantially reduced by using the many internet
resources that have sprung up over the last few years. These aim to capture
benchmarks and best practices from organizations, business sectors and countries to make
the benchmarking process much quicker and cheaper.
The technique, initially used to compare existing corporate strategies with a view to
achieving the best possible performance in new situations (see above), has recently been
extended to the comparison of technical products. This process is usually referred to as
"Technical Benchmarking" or "Product Benchmarking". Its use is particularly well
developed within the automotive industry ("Automotive Benchmarking"), where it is
vital to design products that match precise user expectations, at minimum possible cost,
by applying the best technologies available worldwide. Many data are obtained by fully
disassembling existing cars and their systems. Such analyses were initially carried out in-
house by car makers and their suppliers. However, as they are expensive, they are
increasingly outsourced to companies specialized in this area. Indeed, outsourcing has
enabled a drastic decrease in costs for each company (by cost sharing) and the
development of very efficient tools (standards, software).
Types of benchmarking
Business process reengineering
Overview
Business process reengineering is one approach for redesigning the way work is done to
better support the organization's mission and reduce costs. Reengineering starts with a high-
level assessment of the organization's mission, strategic goals, and customer needs. Basic
questions are asked, such as "Does our mission need to be redefined? Are our strategic goals
aligned with our mission? Who are our customers?" An organization may find that it is
operating on questionable assumptions, particularly in terms of the wants and needs of its
customers. Only after the organization rethinks what it should be doing does it go on to
decide how best to do it.
Within the framework of this basic assessment of mission and goals, reengineering
focuses on the organization's business processes—the steps and procedures that govern
how resources are used to create products and services that meet the needs of particular
customers or markets. As a structured ordering of work steps across time and place, a
business process can be decomposed into specific activities, measured, modeled, and
improved. It can also be completely redesigned or eliminated altogether. Reengineering
identifies, analyzes, and redesigns an organization's core business processes with the aim
of achieving dramatic improvements in critical performance measures, such as cost,
quality, service, and speed.
Reengineering recognizes that an organization's business processes are usually fragmented
into subprocesses and tasks that are carried out by several specialized functional areas within
the organization. Often, no one is responsible for the overall performance of the entire
process. Reengineering maintains that optimizing the performance of subprocesses can result
in some benefits, but cannot yield dramatic improvements if the process itself is
fundamentally inefficient and outmoded. For that reason, reengineering focuses on
redesigning the process as a whole in order to achieve the greatest possible benefits to the
organization and its customers. This drive for realizing dramatic improvements by
fundamentally rethinking how the organization's work should be done distinguishes
reengineering from process improvement efforts that focus on functional or incremental
improvement.
Definition
Different definitions can be found. Hammer and Champy (1993), for example, define
reengineering as "the fundamental rethinking and radical redesign of business processes
to achieve dramatic improvements in critical, contemporary measures of performance,
such as cost, quality, service, and speed".
Additionally, Davenport points out the major difference between BPR and other
approaches to organization development (OD), especially the continuous improvement or
TQM movement, when he states: "Today firms must seek not fractional, but
multiplicative levels of improvement – 10x rather than 10%." Finally, Johansson et al.
describe BPR relative to other process-oriented views, such as Total Quality
Management (TQM) and Just-in-Time (JIT).
Information technology (IT) has historically played an important role in the reengineering
concept. It is considered by some as a major enabler for new forms of working and
collaborating within an organization and across organizational borders.
Early BPR literature identified several so-called disruptive technologies that were
supposed to challenge traditional wisdom about how work should be performed.
Although the labels and steps differ slightly, the early methodologies that were rooted in
IT-centric BPR solutions share many of the same basic principles and elements. The
following outline is one such model, based on the PRLC (Process Reengineering Life
Cycle) approach developed by Guha:
1. Envision new processes
2. Initiate change
3. Process diagnosis
4. Process redesign
5. Reconstruction
6. Process monitoring
Critique
Reengineering has earned a bad reputation because such projects have often resulted in
massive layoffs. This reputation is not altogether unwarranted, since companies have
often downsized under the banner of reengineering. Further, reengineering has not always
lived up to its expectations, for a variety of reasons discussed in the critical literature.
Management control systems
A management control system (MCS) is a system which gathers and uses information
to evaluate the performance of different organizational resources like human, physical,
financial and also the organization as a whole considering the organizational strategies.
Finally, MCS influences the behavior of organizational resources to implement
organizational strategies. MCS might be formal or informal. The term ‘management
control’ was given its current connotations by Robert N. Anthony (Otley, 1994).
Chenhall (2003) mentioned that the terms management accounting (MA), management
accounting systems (MAS), management control systems (MCS), and organizational
controls (OC) are sometimes used interchangeably. In this case, MA refers to a collection
of practices such as budgeting or product costing. But MAS refers to the systematic use
of MA to achieve some goal and MCS is a broader term that encompasses MAS and also
includes other controls such as personal or clan controls. Finally OC is sometimes used to
refer to controls built into activities and processes, such as statistical quality control and
just-in-time management.
Management control as an interdisciplinary subject