Management Advisory Services
This subject covers the candidates’ knowledge of the concepts, techniques and methodology
applicable to management accounting, financial management and management consultancy.
Candidates should know and understand the role of information in accounting, finance and
economics in management consultancy and in management processes of planning, controlling
and decision-making.
The candidates must have a working knowledge sufficient to handle the various management accounting and consultancy engagements.
The candidates must also be able to communicate effectively on matters pertaining to the management accounting and consultancy work that will be handled.
The knowledge of the candidates in the competencies cited above is that of an entry-level accountant who can address the fundamental requirements of the various parties that the candidates will be interacting with professionally in the future.
I. MANAGEMENT ACCOUNTING
One simple definition of management accounting is the provision of financial and non-
financial decision-making information to managers.
According to the Institute of Management Accountants (IMA): "Management accounting is
a profession that involves partnering in management decision making, devising planning and
performance management systems, and providing expertise in financial reporting and control
to assist management in the formulation and implementation of an organization's strategy".
OBJECTIVES OF MANAGEMENT ACCOUNTING
Analyses and interprets data: The accounting data is probed meaningfully for effective planning and decision-making. For this purpose the data is presented in comparative form; ratios are calculated and likely trends are projected.
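The ratio-and-trend idea above can be sketched in a few lines. This is an illustration only: the current ratio and the naive trend projection are choices made for this sketch, and all figures are invented.

```python
# Illustration only: hypothetical figures showing a ratio presented in
# comparative form across years, with a naive trend projection.
years = [2021, 2022, 2023]
current_assets = [150_000, 180_000, 216_000]
current_liabilities = [100_000, 110_000, 120_000]

# Current ratio for each year, side by side for comparison.
ratios = [round(a / l, 2) for a, l in zip(current_assets, current_liabilities)]

# Naive projection: extend the average year-on-year change one more year.
avg_change = (ratios[-1] - ratios[0]) / (len(ratios) - 1)
projected_next = round(ratios[-1] + avg_change, 2)

print(ratios)          # [1.5, 1.64, 1.8]
print(projected_next)  # 1.95
```

The projection is a likely trend based on past facts, not a guarantee, which is exactly the sense in which the text calls planning "intelligent forecasting".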
Helps in planning: Planning is intelligent forecasting, and this forecasting is based on facts. The facts are provided by past accounts, on which the forecast of future transactions is made. Management accounting helps management in its planning function through the process of budgetary control.
Presents data in intelligible form: Accounting is a technical subject and may not be easily understood until the user has a good knowledge of it. Management may not be able to use accounting information in its raw form due to a lack of knowledge of accounting techniques. The management accountant therefore presents the information in an intelligible, non-technical manner. This helps management interpret the financial data, evaluate the alternative courses of action available, and take the decisions that produce the most desired financial results.
Increases efficiency and motivates: By setting goals, planning the best and most economical course of action, and then measuring performance, the management accountant works to increase the effectiveness of the organisation and thereby motivate its members.
Provides reports for control: The required information is made available to management by means of reports, which are an integral part of management accounting. Reports are a means of communicating facts that should be brought to the notice of the various levels of management so that they may be guided in taking suitable action for purposes of control.
Management accounting, in simple terms, is accounting for management: it deals with the internal users of accounting information. Considered in a broader sense, it is of great use in different managerial functions. It can help a business attain the expected results on the basis of timely information and reports relating to internal operations. Since management accounting is primarily concerned with management needs, it plays an important role in the management process, assisting managers to lead the business efficiently by executing the basic functions of management: planning, organizing, coordinating, motivating, controlling and communicating.
Planning: The planning process may be short term or long term. Management is deeply concerned with planning, as effective planning leads a business to the desired results. By means of planning, management can identify where the business stands and what it will take to grow and develop along a predetermined path. Management accounting makes a valuable contribution to the planning process by providing significant and relevant data, such as budgetary controls, cash and profit-and-loss forecasts, pricing, and the evaluation of capital budgeting proposals. Management accounting also helps consolidate the various plans into one plan, facilitating effective decision making.
Controlling: Just as budgets are useful in facilitating the planning process, budgetary control is a good means of controlling as well. Other techniques, such as standard costing and departmental operating statements, are of great help in devising control measures. They also help management take remedial measures when the performance of the business entity deviates from plan.
Communicating: It is through this process that results are communicated to owners, superiors and subordinates. It includes transmitting data that highlights necessary information, such as the progress of the business and its financial position, to the required users. This enables managers to identify the issues that warrant proper analysis, so that the intended results may be attained.
Generally, in a very large company, each division has a top accountant called the
controller, and much of the management accounting that is done in these divisions comes
under the leadership of the controller. The controller usually reports to the vice president of finance for the division who, in turn, reports to the division’s president and/or overall chief financial officer (CFO). All of these individuals are responsible for the
flow of good accounting information that supports the planning, control, and evaluation
work that takes place within the organization.
Cost accounting plays a key role in tracking and reporting relevant product and service costs. Overall, the controller works to bring together all this information as an integral part of the planning, controlling, evaluating, and decision-making activities that take place throughout the organization.
FYI
As you have read this introductory chapter to management accounting, you have likely noticed
that the goals of management accounting information provided to the management and
executive teams inside the organization are quite different from the financial accounting
information provided to groups outside the organization, such as investors, creditors, and
regulators. You may even ask how information and performance measures regarding
quality and time can be provided by a typical general ledger system that is limited to
debits and credits of dollar amounts. This is a good question! For most of the twentieth century, management accountants were able to produce management accounting information successfully using the general ledger system of financial accounting. This
marriage of management accounting and financial accounting information systems
worked as long as the goal of management accounting was strictly to track cost
information. Now, however, the emergence of JIT, coupled with increased competition in a
worldwide market, has forced most organizations to compete on issues of quality and
timeliness, as well as cost. The problem is that it is very difficult to use a debit/credit
system to track organizational performance regarding quality and time. Thankfully,
computerized information systems, specifically database systems, have progressed to a
point where it is economically feasible for organizations to track just about any kind of
information. Now the real challenge for current and future management accountants is to
organize the immense amount of data that can be provided to support decision making
without creating information overload in managers and executives. In this process,
management accountants should understand how to use the most current technology.
Typically, developing knowledge and skills in computer technologies will require additional
courses of study for the future business professional. The goal of the remainder of this
book is to provide you with a framework for developing cost, quality, and time-based
information that supports the management process. This framework must then be used
with top-notch technology in order to provide information that truly adds competitive value
to organizations!
Business professionals involved in management accounting have come a long way since the
early days of management accounting in the 1800s. Today, management accounting
professionals play a key role in many organizations. The nature of their work continues to
expand as new industries develop and computer technology grows in importance in the
gathering and use of information by decision makers. For example, you’ve spent the bulk of this
chapter being introduced to management accounting in the context of DuPont, a manufacturing
business. However, businesses focused on service rather than manufacturing (e.g., law firms,
banks, hospitals, transportation, hotels) are far and away the dominant industries in the U.S.
economy. Further, merchandising companies (retailers and wholesalers) combine to be as
strong an economic force as the manufacturing industry. And as you’re certainly aware, the
explosion of the Internet has established a new aspect in our economy—e-commerce. At this
point, e-commerce is generally a growing delivery platform for many service and merchandising
companies, rather than a separate industry. You need to be aware of these trends as you work
through this textbook. We will spend a lot of time applying concepts and tools of management
accounting to nonmanufacturing settings. As we close this chapter, we want to leave you with
two lingering, but important, questions. First, can a service or merchandising company
effectively perform C-V-P analysis, product costing, and segment analysis? Or are these
techniques useful only for manufacturing companies? Second, does the arrival of e-commerce
in service, merchandising, or manufacturing organizations change your response to the first
question? That is, as companies shift more and more of their operations (such as sales of
software, financial services, and groceries) into the “virtual environment” of the Internet, does e-
commerce affect the use of any management accounting techniques that you are studying in
this textbook? Think about these questions. We plan to spend a lot of time in the next several
chapters exploring some possible answers with you.
FYI
By 2004, e-commerce activities across the world will be enormous, amounting to $6.8
trillion, or 8.6% of the global sales of all goods and services. Interestingly, while the
United States accounted for 75% of worldwide e-commerce sales in 2000, that share is
expected to drop to a little less than 50% by 2004.
TO SUMMARIZE
Management accounting plays a key role in organizations today. The top accountant in
most organizations is the controller. All accounting functions report to this individual,
including the cost accountants, the financial and tax accountants, the internal auditors,
and systems support personnel. Though much management accounting originates within
these positions, all decision makers in the organization must understand how to create
and use good management accounting information. Management accounting is also
being significantly affected by dramatic improvements in computer technology. Today’s
technology allows management to track performance information that goes beyond the
cost-based information of historic general ledger systems. Good management accounting
involves a responsibility to manage a wide variety of critical information. Hence, those
involved need to anticipate and be prepared to deal with various ethical dilemmas. And
finally, though we’ve used DuPont as the example company in this chapter, you need to
understand that management accounting is not just for manufacturing companies.
Service and merchandising industries represent a much larger portion of the U.S.
economy than does the manufacturing industry. Further, the advent of the Internet and e-
commerce is bringing dramatic changes to many companies and industries. This textbook
will explore management accounting in all types of business. As you work through the
remainder of this textbook, you should consider how each new concept you learn could be
applied in multiple types of business settings.
The scope or field of management accounting is very wide and broad based, and it includes a variety of aspects of business operations. The main aim of management accounting is to help management in its functions of planning, directing and controlling; a number of areas of specialization fall within the umbrella of management accounting. The scope of management accounting can be studied as follows:
Financial Accounting
Financial accounting forms the basis for analysis and interpretation for furnishing
meaningful data to the management. The control aspect is based on financial data and
performance evaluation, on recorded facts and figures. So, management accounting is
closely related to financial accounting in many respects.
Budgeting and Forecasting
Budgeting means expressing the plans, policies and goals of the firm for a definite future period. Forecasting, on the other hand, is a prediction of what will happen as a result of a given set of circumstances. Forecasting is a judgement, whereas budgeting is an organizational objective. Both are useful to management accounting in planning.
Inventory Control
Inventory needs to be controlled from the time it is acquired until its final disposal, as it involves large sums. For controlling inventory, management should determine the different levels of stock. Inventory control techniques are helpful in taking managerial decisions.
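The text does not name particular techniques, but two widely used stock-level computations are the economic order quantity (EOQ) and the reorder level. The sketch below is illustrative, with assumed figures:

```python
import math

# Hypothetical figures; the EOQ model and the reorder-level formula are
# standard inventory control techniques, not prescribed by the text.
annual_demand = 12_000   # units required per year (assumed)
ordering_cost = 50.0     # cost of placing one order (assumed)
holding_cost = 3.0       # cost of holding one unit for a year (assumed)

# EOQ = sqrt(2DS / H): the order size minimising ordering + holding cost.
eoq = math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

# Reorder level = average daily usage x lead time in days.
daily_usage = annual_demand / 360
lead_time_days = 6
reorder_level = daily_usage * lead_time_days

print(round(eoq))            # 632 units per order
print(round(reorder_level))  # 200 units on hand triggers a new order
```

Determining such levels in advance is one concrete way the "different levels of stock" mentioned above support managerial decisions.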
Statistical Methods
Statistical tools not only make the information more impressive, comprehensive and
intelligible but also are highly useful for planning and forecasting.
Interpretation of Data
Reporting To Management
The interpreted information must be communicated to those who are interested in it.
The reports may cover the Profit and Loss Account, Cash Flow and Funds Flow statements, etc.
Tax Accounting
Management accounting studies all tax matters to assist management in investment decisions vis-a-vis tax planning as a resource to enjoy tax relief.
Methods and Procedures
This includes the maintenance of proper data processing and other office management services. It may have to deal with filing, copying, duplicating and communicating, with the management information system, and with reporting on the utility of different office machines.
Hence, its scope is quite vast and it includes within its fold almost all aspects of business
operations. However, the following areas may rightly be pointed out as lying within the
scope of management accounting.
Financial Accounting:
Cost Accounting:
Planning, decision-making and control are the basic managerial functions. The cost
accounting system provides necessary tools such as standard costing, budgetary control,
inventory control, marginal costing, and differential costing etc., for carrying out such
functions efficiently. Hence, cost accounting is considered a necessary adjunct of
management accounting.
Revaluation Accounting:
Statistical Methods:
Statistical tools such as graphs, charts, diagrams and index numbers make the information more impressive and comprehensive. Other tools, such as time series, regression analysis and sampling techniques, are highly useful for planning and forecasting.
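As one small illustration (not from the text) of how regression supports forecasting, the sketch below fits a least-squares trend line to hypothetical sales data:

```python
# Hypothetical sales (in thousands) for five periods.
xs = [1, 2, 3, 4, 5]
ys = [100, 110, 125, 135, 150]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least squares: slope b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2),
# intercept a = mean_y - b * mean_x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# Forecast for the next period (period 6).
forecast_period_6 = a + b * 6
print(forecast_period_6)  # 161.5
```

A planner would read the slope (12.5 per period here) as the underlying trend and the forecast as a planning input, to be judged alongside other information.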
Operations Research:
Modern management is faced with highly complicated business problems in its decision-making processes. OR techniques such as linear programming, queuing theory and decision theory enable management to find scientific solutions to these business problems.
Taxation:
This includes computation of income tax as per tax laws and regulations, filing of returns
and making tax payments. In recent times, it also includes tax planning.
Organization and Methods (O&M):
O&M deals with reducing costs and improving the efficiency of accounting, as well as of office systems, procedures and operations.
Office Services:
This includes maintenance of proper data processing and other office management
services, communication and best use of latest mechanical devices.
Law:
Most management decisions have to be taken in a legal environment in which the requirements of a number of statutory provisions or regulations must be fulfilled. Several such Acts have an influence on management decisions.
Internal Audit:
This includes the development of a suitable system of internal audit for internal control.
Internal Reporting:
This includes the preparation of quarterly, half-yearly and other interim reports, income statements, cash flow and funds flow statements, scrap reports, etc.
The scope of management accounting is very wide and broad based. It includes all information provided to management for financial analysis and interpretation of the business operations. The following fields of activity are included in the scope of this subject:
Cost accounting: Cost accounting provides various techniques for determining the cost of manufacturing products or the cost of providing services. It uses financial data to find the cost of various jobs, products or processes. Business executives depend heavily on accounting information in general, and on cost information in particular, because any activity of an organization can be described by its cost. They make use of various cost data in managing the organization effectively. Cost accounting is considered the backbone of management accounting, as it provides the analytical tools (such as budgetary control, standard costing, marginal costing, inventory costing and operating costing) that management uses to discharge its responsibilities effectively.
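A minimal sketch of the kind of job-cost build-up such techniques produce; all figures, including the predetermined overhead rate, are assumptions for illustration.

```python
# Hypothetical job: direct materials + direct labor + applied overhead.
direct_materials = 4_000.0
direct_labor_hours = 120
direct_labor_rate = 25.0      # per hour (assumed)
overhead_rate_per_dlh = 10.0  # predetermined rate per direct labor hour (assumed)

direct_labor = direct_labor_hours * direct_labor_rate
applied_overhead = direct_labor_hours * overhead_rate_per_dlh

job_cost = direct_materials + direct_labor + applied_overhead
units_produced = 500
unit_cost = job_cost / units_produced

print(job_cost)   # 8200.0
print(unit_cost)  # 16.4
```

The per-unit figure is the sort of cost information executives rely on when pricing the job or comparing it with alternatives.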
Quantitative techniques: Modern managers believe that the financial and economic data available for managerial decisions can be more useful when analyzed with more sophisticated techniques. Techniques such as time series, regression analysis and sampling are commonly used for this purpose. Further, managers also use techniques such as linear programming, game theory and queuing theory in their decision-making process.
The cost accounting system, which accumulates data about the costs of producing goods
and services, is part of the organization’s overall accounting system. It accumulates cost
information for both management accounting and financial accounting.
MANAGEMENT ACCOUNTING vs. FINANCIAL ACCOUNTING

USERS OF REPORTS
Management accounting: Internal users (officers and managers).
Financial accounting: External users (stockholders, creditors, concerned government agencies).

PURPOSE
Management accounting: To provide internal users with information that may be used by managers in carrying out the functions of planning, controlling, decision-making, and performance evaluation.
Financial accounting: To provide external users with information about the organization's financial position and results of operations.

TYPES OF REPORTS
Management accounting: Different types of reports, such as budgets, financial projections, cost analyses, etc., depending on the specific needs of management.
Financial accounting: Primarily financial statements and the accompanying notes to such statements.

BASIS OF REPORTS
Management accounting: Reports are based on a combination of historical, estimated, and projected data.
Financial accounting: Reports are based almost exclusively on historical data.

STANDARDS OF PRESENTATION
Management accounting: In preparing reports, the management of a company can set rules to produce the information most relevant to its specific needs.
Financial accounting: Reports are prepared in accordance with generally accepted accounting principles and other pronouncements of authoritative accounting bodies.

REPORTING ENTITY
Management accounting: The focus of reports is on the company's value chain, such as a business segment, product line, supplier, or customer.
Financial accounting: Financial reports relate to the business as a whole.

PERIOD COVERED
Management accounting: Reports may cover any time period (year, quarter, month, week, day, etc.) and may be required as frequently as needed.
Financial accounting: Reports usually cover a year, quarter, or month.
Functions of Management
Planning
It is the basic function of management. It deals with chalking out a future course of action and deciding in advance the most appropriate course of action for the achievement of pre-determined goals. According to Koontz, “Planning is deciding in advance what to do, when to do it and how to do it. It bridges the gap from where we are to where we want to be.” A plan is a future course of action. It is an exercise in problem solving and decision making. Planning is the determination of courses of action to achieve desired goals. Thus, planning is systematic thinking about the ways and means for the accomplishment of pre-determined goals. Planning is necessary to ensure proper utilization of human and non-human resources. It is all-pervasive, it is an intellectual activity, and it helps in avoiding confusion, uncertainties, risks, wastages, etc.
Organizing
It is the process of bringing together physical, financial and human resources and developing productive relationships amongst them for the achievement of organizational goals. According to Henri Fayol, “To organize a business is to provide it with everything useful to its functioning, i.e. raw material, tools, capital and personnel.” To organize a business involves determining and providing human and non-human resources to the organizational structure. Organizing as a process involves:
Identification of activities.
Classification and grouping of activities.
Assignment of duties.
Delegation of authority and creation of responsibility.
Coordinating authority and responsibility relationships.
Staffing
Directing
Scalar Chain: The chain of superiors from the highest authority to the lowest ranks represents the scalar chain. Communications should follow this chain.
Order: This implies order of things and of people: placing all required things and materials in their prescribed place, keeping the working place clean, tidy and safe for employees, and engaging the right people in the right place.
Equity: Equity is the combination of kindness and justice. Employees expect equity from management; they should be treated fairly, justly and kindly in return for their devotion and loyalty.
Stability of Tenure of Personnel: For maximum productivity through efficient workers, a stable work force with stable tenure is needed.
Initiative: Passion, energy and initiative should be encouraged from employees at all levels through the freedom to think out a plan and execute it. This motivates people and increases productivity.
Esprit de Corps: Team or organizational spirit, i.e. cohesion among personnel, is a great source of strength in an organization. Managers should strive to promote team spirit, unity and organizational communication.
Functions of Management
1. Planning
2. Organizing
3. Leading / Directing
4. Controlling
Planning
Planning is a basic managerial function. It is setting goals and deciding in advance how best to achieve them. Planning is predetermining the future and selecting appropriate goals and actions to achieve them. It is the process by which management sets objectives, assesses the future, and develops courses of action to accomplish those objectives.
Planning requires decision making by managers at all levels. It means deciding in advance what to do, how to do it, when to do it and who is to do it. Good planning is also required for the proper utilization of human and non-human resources to accomplish pre-determined goals.
Planning is the core of all the functions of management; it is the foundation upon which the other three should be built. The planning process is ongoing. There are uncontrollable, external factors that constantly affect an organization, both positively and negatively, and depending on the circumstances, these external factors may cause an organization to adjust its course of action in accomplishing certain goals. This is referred to as strategic planning.
During strategic planning, management analyzes the internal and external factors that affect, or may affect, the organization, as well as its objectives and goals. From there, management determines the organization's strengths, weaknesses, opportunities and threats. For management to do this effectively, planning has to be realistic and comprehensive.
Organizing
Organizing is an important function of management, and it is also important for performing the staffing, directing and controlling functions. It is the ongoing process of arranging people and physical resources to carry out plans and accomplish the organizational goals. Organizing involves:
• Defining the tasks required for achieving goals (what tasks are to be done?)
• Grouping the activities in a logical pattern
• Determining manpower requirements
• Establishing authority and responsibility for each position (who reports to whom?)
• Assigning the activities to specific positions and people
Introduction
Starting a career as a staff accountant with the goal of becoming partner in a public
accounting firm is the dream of many accounting majors. However, a career goal of
becoming a chief financial officer or controller is equally viable, and the end result can
be equally rewarding.
This text presents tools and techniques used by cost and management accountants,
and also provides problem-solving methods that are useful in achieving corporate
goals. Such knowledge is important to a student who wants to become a Certified
Public Accountant (CPA) and/or a Certified Management Accountant (CMA). The first
part of this text presents the traditional methods of cost and management accounting,
which are the building blocks for generating information used to satisfy internal and
external user needs. The second part of the text presents innovative cost and
management accounting topics and methods.
Financial Accounting
In the early 1900s, financial accounting was the primary source of information for
evaluating business operations. Companies often used return on investment (ROI) to
allocate resources and evaluate divisional performance. ROI is calculated as income
divided by total assets. Using a single measure such as ROI for decision making was
considered reasonable when companies engaged in one type of activity, operated
only domestically, were primarily labor intensive, and were managed and owned by a
small number of people who were very familiar with the operating processes.
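The ROI measure just described can be sketched in a few lines. The division names and figures below are invented for illustration.

```python
# ROI = income / total assets, computed per division (hypothetical data).
divisions = {
    "Division A": {"income": 250_000, "total_assets": 2_000_000},
    "Division B": {"income": 180_000, "total_assets": 1_200_000},
}

rois = {name: d["income"] / d["total_assets"] for name, d in divisions.items()}

for name, roi in rois.items():
    print(f"{name}: ROI = {roi:.1%}")
# Division A: ROI = 12.5%
# Division B: ROI = 15.0%
```

On ROI alone, Division B would attract resources, which illustrates how a single measure could drive allocation and performance evaluation in the era described.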
As the securities market grew, so did the demand for audited financial statements.
Preparing financial reports was costly, and information technology was limited.
Developing a management accounting system separate from the financial accounting
system would have been cost prohibitive, particularly given the limited benefits that
would have accrued to managers and owners who were intimately familiar with their
company’s narrowly focused operating activity. Collecting information and providing
reports to management on a real-time basis would have been impossible in that era.
Management Accounting
By the mid-1900s, managers were often no longer owners but, instead, individuals
who had been selected for their positions because of their skills in accounting,
finance, or law. These managers frequently lacked in-depth knowledge of a
company’s underlying operations and processes. Additionally, companies began
operating in multiple states and countries and began manufacturing many products in
a non-labor-intensive environment. Trying to manage by using only financial reporting
information sometimes created dysfunctional behavior. Managers needed an
accounting system that could help implement and monitor a company’s goals in a
globally competitive, multiple-product environment. Introduction of affordable
information technology allowed management accounting to develop into a discipline
separate from financial accounting. Under these new circumstances, management
accounting evolved to be independent of financial accounting.
The primary differences between financial and management accounting are shown in
Exhibit 1–1.
Financial accounting information is:
• Historical
• Quantitative
• Monetary
• Verifiable

Management accounting information may be:
• Current or forecasted
• Quantitative or qualitative
• Monetary or nonmonetary
• Timely and, at a minimum, reasonably estimated

Overriding criteria:
Cost Accounting
Upstream Costs:
Downstream Costs:
Product cost is developed in compliance with GAAP for financial reporting purposes,
and, for a manufacturing company, consists of the sum of all factory costs incurred to
make one unit of product. But product cost information can also be developed outside
of the constraints of GAAP to assist management in its needs for planning and
controlling operations.
Product costs cannot be easily compared between the two locations because their
production processes are not similar. Such complications have resulted in the
evolution of the cost accounting database, which includes more than simply financial
accounting measures.
Financial accounting requires the companies to follow the rules and policies framed under GAAP (Generally Accepted Accounting Principles). It indicates whether the company is running at a profit or at a loss.
Cost accounting helps in determining the cost of the product, in controlling it and in making decisions. It makes use of both past and present data for the ascertainment of product cost. There is no specific format for the preparation of cost
accounting statements. It is used by the internal management of the company and
usually the cost accountant prepares this to ascertain the cost of a particular product
taking into account the cost of materials, labor and different overheads. No certain
periodicity is needed for the preparation of these statements and they are needed as
and when required by the management. This makes use of certain rules and
regulations while computing the cost of different products in different industries.
Unlike the above two branches of accounting, management accounting deals with both quantitative and qualitative aspects. It involves the preparation of budgets and forecasts to support viable and valuable future decisions by management. Many
decisions are taken based on the projected figures of the future. There is no question
of rules and regulations to be followed while preparing these statements but the
management can set their own principles. Like cost accounting, in management
accounting also there is no specific time span for its statement and report
preparation. It makes use of both cost and financial statements as well to analyze the
data.
FINANCIAL ACCOUNTING vs. MANAGEMENT ACCOUNTING

PRIMARY USERS
Financial accounting: External (investors, government authorities, creditors).
Management accounting: Internal (managers of the business, employees).

PURPOSE OF INFORMATION
Financial accounting: Help investors, creditors, and others make investment, credit, and other decisions.
Management accounting: Help managers plan and control business operations.

TIMELINESS
Financial accounting: Delayed or historical.
Management accounting: Current and future oriented.

RESTRICTIONS
Financial accounting: GAAP, FASB and SEC.
Management accounting: GAAP does not apply, but information should be restricted to strategic and operational needs.

NATURE OF INFORMATION
Financial accounting: Objective, auditable, reliable, consistent and precise.
Management accounting: More subjective and judgmental, valid, relevant and accurate.

SCOPE
Financial accounting: Highly aggregated information about the overall organization.
Management accounting: Disaggregated information to support local decisions.

BEHAVIORAL IMPLICATIONS
Financial accounting: Concern about adequacy of disclosure.
Management accounting: Concern about how reports will affect employees' behavior.
FINANCIAL ACCOUNTING vs. COST ACCOUNTING

OBJECTIVE
Financial accounting: Provides information about the financial performance and financial position of the business.
Cost accounting: Provides information for the ascertainment of costs, to control costs, and for decision making about costs.

NATURE
Financial accounting: Classifies, records, presents and interprets transactions in terms of money.
Cost accounting: Classifies, records, presents and interprets in a significant manner materials, labor and overhead costs.

RECORDING OF DATA
Financial accounting: Records historical data.
Cost accounting: Records and presents estimated, budgeted data. It makes use of both historical costs and predetermined costs.

USERS OF INFORMATION
Financial accounting: External users like shareholders, creditors, financial analysts, government and its agencies, etc.
Cost accounting: Used by internal management at different levels.

ANALYSIS OF COSTS AND PROFITS
Financial accounting: Shows the profit/loss of the organization.
Cost accounting: Provides details of costs and profit of each product, process, job, etc.

TIME PERIOD
Financial accounting: Statements are prepared for a definite period, usually a year.
Cost accounting: Statements are prepared as and when required.

PRESENTATION OF INFORMATION
Financial accounting: A set format is used for presenting financial information.
Cost accounting: There are no set formats for presenting cost information.
Treasurers and controllers are both financial managers, but they have different
roles. Controllers usually concentrate on what has already happened inside a
company. They prepare financial statements and other reports based on past activity.
Treasurers focus outward and interact with the bankers, shareholders and potential
investors who provide capital. In some small businesses, the owner, a controller and
an outside accountant might share the financial duties.
Qualifications
There are positions for treasurers and controllers in all sizes of companies
except for the smallest, where owners and outside accountants often perform the
necessary financial functions. Treasurers and controllers work in nonprofit
organizations and government agencies as well as private sector businesses,
especially banks and other financial businesses. Their day-to-day functions include
accounting oversight (mainly the concern of controllers), analysis, and reporting.
Treasurers tend to specialize in cash management and risk management.
Focus
CONTROLLERSHIP TREASURERSHIP
Planning and control Provision of capital
Reporting and interpreting Investor relations
Evaluating and consulting Short-term financing
Tax administration Banking and custody
Government reporting Credit and collections
Protection of assets Investments
Economic appraisal Insurance
SUMMARY
*These are not “licenses”, per se, but do represent significant competency in
managerial accounting and financial management skills. These certifications are
sponsored by the Institute of Management Accountants.
In the United States, the CMA Program is conducted by the Institute of Management
Accountants (IMA), the largest US professional organization of accountants.
The PAMA was founded primarily to provide its members with professional and
educational activities that enhance their knowledge of management accounting
principles and methods.
The CMA has four objectives, consistent with the mission of the Philippine
Association of Management Accountants (PAMA) to “promote management
accounting, enhance the capability of its members and foster high standards of
professionalism.”
Identification
Direct costs are costs that are related to a particular cost object and can
economically and effectively be traced to that cost object.
Indirect costs are costs that are related to a cost object, but cannot practically,
economically, and effectively be traced to such cost object. Cost assignment is
done by allocating the indirect cost to the related cost objects.
Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting
Identification
Variable costs are within the relevant range and time period under consideration,
the total amount varies directly to the change in activity level or cost driver, and
the per unit amount is constant.
Fixed costs are within the relevant range and time period under consideration,
the total amount remains unchanged, and the per unit amount varies inversely or
indirectly with the change in the cost driver. Fixed costs may be committed or
discretionary (managed).
Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting
Identification
Product costs of the units sold during the period are recognized as
expenses (cost of goods sold) in the income statement.
Product costs of the unsold units become the cost of inventory and are
treated as an asset in the balance sheet.
Period costs are the non-manufacturing costs that include selling, administrative,
and research and development costs. These costs are expensed in the period of
incurrence and do not become part of the cost of inventory.
Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting
Identification
Sunk/Past or Historical costs are already incurred and cannot be changed by any
decision made now or to be made in the future.
Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting
Identification
Job order costing method is the accumulation of costs by specific jobs (i.e.,
physical units, distinct batches, or job lots). This costing method is appropriate
when products are produced separately, distinct from other jobs, and require
different amounts of materials, labor, and overhead.
Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting
PROCESS COSTING
Process costing accumulates all the costs of operating a process for a period of
time and then divides the cost by the number of units of product that passed
through that process during the period; the result is a unit cost. If the product of
one process becomes the material of the next, a unit cost is computed for each
process.
Identification
Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting
ABC COSTING
Activity-based costing (ABC) has been popularized because of the rapid increase
in the automation of manufacturing process, which has led to a significant
increase in the incurrence of indirect costs and a consequent need for more
accurate cost allocation.
Under the activity-based costing, as the name implies, costs are accumulated by
activity rather than by department or function for purposes of product costing.
Identification
ABC Costing is one means of refining a cost system to avoid what has been
called peanut-butter costing. Inaccurately averaging or spreading costs like
peanut-butter over products that use different amounts of resources results in a
product-cost-cross-subsidization.
Differentiation
Characteristics
Behavior
RELEVANT RANGE
A range of activity that reflects the company’s normal operating range. Within this
relevant range, the cost behavior to be discussed is valid.
Variable
The total amount varies directly with the cost driver, and the amount per unit of
cost driver remains constant.
Fixed
The total amount remains constant, and the amount per unit of cost driver varies
inversely with the cost driver.
Semi-Variable / Mixed
Mixed costs or Total Costs have variable and fixed costs components.
TC = FC + VC
Total variable cost varies directly with the activity level or cost driver:
VC = bx
Where: VC = total variable cost, b = variable cost per cost driver, x = cost driver
Example: If the cost driver is number of units and variable cost per unit is P5,
then VC = 5x. Thus, TC = FC + bx.
Step Cost
Step variable costs have small steps, while step fixed costs have large steps.
A step cost is a cost that does not change steadily with changes in activity
volume, but rather at discrete points. The concept is used when making
investment decisions and deciding whether to accept additional customer orders.
A step cost is a fixed cost within certain boundaries, outside of which it will
change. When stated on a graph, step costs appear to be incurred in a stair step
pattern, with no change over a certain volume range, then a sudden increase,
then no change over the next (and higher) volume range, then another sudden
increase, and so on. The same pattern applies in reverse when the volume of
activity declines.
For example, a facility cost will remain steady until additional floor space is
constructed, at which point the cost will increase to a new and higher level as the
entity incurs new costs to maintain the additional floor space, to heat and air
condition it, insure it, and so forth.
As another example, a company can produce 10,000 widgets during one eight-
hour shift. If the company receives additional customer orders for more widgets,
then it must add another shift, which requires the services of an additional shift
supervisor. Thus, the cost of the shift supervisor is a step cost that occurs when
the company reaches a production requirement of 10,001 widgets. This new
level of step cost will continue until yet another shift must be added, at which
point the company will incur another step cost for the shift supervisor for the night
shift.
Conversely, a company should be aware of step costs when its activity level
declines, so that it can reduce costs in an appropriate manner to maintain
profitability. This may require an examination of the costs of terminating staff,
selling off equipment, or tearing down structures.
The point at which a step cost will be incurred can be delayed by implementing
production efficiencies, which increase the number of units that can be produced
with the existing production configuration. Another option is to offer overtime to
employees, so that the company can produce more units without hiring additional
full-time staff.
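The shift-supervisor example above can be sketched as a step-cost function. The 10,000-widget shift capacity comes from the text; the cost per supervisor is an assumed figure for illustration only.

```python
import math

def supervisor_cost(units, units_per_shift=10_000, cost_per_supervisor=4_000):
    """Step cost: one shift supervisor is needed per shift, including partial shifts.

    units_per_shift mirrors the text's example; cost_per_supervisor is assumed.
    """
    shifts = math.ceil(units / units_per_shift) if units > 0 else 0
    return shifts * cost_per_supervisor

# The cost jumps at 10,001 units, exactly as described above.
print(supervisor_cost(10_000))  # 4000 (one shift)
print(supervisor_cost(10_001))  # 8000 (second shift supervisor triggered)
```

Within each flat stretch the cost behaves like a fixed cost; the jump at each capacity boundary is the "step".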
Similar Terms
In the insurance industry, the data are easily attainable and include the direct
costs of the products and services offered. The customer output-unit-level cost
would be the cost of the activities to sell each product to a policyholder. The
customer batch-level costs can be identified as any cost related to each product
sold, such as the cost of insuring a policyholder. The customer-sustaining costs
include any activity to maintain the policyholder, which may include marketing
costs as well as meeting expenses. The distribution-channel costs involve the
distribution of information and the sale of each product sold; a cost included may
be the salaries of the agents or any licensed staff selling insurance products.
Finally, the corporate-sustaining costs are the costs of activities that cannot be
traced to individual products or customers.
Mixed cost examples include our utilities. For instance, our phone line and
Internet service has a fixed monthly rate; however, when our communication
needs increase we may exceed our data coverage, which leads to overages. The
same applies to our electricity and gas service, which also has a steady rate but
may increase or decrease during different seasons. In both instances the cost
includes a fixed and a variable element.
Step cost examples include our marketing events. Every month our agency is
involved in community events where we have a fixed rate for rent, but the
supplies taken for marketing vary depending on the statistics gathered by the
marketing department. For instance, one event may only require 1,000 supplies,
but at another event our statistics may suggest increasing our supply to 2,500,
which increases our cost for supplies.
A mixed cost contains both a variable and a fixed component. For example, a
cell phone plan that has a flat charge for basic service (the fixed component) plus
a stated rate for each minute of use (the variable component) creates a mixed
cost. A mixed cost does not remain constant with changes in activity, nor does it
fluctuate on a per-unit basis in direct proportion to changes in activity. To simplify
estimation of costs, accountants typically assume that costs are linear rather
than curvilinear. Because of this assumption, the general formula for a straight
line can be used to describe any type of cost within a relevant range of activity.
The straight-line formula is

y = a + bX

Where
y = total cost (the dependent variable),
a = fixed portion of total cost,
b = unit change of variable cost relative to unit changes in activity, and
X = activity base to which y is being related (the predictor, cost driver, or
independent variable).

If a cost is entirely variable, the a value in the formula is zero. If the cost is entirely
fixed, the b value in the formula is zero. If a cost is mixed, it is necessary to
determine formula values for both a and b. Methods of determining these
values, and thereby separating a mixed cost into its variable and fixed
components, include the high–low method, the scattergraph method, and
regression analysis.
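The straight-line cost formula can be sketched in a few lines. The flat fee and per-minute rate below are hypothetical figures in the spirit of the cell-phone-plan example, not values from the text.

```python
def total_cost(a, b, x):
    """y = a + bX: fixed portion a plus variable rate b times activity X."""
    return a + b * x

# Hypothetical plan: P40 flat charge plus P0.25 per minute of use.
assert total_cost(40, 0.25, 0) == 40        # no activity: only the fixed component
assert total_cost(40, 0.25, 300) == 115.0   # 40 + 0.25 * 300
assert total_cost(0, 0.25, 100) == 25.0     # a = 0: an entirely variable cost
```

Setting a = 0 gives a purely variable cost and b = 0 a purely fixed one, matching the special cases described above.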
High-Low
In this method, the fixed and variable elements of the mixed costs are computed
from two data points (periods) – the high and low periods as to activity level or
cost driver.
High–Low Method
The high–low method analyzes a mixed cost by first selecting the highest and
lowest levels of activity in a data set if these two points are within the relevant
range. Activity levels are used because activities cause costs to change, not vice
versa. Occasionally, operations occur at a level outside the relevant range (e.g.,
a special rush order could require excess labor or machine time), or cost
distortions occur within the relevant range (a leak in a water pipe goes unnoticed
for a period of time). Such non-representative or abnormal observations are
called outliers and should be disregarded when analyzing a mixed cost.
Next, changes in activity and cost are determined by subtracting low values from
high values. These changes are used to calculate the b (variable unit cost) value
in the y = a + bX formula as follows:
The b value is the unit variable cost per measure of activity. This value is
multiplied by the activity level to determine the amount of total variable cost
contained in the total cost at either the high or the low level of activity. The fixed
portion of a mixed cost is found by subtracting total variable cost from total cost.
As the activity level changes, the change in total mixed cost equals the change in
activity multiplied by the unit variable cost. By definition, the fixed cost element
does not fluctuate with changes in activity.
The problem below illustrates the high–low method using machine hours and
utility cost information for Mizzou Mechanical. In November 2017, the company
wanted to calculate its predetermined OH rate to use in calendar year 2018.
Mizzou Mechanical gathered information for the prior 10 months’ machine hours
and utility costs. During 2017, the company’s normal operating range of activity
was between 3,500 and 9,000 machine hours per month. Because it is
substantially in excess of normal activity levels, the May observation is viewed as
an outlier and should not be used in the analysis of utility cost.
STEP 1: Select the highest and lowest levels of activity within the relevant range
and obtain the costs associated with those levels. These levels and costs are
9,000 and 4,600 hours, and $3,500 and $2,180, respectively.
STEP 3: Determine the relationship of cost change to activity change to find the
variable cost element.

b = ($3,500 − $2,180) ÷ (9,000 − 4,600) = $1,320 ÷ 4,400 = $0.30 per machine hour

Total variable cost (TVC) at each level of activity is then:

High level of activity: TVC = $0.30 × 9,000 = $2,700
Low level of activity: TVC = $0.30 × 4,600 = $1,380
STEP 5: Subtract total variable cost from total cost at the associated level of
activity to determine fixed cost: a = $3,500 − $2,700 = $800 (equivalently,
$2,180 − $1,380 = $800).
STEP 6: Substitute the fixed and variable cost values in the straight-line formula
to get an equation that can be used to estimate total cost at any level of activity
within the relevant range.
y = $800 + $0.30X
One potential weakness of the high–low method is that outliers can inadvertently
be used in the calculation. Estimates of future costs calculated from a line drawn
using such points will not indicate actual costs and probably are not good
predictions. A second weakness of this method is that it considers only two data
points. A more precise method of analyzing mixed costs is least squares
regression analysis.
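The high–low steps above can be sketched as a short function, checked here against the Mizzou Mechanical figures from the illustration (9,000 and 4,600 machine hours; $3,500 and $2,180).

```python
def high_low(high_x, high_y, low_x, low_y):
    """High-low method: returns (a, b) for y = a + bX.

    Outliers should be excluded before selecting the two points,
    as the text cautions.
    """
    b = (high_y - low_y) / (high_x - low_x)   # variable cost per unit of activity
    a = high_y - b * high_x                   # fixed portion at either point
    return a, b

# Mizzou Mechanical utility-cost data from the illustration above.
a, b = high_low(9_000, 3_500, 4_600, 2_180)
print(round(b, 2), round(a, 2))   # 0.3 800.0, i.e., y = $800 + $0.30X
```

Either the high point or the low point yields the same fixed cost, since both lie on the fitted line by construction.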
Scatter Graph
Various costs (the dependent variable) are plotted on a vertical line (y-axis) and
measurement figures (cost drivers or activity levels) are plotted on a horizontal
line (x-axis). A straight line is drawn through the points and, using this line, the
rate of variability and the fixed cost are computed.
Scatter graph or “visual fit” analysis plots the observations on a graph and draws
conclusions from the relationships depicted by those observations. This method
uses the principles found in a regression line. A regression line is a straight line
that depicts the relationship of two variables, one independent and the other
dependent.
The scatter graph method derived its name from its process where observations
are scattered in a graph depicting the relationship of x and y variables where,
normally, “x” represents the horizontal line or the units of measure and “y”
represents the vertical line or the amount. In using this model in segregating
fixed and variable elements of costs, the following steps are followed:
Draw the x (horizontal) and y (vertical) axes in the graph and scale the axes.
Plot the observations, then draw a straight line through the middle of the plotted
points following the depicted relationship between “x” and “y”, so that the
distances of the points above the line equal the distances of the points below
the line.
Compute “b” by choosing two “y” values as Y1 and Y2 and getting the
corresponding values of X1 and X2. The value of “b” equals the difference in the
values of “y” divided by the difference in the values of “x”, that is,
b = (Y2 − Y1) ÷ (X2 − X1).
Compute “a” by substituting a point on the line into the equation: a = y − bx.
Assign the computed values of “a” and “b” in the regression line equation
y = a + bx.
Least-Squares Regressions
A regression line is any line that goes through the means (or averages) of the
independent and dependent variables in a set of observations. As shown in
Exhibit 3–7, numerous straight lines can be drawn through any set of data
observations, but most of these lines would provide a poor fit to the data.
Using the machine hour and utility cost data for Mizzou Mechanical (excluding
the May outlier), the following calculations can be made:
The b (variable cost) and a (fixed cost) values for the company’s utility costs are
$0.35 and $354.62, respectively. These values are close to, but not exactly the
same as, the values computed using the high–low method.
By using these values, predicted costs (yc values) can be computed for each
actual activity level. The line drawn through all of the yc values will be the line of
best fit for the data. Because actual costs do not generally fall directly on the
regression line and predicted costs naturally do, these two costs differ at their
related activity levels. It is acceptable for the regression line not to pass through
any of the actual observation points because the line has been determined to
mathematically “fit” the data. Like all mathematical models, regression analysis is
based on certain assumptions that produce limitations on the model’s use. Three
of these assumptions follow; others are beyond the scope of the text. First, for
regression analysis to be useful, the independent variable must be a valid
predictor of the dependent variable; the relationship can be tested by determining
the coefficient of correlation. Second, like the high–low method, regression
analysis should be used only within a relevant range of activity. Third, the
regression model is useful only as long as the circumstances existing at the time
of its development remain constant; consequently, if significant additions are
made to capacity or if there is a major change in technology usage, the
regression line will no longer be valid.
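The least-squares calculation described above can be sketched directly from the definition that the line passes through the means of the two variables. The observations below are hypothetical, since the Mizzou data set itself is not reproduced in the text.

```python
def least_squares(xs, ys):
    """Simple least-squares regression: returns (a, b) for y = a + bX."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y over the variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    # The regression line passes through the means, so a falls out directly.
    a = mean_y - b * mean_x
    return a, b

# Hypothetical machine-hour / utility-cost observations (assumed data).
hours = [4_600, 5_000, 6_200, 7_500, 9_000]
costs = [2_180, 2_300, 2_700, 3_050, 3_500]
a, b = least_squares(hours, costs)
```

Predicted costs (the yc values) at each activity level are then a + b times that level, and the line through them is the line of best fit for the sample.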
Once a method has been selected and mixed overhead costs have been
separated into fixed and variable components, a flexible budget can be
developed to indicate the estimated amount of overhead at various levels of the
denominator activity.
The visual-fit method suffers from a lack of objectivity. Given that the cost line is
created by visual approximation or “eyeballing,’ different cost analysts will likely
produce different lines. The high-low method, on the other hand, is objective.
However, it uses only two data points and ignores the rest, thus generalizing
about cost behavior by relying on only a very small percentage of possible data
observations.
The multiple-regression line has all the same properties of the simple LSR line,
but more than one independent variable is taken into consideration. The use of
more independent variables can better explain accompanying changes in cost.
It will provide management with cost and profit data for profit planning, policy
formulation, and decision making.
It will provide data in determining the optimal level and mix of output to be
produced with available resources.
The variables of profit are the unit sales price, unit variable costs, total fixed
costs, sales volume, and sales mix. Sales mix is considered when a business
sells two or more products. The assumptions for these variables as they relate
to profit are as follows:
Assumptions
Variables of Profit        Basic        Sensitivity
Sales Volume               Changes      Changes
Unit Sales Price           Constant*    Changes
Unit Variable Costs        Constant     Changes
Total Fixed Costs          Constant     Changes
Sales Mix                  Constant     Changes
*Constant means linear
The unit sales price once established is considered constant for planning
purposes. The sales price is impacted by competition, variability in supply and
demand, laws, technology, distribution channels, emerging practices, input of
production prices, taxes and subsidies, seasonality, and other determinants.
The unit variable costs, once established, are considered constant for planning
purposes, although they are affected by changes in the prices of suppliers,
labor, rentals, telecommunications, fuel, warehousing, distribution, taxes and
licenses, agency costs, and other such determinants. The total fixed costs and
expenses, once established, are also considered constant for planning purposes.
Sales price is largely not within the bounds of managerial control or influence in
the short run; management can only control costs. The process of managing
costs and sales volume as they impact profit is known as cost-volume-profit
analysis.
All of the above assumptions are anchored on the general assumption that costs
and expenses are separable into their fixed and variable components. Cost-
volume-profit analysis also assumes that labor productivity, production
technology, and market conditions will not change. Or if they change, their
impact would be included in the sensitivity analysis. Also, it is assumed that there
is no inflation, or if it can be forecasted, it is already included in the CVP analysis
data.
The assumptions that sales price, unit variable costs, and total fixed costs are
invariable are made to establish ballpark figures. These figures serve as initial
points for understanding the results of business operations. The assumptions
used in the basic CVP analysis are stiff and unrealistic, and are not reflective of
practical business decisions. In the real world, changes abound and their
impacts are sometimes profound.
Sales prices change; unit variable costs and total fixed costs also change, and so
does the sales mix. The process of considering the impact on profit of changes in
its variables is called CVP sensitivity analysis.
The total revenue function is based on the assumption that the price per unit
is constant regardless of the volume of sales and production, which is
normally not realistic. Say demand declines: wouldn't the company lower the
selling price in an effort to boost sales? On the other hand, if demand is high,
the firm has the best chance to increase the price and improve its profit
margin.
In line with these, breakeven analysis could be expanded and the cost curve
would change from linear to non-linear. This situation might exist where the firm
has a loss at low sales volume, earns a profit over some range of sales volumes,
and then has a net loss at a very high sales volume.
The firm might want to consider changing its level of fixed costs. Higher fixed
costs are not good, other things held constant. Higher fixed costs are associated
with more mechanized or automated processes; however, such processes reduce
variable costs per unit. Profit under different production setups and price-cost
situations is best presented and analyzed using the cost structure and operating
leverage.
1. Sales
a. Selling price
b. Units or volume
2. Total fixed costs
3. Unit variable costs
4. Sales mix
The costs and expenses in the Contribution Margin Income Statement are
classified as to behavior (variable and fixed). The amount of contribution margin,
which is the difference between sales and variable costs, is shown. The format is
as follows:

Sales                                 P xx
Less: Variable costs and expenses       xx
Contribution margin                   P xx
Less: Fixed costs and expenses          xx
Operating income                      P xx
BREAK-EVEN POINT – the sales volume level (in pesos or in units) where total
revenues equals total costs, that is, there is neither profit nor loss.
GRAPHICAL METHOD
BEPp = FC ÷ CMR

BEPu = FC ÷ CM/u
BEPp = FC ÷ WaCMR

BEPu = FC ÷ WaCM/u
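The break-even formulas above translate directly into code. The selling price, variable cost, and fixed costs below are assumed figures; adding a target profit to fixed costs in the numerator gives the required sales level, as in the target-profit extension that follows.

```python
def breakeven_units(fixed_costs, cm_per_unit, target_profit=0):
    """BEPu = (FC + target profit) / CM per unit."""
    return (fixed_costs + target_profit) / cm_per_unit

def breakeven_pesos(fixed_costs, cm_ratio, target_profit=0):
    """BEPp = (FC + target profit) / CMR."""
    return (fixed_costs + target_profit) / cm_ratio

# Hypothetical product: price P10, variable cost P6 -> CM/u = P4, CMR = 0.40.
assert breakeven_units(20_000, 4) == 5_000                 # break-even in units
assert round(breakeven_pesos(20_000, 0.40)) == 50_000      # break-even in pesos
assert breakeven_units(20_000, 4, target_profit=8_000) == 7_000
```

At 5,000 units (P50,000 of sales) contribution margin exactly covers fixed costs, so profit is zero.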
d. Required Selling Price, Unit Sales And Peso Sales To Achieve A Target
Profit
The assumptions
Another important tool that managers use to help them choose between
alternative cost structures is the indifference point. The indifference point is the
level of volume at which total costs, and hence profits, are the same under both
cost structures. If the company operated at that level of volume, the alternative
used would not matter because income would be the same either way. At the
cost indifference point, total costs (fixed cost and variable cost) associated with
the two alternatives are equal.
There may be two methods or two alternatives of doing a thing, say two methods
of production. It is also possible that at one level of activity one production
method is superior, and at another level the other is. There is a need to know at
which level of production it becomes desirable to shift from one production
method to the other. This level or point is known as the cost indifference point,
and at this point the total costs of the two production methods are the same.
Cost Indifference Point = Differential fixed cost/Differential variable cost per unit
For example, assume two methods of production, A and B, for a new product:
method A has fixed costs of Rs 40,000 and method B has fixed costs of
Rs 95,000, while method A's variable cost per unit is Rs 3 higher than method
B's. With Q equal to unit volume, the indifference point is:

Rs 55,000 ÷ Rs 3 = 18,333 units
At volumes below 18,333 units, production A gives lower total costs (and higher
profits); above 18,333 units, production B gives higher profits.
It may be noticed that the break-even points for the two methods are:
Production method A:
Rs 40,000/Rs 3 = 13,333 units
Production method B:
Rs 95,000/Rs 6 = 15,833 units
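The indifference-point arithmetic can be checked with a short sketch. The Rs 9 and Rs 6 variable costs per unit are an assumed split that preserves the Rs 3 differential implied by the break-even figures; only the differentials matter to the result.

```python
def cost_indifference_point(fixed_a, var_a, fixed_b, var_b):
    """Volume Q at which fixed_a + var_a * Q == fixed_b + var_b * Q.

    Equals differential fixed cost / differential variable cost per unit.
    """
    return (fixed_b - fixed_a) / (var_a - var_b)

# Production A: FC Rs 40,000; Production B: FC Rs 95,000.
# Assumed variable costs of Rs 9 (A) vs Rs 6 (B) give the Rs 3 differential.
q = cost_indifference_point(40_000, 9, 95_000, 6)
print(round(q))   # 18333
```

Below this volume the low-fixed-cost method A is cheaper; above it, the low-variable-cost method B wins, matching the discussion above.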
Analytical tools such as the indifference point, margin of safety, and CVP graph
help managers evaluate the alternatives, but the decision depends on their
attitudes toward risk and return. If they want to avoid risk, they will choose
production A, forgoing the potential for higher profits from production B. If they
are venturesome, they probably will be willing to take some risk for the
potentially higher returns and choose production B.
Sales Mix
Sales mix can be stated two different ways--in terms of units and in terms of
sales dollars. To illustrate, suppose Jama Giants produces two products: cakes
and pies. Sales mix in units differs from sales mix in revenue dollars because
both the selling price of cakes and pies and the number of pies and cakes sold
differ. The company has provided the following expected sales information for
its products for the month of May:
The unit sales mix is 2,000 cakes to 6,000 pies. However, sales mix is always
stated in lowest terms, a concept you learned in middle school math classes.
'Lowest terms' is always expressed in whole numbers. Fractions and decimals
are unacceptable because partial units cannot be sold. Reducing to lowest
terms, the sales mix in units is:
2000 : 6000 ==> 2 : 6 ==> 1 : 3
The unit sales mix tells us that Jama Giants sells one cake for every three pies
sold.
The company's sales mix based on sales dollars is determined in much the same
manner by comparing revenues of each product and then reducing to lowest
terms:
$24,000 : $36,000 ==> 2 : 3
The revenue sales mix tells us that Jama Giants sells $2 of cakes for every $3
of pies.
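Reducing a sales mix to lowest terms is a greatest-common-divisor computation, sketched here with the Jama Giants figures from the example.

```python
from functools import reduce
from math import gcd

def lowest_terms(*quantities):
    """Reduce a sales mix to lowest whole-number terms."""
    g = reduce(gcd, quantities)
    return tuple(q // g for q in quantities)

# Jama Giants: 2,000 cakes and 6,000 pies; $24,000 and $36,000 of revenue.
assert lowest_terms(2_000, 6_000) == (1, 3)      # unit sales mix 1:3
assert lowest_terms(24_000, 36_000) == (2, 3)    # revenue sales mix 2:3
```

Dividing by the greatest common divisor guarantees whole numbers, consistent with the rule that partial units cannot be sold.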
In order to consider the sales mix when calculating the breakeven point in units
for multiple products, you must determine a weighted average contribution
margin amount, which considers the differing selling prices, variable costs per
unit, and number of units for each product.
When calculating the breakeven point or target profit in units, use the weighted
average contribution margin (WACM) per unit. When calculating the breakeven
point in sales dollars, use the weighted average contribution
margin ratio (WACMR). The table below summarizes which contribution margin
amount to use when calculating the breakeven point or target profit for single and
multiple products.
Which Contribution Amount to Use to Calculate the Breakeven Point or Target Profit

When calculating the breakeven point or target profit in units:
For a single product: contribution margin per unit
For multiple products: weighted average contribution margin per unit (based on the unit sales mix)

When calculating the breakeven point or target profit in sales dollars:
For a single product: contribution margin ratio
For multiple products: weighted average contribution margin ratio (based on the revenue sales mix)
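A sketch of the weighted average contribution margin calculation for a multi-product breakeven. The contribution margins per unit and the fixed costs below are assumed figures, since the text gives no variable cost data for Jama Giants.

```python
def weighted_average_cm_per_unit(mix_units, cm_per_unit):
    """WACM per unit = sum(mix_i * CM_i) / sum(mix_i), using the unit sales mix."""
    total_cm = sum(m * cm for m, cm in zip(mix_units, cm_per_unit))
    return total_cm / sum(mix_units)

# Assumed: cakes CM/u = $5, pies CM/u = $3, unit sales mix 1:3 (as above),
# and assumed fixed costs of $7,000.
wacm = weighted_average_cm_per_unit([1, 3], [5, 3])   # (1*5 + 3*3) / 4 = 3.5
total_units = 7_000 / wacm
print(total_units)   # 2000.0 total units: 500 cakes and 1,500 pies per the 1:3 mix
```

The total breakeven units are then split back into products using the same unit sales mix.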
MARGIN OF SAFETY
The amount of peso sales or the number of units by which actual or budgeted
sales may be decreased without resulting in a loss.
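A minimal margin-of-safety sketch with assumed sales figures; the ratio form expresses the cushion as a fraction of actual or budgeted sales.

```python
def margin_of_safety(actual_sales, breakeven_sales):
    """Peso (or unit) amount by which sales can drop before a loss results."""
    return actual_sales - breakeven_sales

def margin_of_safety_ratio(actual_sales, breakeven_sales):
    """Margin of safety expressed as a fraction of actual sales."""
    return (actual_sales - breakeven_sales) / actual_sales

# Assumed: budgeted sales of P100,000 against a P60,000 break-even point.
assert margin_of_safety(100_000, 60_000) == 40_000
assert margin_of_safety_ratio(100_000, 60_000) == 0.4
```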
QUANTITY / USAGE
Direct material quantity variance (also called the direct material usage/efficiency
variance) is the product of standard price of a unit of direct material and the
difference between standard quantity of direct material allowed and actual
quantity of direct material used. The formula to calculate direct material quantity
variance is:
DM Quantity Variance = ( SQ − AQ ) × SP
Where,
SQ is the standard quantity allowed
AQ is the actual quantity of direct material used
SP is the standard price per unit of direct material
Standard quantity allowed (SQ) is calculated as the product of the standard
quantity of direct material per unit and the actual units produced.
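The quantity-variance formula above can be sketched directly. The quantities and standard price below are hypothetical; under the (SQ − AQ) × SP form given in the text, a positive result is favorable (less material used than the standard allows) and a negative result is unfavorable.

```python
def dm_quantity_variance(std_qty_per_unit, units_produced, actual_qty, std_price):
    """DM Quantity Variance = (SQ - AQ) x SP, with SQ = std qty/unit x actual units.

    Positive -> favorable (used less than allowed); negative -> unfavorable.
    """
    sq = std_qty_per_unit * units_produced          # standard quantity allowed
    return (sq - actual_qty) * std_price

# Hypothetical: 3 kg allowed per unit, 400 units made, 1,150 kg used, P5/kg.
variance = dm_quantity_variance(3, 400, 1_150, 5)
print(variance)   # 250 -> favorable (SQ of 1,200 kg vs AQ of 1,150 kg)
```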
During January 2010, Sanjay Corporation produced 400 mountain bikes (the
actual quantity made by Sanjay Corporation in January 2010). The top half of
Exhibit 7–4 shows the standard quantities and costs for that production, while the
bottom half of the exhibit shows actual quantities and costs. This information is
used to compute the January 2010 variances.
Material Variances
The general variance analysis model is used to compute price and quantity
variances for each type of direct material. To illustrate the calculations, direct
material item WF-05 is used.
The material price variance (MPV) indicates whether the amount paid for
material was less or more than standard price. For item WF-05, the price paid
was $19 rather than the standard price of $20 per unit. This variance is favorable
because the actual price is less than the standard. A favorable variance reduces
the cost of production and, thus, a negative sign indicates a favorable variance.
The MPV can also be calculated as follows:
The purchasing manager should be able to explain why the price paid for item
WF-05 was less than standard.
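The MPV computation for WF-05 can be sketched as follows; the 413-unit actual quantity is taken from the later point-of-purchase discussion in this section:

```python
def material_price_variance(actual_price, std_price, actual_qty):
    # (AP - SP) x AQ; a negative sign indicates a favorable variance
    return (actual_price - std_price) * actual_qty

# WF-05: $19 paid versus the $20 standard; 413 units used (quantity taken from
# the point-of-purchase discussion later in this section)
mpv = material_price_variance(19, 20, 413)   # -413, i.e. $413 favorable
```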
The material quantity variance (MQV) indicates whether the actual quantity used
was less or more than the standard quantity allowed for the actual output. This
difference is multiplied by the standard price per unit of material because
quantities cannot be entered into the accounting records. Production used 13
more units of WF-05 than the standard allowed, resulting in a $260 unfavorable
material quantity variance. The MQV can be calculated as follows:
The production manager should be able to explain why the additional WF-05
components were used in January.
The total material variance (TMV) is the summation of the individual variances or
can also be calculated by subtracting the total standard cost for component WF-
05 from the total actual cost of WF-05:
Price and quantity variance computations must be made for each direct material
component and these component variances are summed to obtain the total price
and quantity variances. Such a summation, however, does not provide useful
information for cost control.
A total variance for a cost component generally equals the sum of the price and
usage variances.
An exception to this rule occurs when the quantity of material purchased is not
the same as the quantity of material placed into production. Because the material
price variance relates to the purchasing (rather than the production) function, the
point-of-purchase model calculates the material price variance using the quantity
of materials purchased (Qp) rather than the quantity of materials used (Qu).
The general variance analysis model is altered slightly to isolate the variance as
early as possible to provide more rapid information for management control
purposes.
Assume that Sanjay Corporation purchased 450 WF-05s at $19 per unit during
January, but only used 413 for the 400 bikes produced that month. Using the
point-of-purchase variance model, the computation for the material price
variance is adjusted, but the computation for the material quantity variance
remains the same as previously shown. The point-of-purchase material variance
model is a “staggered” one as follows:
The material quantity variance is still computed on the actual quantity used and,
thus, remains at $260 U. However, because the price and quantity variances
have been computed using different bases, they should not be summed. Thus,
no total material variance can be meaningfully determined when the quantity of
material purchased differs from the quantity of material used.
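The point-of-purchase model described above can be sketched with the Sanjay figures (450 units purchased, 413 used, 400 allowed):

```python
def point_of_purchase_variances(actual_price, std_price, qty_purchased,
                                qty_used, std_qty_allowed):
    # Price variance uses the quantity PURCHASED; quantity variance uses the
    # quantity USED. Negative = favorable, positive = unfavorable here.
    mpv = (actual_price - std_price) * qty_purchased
    mqv = (qty_used - std_qty_allowed) * std_price
    return mpv, mqv

# Sanjay: 450 units bought at $19 ($20 standard); 413 used vs. 400 allowed
mpv, mqv = point_of_purchase_variances(19, 20, 450, 413, 400)
# mpv = -450 ($450 F); mqv = 260 ($260 U); the two use different bases,
# so no meaningful total is computed
```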
The above discussion focused on a single material and one labor category in the
production of the product. Most companies, however, use a combination of many
materials and various classifications of direct labor to produce the goods.
When the company’s product uses more than one material, the goal is to
combine those materials in such a way that can produce the desired product
quality in the most cost-beneficial manner. Mix and yield variances rest on the
assumption that materials are substitutes for one another without affecting
product quality. If this assumption does not hold, changing the mix cannot
improve the yield and may even prove to be wasteful.
Yield is the quantity of output resulting from a specified quantity of input.
The yield ratio is the expected or actual relationship between input and output.
MIX VARIANCE – the difference between the standard cost of materials at the
actual mix and actual quantity and the standard cost of materials at the
standard mix and actual quantity; it measures the effect of substituting a
nonstandard mix of materials during the production process.
YIELD VARIANCE – the difference between the actual total quantity of input and
the standard total quantity allowed based on output, priced at the standard mix
and standard price.
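The two definitions can be sketched with hypothetical data (the 60/40 mix, prices, and quantities below are illustrative, not from the text):

```python
def material_mix_yield(std_mix, std_prices, actual_qty, std_total_allowed):
    # std_mix: standard proportion of each material in total input
    # actual_qty: actual quantity of each material used
    # Positive result = unfavorable under this sign convention
    actual_total = sum(actual_qty.values())
    # Mix variance: actual mix vs. standard mix, both at the actual total quantity
    mix = sum((actual_qty[m] - std_mix[m] * actual_total) * std_prices[m]
              for m in std_mix)
    # Yield variance: actual total input vs. standard total input allowed,
    # at the standard mix and standard prices
    wavg_price = sum(std_mix[m] * std_prices[m] for m in std_mix)
    yld = (actual_total - std_total_allowed) * wavg_price
    return mix, yld

# Hypothetical: standard mix is 60% material A (P10/kg) and 40% B (P5/kg);
# 1,050 kg actually used versus 1,000 kg allowed for the output achieved
mix_v, yield_v = material_mix_yield({"A": 0.6, "B": 0.4}, {"A": 10, "B": 5},
                                    {"A": 700, "B": 350}, 1_000)
# mix_v is about P350 unfavorable; yield_v is P400 unfavorable
```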
EFFICIENCY
RATE
MIX
YIELD
Labor Variances
The labor variances for mountain bicycle production in January 2010 would be
computed on a departmental basis and then summed across departments. To
illustrate the computations, the Painting Department data are used. Each
mountain bike requires 3 hours in the Painting Department; thus, the standard
labor time allowed for 400 bikes is (400 x 3) or 1,200 hours. The actual labor time
used in the Painting Department is shown on Exhibit 7–4 as 1,100 hours.
Calculations of the labor variances are as follows:
The labor rate variance (LRV) is the difference between the actual wages paid to
labor for the period and the standard cost of actual hours worked. In January,
there was no difference between the actual and the standard wage rates per
hour. The labor efficiency variance (LEV) indicates whether the amount of time
worked was less or more than the standard quantity allowed for the actual
output. This difference is multiplied by the standard rate per hour of labor time. In
January, the Painting Department worked 100 hours less than the standard
allowed to produce 400 mountain bikes. The LRV and LEV can also be
computed as follows:
The total labor variance for the Painting Department can be calculated as
$1,200F by either
1. Subtracting the total standard labor cost ($14,400) from the total actual labor
cost ($13,200) or
2. Summing the labor rate and labor efficiency variances.
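The Painting Department figures can be verified with a short sketch ($13,200 actual payroll over 1,100 hours implies a $12 actual rate, equal to the standard rate):

```python
def labor_variances(actual_rate, std_rate, actual_hours, std_hours_allowed):
    # LRV prices the actual hours; LEV prices the hour difference at standard rate.
    # Negative = favorable under these formulas.
    lrv = (actual_rate - std_rate) * actual_hours
    lev = (actual_hours - std_hours_allowed) * std_rate
    return lrv, lev

# Painting Department: $13,200 actual payroll / 1,100 hours = $12 actual rate,
# equal to the standard rate; standard allowed = 400 bikes x 3 hrs = 1,200 hours
lrv, lev = labor_variances(13_200 / 1_100, 12, 1_100, 1_200)
# lrv = 0; lev = -1,200, i.e. the $1,200 F total labor variance
```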
LABOR RATE VARIANCE – the difference between the actual rate and the standard
rate, applied to the actual mix and actual total hours; it measures the cost of
paying workers at other than standard rates.
LABOR MIX VARIANCE – the difference between the standard rate at the actual mix
and the standard rate at the standard mix, both for the actual total hours. It
is the financial effect of changing the proportion of higher- and lower-paid
workers in production.
LABOR YIELD VARIANCE – the difference between total labor cost at the standard
rate and standard mix for the actual total hours and that for the standard total
hours; it reflects the monetary impact of using more or fewer total hours than
the standard allowed. The sum of the labor mix and yield variances equals the
labor efficiency variance.
TWO-WAY METHOD
Controllable (budget) variance
Volume (non-controllable) variance
THREE-WAY METHOD
Spending variance
Variable efficiency variance
Volume variance
FOUR-WAY METHOD
Variable spending variance
Fixed spending variance
Variable efficiency variance
Volume variance
Overhead Variances
Sanjay Corporation expected to produce 5,000 mountain bikes in 2010, requiring
48,750 direct labor hours (9.75 DLHs each). At that level of direct labor hours
(DLHs), budgeted variable overhead costs were calculated as $682,500 and
budgeted annual fixed overhead costs were $120,000. Company accountants decided
to set the variable overhead (VOH) rate using direct labor hours and the fixed
overhead (FOH) rate using number of mountain bikes as follows:
Variable Overhead
The difference between actual VOH and budgeted VOH based on actual hours is
the variable overhead spending variance. VOH spending variances are caused
by both component price and volume differences. For example, an unfavorable
variable overhead spending variance could be caused by either paying a higher
price or using more indirect material than the standard allows. Variable overhead
spending variances associated with price differences can occur because, over
time, changes in VOH prices have not been included in the standard rate. For
example, average indirect labor wage rates or utility rates could have changed
since the predetermined VOH rate was computed. Managers usually have little
control over prices charged by external parties and should not be held
accountable for variances arising because of such price changes. In these
instances, the standard rates should be adjusted.
The difference between budgeted VOH for actual hours and applied VOH is the
variable overhead efficiency variance.
This variance quantifies the effect of using more or less of the activity or resource
that is the base for VOH application. For example, Sanjay Corporation applies
VOH to mountain bikes using direct labor hours. If Sanjay uses direct labor time
inefficiently, higher variable overhead costs will occur. When actual input
exceeds standard input allowed, production operations are considered to be
inefficient. Excess input also indicates that an increased VOH budget is needed
to support the additional activity base being used.
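The spending/efficiency split described above can be sketched with hypothetical numbers (the $14 rate, $15,000 actual VOH, and hours below are illustrative):

```python
def voh_variances(actual_voh, voh_rate, actual_hours, std_hours_allowed):
    # Spending: actual VOH vs. budgeted VOH at actual hours.
    # Efficiency: budgeted VOH at actual hours vs. applied VOH.
    # Negative = favorable under these formulas.
    budget_at_actual_hours = voh_rate * actual_hours
    applied_voh = voh_rate * std_hours_allowed
    spending = actual_voh - budget_at_actual_hours
    efficiency = budget_at_actual_hours - applied_voh
    return spending, efficiency

# Hypothetical: $14 VOH rate per DLH, $15,000 actual VOH, 1,100 actual hours,
# 1,200 standard hours allowed for the output achieved
sp, eff = voh_variances(15_000, 14, 1_100, 1_200)
# sp = -400 ($400 F); eff = -1,400 ($1,400 F)
```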
Nurseries have to be careful about the storage of seeds, which can rapidly
deteriorate with high temperature or humidity. Spoiled seeds will create a higher
variable overhead spending variance for future greenhouse operations.
Fixed Overhead
The total fixed overhead (FOH) variance is divided into price and volume
components by inserting budgeted FOH in the middle column of the general
variance analysis model as follows:
The left column is the total actual fixed overhead incurred. As discussed in
Chapter 3, actual FOH cost is debited to Fixed Manufacturing Overhead Control
and credited to various accounts. Budgeted FOH is a constant amount
throughout the relevant range of activity and was the amount used to develop the
predetermined FOH rate; thus, the middle column is a constant figure regardless
of the actual quantity of input or the standard quantity of input allowed.
Total budgeted FOH for Sanjay Corporation for 2010 is given in Exhibit 7–3 as
$487,500. Assuming that FOH is incurred steadily throughout the year, the
monthly budgeted FOH is $40,625. Using the information in Exhibit 7–4, the FOH
variances for mountain bike production are calculated as follows:
The difference between actual and budgeted FOH is the fixed overhead
spending variance. This amount normally represents the differences between
budgeted and actual costs for the numerous FOH components, although it can
also reflect resource mismanagement. Individual FOH components would be
shown in the company’s flexible overhead budget, and individual spending
variances should be calculated for each component.
As with variable overhead, applied FOH is related to the predetermined rate and
the standard quantity for the actual production level achieved. Relative to FOH,
the standard input allowed for the achieved production level measures capacity
utilization for the period. The fixed overhead volume variance is the difference
between budgeted and applied FOH. This variance is caused solely by producing
at a level that differs from the level that was used to compute the predetermined
FOH rate. In the case of Sanjay Corporation, the $10 predetermined FOH rate
was computed by dividing $487,500 of budgeted FOH cost by a capacity level of
48,750 DLHs for 5,000 bikes. Had any other capacity level been chosen, the
predetermined FOH rate would have been a different amount, even though the
$487,500 budgeted fixed overhead would have remained the same. For
example, assume the company chose 4,800 bikes as the expected capacity for
2010:
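The rate and volume computations can be sketched numerically; the 9.75 DLHs per bike is derived from the 48,750 DLHs quoted for 5,000 bikes:

```python
# Budgeted FOH ($487,500) is constant, but the predetermined rate depends on the
# capacity chosen. DLHs per bike = 48,750 / 5,000 = 9.75 (derived from the text).
budgeted_annual_foh = 487_500
rate_at_5000_bikes = budgeted_annual_foh / 48_750           # $10.00 per DLH
rate_at_4800_bikes = budgeted_annual_foh / (4_800 * 9.75)   # about $10.42 per DLH

def foh_volume_variance(budgeted_foh, foh_rate, std_hours_allowed):
    # Budgeted FOH minus applied FOH; positive = unfavorable (capacity underused)
    return budgeted_foh - foh_rate * std_hours_allowed

# January: 400 bikes x 9.75 DLHs = 3,900 standard hours; monthly budgeted FOH
# of $40,625 ($487,500 / 12)
vol = foh_volume_variance(40_625, 10, 3_900)   # $1,625 unfavorable
```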
If actual capacity usage differs from that used in determining the predetermined
FOH rate, a volume variance will arise because, by using a predetermined rate
per unit of activity, fixed overhead is treated as if it were a variable cost even
though it is not.
Preferably, such actions should be taken before production rather than after it.
Efforts made after production is completed might improve next period’s
operations but will have no impact on past production.
If the accounting system does not separate variable and fixed overhead costs,
insufficient data will be available to compute four overhead variances. Use of a
combined (variable and fixed) predetermined OH rate requires alternative
overhead variance computations. One approach is to calculate only the total
overhead variance, which is the difference between total actual overhead and
total overhead applied to production. The amount of applied overhead is found
by multiplying the combined rate by the standard quantity allowed for the actual
production. The one-variance approach is as follows:
Like other total variances, the total overhead variance provides limited
information to managers. For Sanjay Corporation, the total overhead variance is
calculated as follows:
Note that this amount is the same as the summation of the $3,816 F total VOH
variance and the $500 F total FOH variance computed under the four-variance
approach.
The middle column is the expected total overhead cost for the period’s actual
output. This amount represents total budgeted VOH at the standard quantity
measure allowed plus the budgeted FOH, which is constant at all activity levels
in the relevant range.
The budget variance equals total actual overhead minus budgeted overhead for
the period’s actual output. This variance is also referred to as the controllable
variance because managers are able to exert influence on this amount during the
short run. The difference between total applied overhead and budgeted overhead
for the period’s actual output is the volume variance; this variance is the same as
would be computed under the four-variance approach.
Note that the budget variance amount is the same as the summation of the $736 F
VOH spending variance, the $3,080 F VOH efficiency variance, and the $2,125 F
FOH spending variance computed under the four-variance approach. The
$1,625 U volume variance is the same as the volume variance computed under
the four-variance approach.
Inserting another column between the left and middle columns of the two-
variance model provides a three-variance analysis by separating the budget
variance into spending and efficiency variances. The new column represents the
flexible budget based on the actual input measure(s). The three-variance model
is as follows:
Note that the OH spending variance amount is the same as the summation of the
$736 F VOH spending variance and the $2,125 F FOH spending variance computed
under the four-variance approach. The $3,080 F OH efficiency variance
is the same as the VOH efficiency variance computed under the four-variance
approach, and the $1,625 U volume variance is the same as the volume variance
computed under the four-variance approach.
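The interrelationships among the one-, two-, three-, and four-variance approaches can be verified arithmetically, using the variance amounts quoted above (favorable shown as negative, unfavorable as positive):

```python
# Four-variance amounts from the text
voh_spending = -736      # $736 F
voh_efficiency = -3_080  # $3,080 F
foh_spending = -2_125    # $2,125 F
volume = 1_625           # $1,625 U

oh_spending = voh_spending + foh_spending       # three-way spending: $2,861 F
budget_variance = oh_spending + voh_efficiency  # two-way budget: $5,941 F
total_oh_variance = budget_variance + volume    # one-way total: $4,316 F

# Cross-check against the totals quoted earlier in the section:
total_voh = voh_spending + voh_efficiency       # $3,816 F
total_foh = foh_spending + volume               # $500 F
```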
If VOH and FOH are applied using a combined rate, the one-, two-, and three-
variance approaches will have the interrelationships shown in Exhibit 7–5. The
amounts in the exhibit represent the data provided earlier for Sanjay Corporation.
Managers should select the method that provides the most useful information
and that conforms to the company’s accounting system. As more companies
begin to recognize the existence of multiple cost drivers for overhead and to use
multiple bases for applying overhead to production, computation of the one-,
two-, and three-variance approaches will diminish.
ABSORPTION COSTING
A product costing method that includes all the manufacturing costs (direct
materials, direct labor, and both the variable and fixed factory overhead) in
the cost of a unit of product.
VARIABLE COSTING
A product costing method that includes only the variable manufacturing costs
(direct materials, direct labor, and variable factory overhead) in the cost of a
unit of product; fixed factory overhead is treated as a period cost.
COSTING METHOD
Product cost components:
  Absorption: Direct materials + Direct labor + Variable FOH + Fixed FOH
  Variable:   Direct materials + Direct labor + Variable FOH
COST
Product cost – included in the computation of product cost and apportioned
between the sold and unsold units. An inventoriable cost: the portion allocated
to unsold units becomes part of the cost of inventory. It reduces current income
only by the portion allocated to the sold units; the portion allocated to unsold
units is treated as an asset, being part of the cost of inventory.
Period cost – charged against current revenue during a time period regardless of
the difference between production and sales volume. It does not form part of the
cost of inventory and reduces income for the current period by its full amount.
COMPARISON OF ABSORPTION COSTING (AC) AND VARIABLE COSTING (VC)
Production and Sales    Net Income    Fixed FOH Expensed
P = S                   AC = VC       AC = VC
P > S                   AC > VC       AC < VC
P < S                   AC < VC       AC > VC
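The income difference between the two methods comes from the fixed FOH deferred in (or released from) inventory; a minimal sketch with hypothetical numbers:

```python
def income_difference(fixed_foh_per_unit, units_produced, units_sold):
    # Absorption-costing income minus variable-costing income; positive when
    # production exceeds sales because fixed FOH is deferred in ending inventory
    return fixed_foh_per_unit * (units_produced - units_sold)

# Hypothetical: P20 fixed FOH per unit, 10,000 units produced, 9,000 sold
diff = income_difference(20, 10_000, 9_000)   # P20,000 (AC income > VC income)
```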
BUDGET
2. Budgets force management to think about and plan for the future.
6. The goals and objectives identified in the budgeting process can serve as
benchmarks or standards for evaluating performance.
The master budget is a comprehensive budget that consolidates the overall plan
of the organization for a specified period. The master budget is mainly composed
of: (1) operating budgets and (2) financial budgets. The master budget, in some
organizations, is also referred to as the pro forma budget, forecast budget, or
master profit plan.
MASTER BUDGET
  Operating budgets – e.g., the factory overhead budget
  Financial budgets – e.g., the capital expenditure budget and the working
  capital budget
Production Budget
Budgeted sales                               xx
Add: Finished goods inventory, end           xx
Total units required                         xx
Less: Finished goods inventory, beginning    xx
BUDGETED PRODUCTION                          xx
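The schedule above reduces to one line of arithmetic; the unit figures below are hypothetical:

```python
def budgeted_production(budgeted_sales, fg_end, fg_beginning):
    # Units to produce = budgeted sales + desired ending FG - beginning FG
    return budgeted_sales + fg_end - fg_beginning

# Hypothetical: 10,000 units budgeted sales, 2,000 desired ending inventory,
# 1,500 units on hand at the start of the period
units = budgeted_production(10_000, 2_000, 1_500)   # 10,500 units
```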
BUDGETING MODELS
There are several budgeting models used by organizations. Some examples are
flexible budgeting, fixed (or static) budgeting, continuous budgeting, zero-based
budgeting, life-cycle budgeting, activity-based budgeting, kaizen budgeting, and
governmental budgeting.
Flexible budgeting
A series of budgets prepared for many levels of activity. It makes possible the
adjustment of the budget to the actual level of activity before comparing the
budget figures with the actual results.
Fixed (or static) budgeting
Does not segregate costs into fixed and variable components. Costs are
estimated only at a single level of activity. Actual costs are compared with
the budgeted costs regardless of the actual level of production, and cost
variances are obtained and analyzed accordingly.
Zero-based budgeting
Does not consider the past performances in anticipating the future. Incoming
costs should be classified and packaged based on activities which must be
prioritized and justified as to their incurrence. The objective is to encourage
objective examination of all costs in the hope that costs could be better
controlled. ZBB starts from the lowest budgetary units of the organization. It
needs determination of objectives, operations, and costs for each activity
and the alternative means of carrying out that activity. Different levels of
service or work effort are evaluated for each activity, measurements and
performance standards are established, and activities are ranked according
to their importance to the organization. A decision package is prepared that
describes various levels of service that may be provided, including at least
one level lower than the current one. Each expenditure is justified for each
budget period and costs are reviewed from a cost-benefit perspective.
Incremental budgeting
Uses the current period's budget or actual results as the base, which is
adjusted upward or downward for the incoming period; unlike ZBB, past costs
are carried forward without being re-justified.
Life-cycle budgeting
Intends to account for all costs incurred in the stages of the “value chain”,
from research and development to design, production, marketing,
distribution, up to customer service. Costing in this model is important for
pricing decisions. Revenues generated from the product should cover not
only the costs of production but the entire business costs incurred. It is also
analyzed in line with the product life-cycle concept where products have four
life stages such as infancy (or start-up stage), growth stage, expansion
stage, and maturity (or decline) stage. It is estimated that about 80% of all
costs are already committed (may not yet be incurred) before the business
begins. Life-cycle budgeting emphasizes the potential for locking in
(designing in) future costs since the opportunity of reducing costs is great
before production begins. In a whole-life costs concept, the budget includes
the “after-purchase costs” closely associated with the life-cycle costs. After-
purchase costs include the costs of operating, support, repair, and disposal
incurred by customers. Whole-life cost equals the life-cycle costs plus the
after-purchase costs. Life-cycle costing is related to target costing and target
pricing. A target price is determined in a given market condition and costs
and profit margin are adjusted accordingly.
A product’s revenues and expenses are estimated over its entire life cycle
(from research and development to withdrawal of customer support). This
concept is helpful in target costing and target pricing. It accounts for, and
emphasizes the relationships among the costs at all stages of the value
chain, such as research and development, design, production, marketing,
distribution, and customer service.
Activity-based budgeting
The activities are identified, a cost pool is established for each activity, a
cost driver is identified for each pool, and the budgeted cost for each pool is
determined by multiplying the budgeted demand for the activity by the
estimated cost per unit of such activity.
Kaizen budgeting
Is based not on the existing system but on changes that are to be made.
Governmental budgeting
Is not only a financial plan but is also an expression of public policy and a
form of control having the force of law. A governmental budget is a legal
document which must be complied with by a government agency head.
Since government budgeting is not profit-centered, the use of budgets in the
appropriation process is of major importance. One budgeting concept in
government budgeting is “line budgeting” where the emphasis is more on
the control of expenditures. Each line expense should be disbursed
according to the limits of the approved appropriations.
ACTIVITY LEVELS
An activity is any event, action, transaction, or work sequence that incurs costs
when producing a product or providing a service.
While using cost drivers to assign overhead costs to individual units works well
for some activities, for some activities such as setup costs, the costs are not
incurred to produce an individual unit but rather to produce a batch of the same
units. For other costs, the costs incurred might be based on the number of
product lines or simply because there is a manufacturing facility. To assign
overhead costs more accurately, activity‐based costing assigns activities to one
of four categories.
The activity levels commonly identified are:
Unit-level activities
Batch-level activities
Product-level activities
Customer-level activities
Organization-sustaining (facility-level) activities
Unit-Level
These are performed each time an individual unit is produced. Examples: direct
materials, direct labor, and machine energy costs.
Batch-Level
These are costs incurred every time a group (batch) of units is produced or a
series of steps is performed. Purchase orders, machine setup, and quality
tests are examples of batch‐level activities.
Performed for each batch of product produced, rather than each unit.
Examples: setup, receiving and inspection, material-handling, packaging,
shipping, and quality assurance.
Product-Level
Also known as the product sustaining level, these are activities that are
needed to support the entire product line regardless of the number of units
and batches produced. Examples: engineering costs, product development
costs.
These are activities that support an entire product line but not necessarily
each individual unit. Examples of product‐line activities are engineering
changes made in the assembly line, product design changes, and
warehousing and storage costs for each product line.
Facility-Level
These are necessary for development and production to take place. These
costs are administrative in nature and include building depreciation, property
taxes, plant security, insurance, accounting, outside landscape and
maintenance, and plant management's and support staff's salaries. The
costs of unit‐level, batch‐level, and product‐line activities are easily
allocated to a specific product, either directly as a unit‐level activity or
through allocation of a pooled cost for batch‐level and product‐line activities.
In contrast, the facility‐level costs are kept separate from product costs and
are not allocated to individual units because the allocation would have to be
made on an arbitrary basis such as square feet, number of divisions or
products, and so on.
Also called the general operations level, performed in order for the entire
production process to occur. Examples: plant maintenance, plant
management, property taxes, and insurance.
COST POOLS
A cost pool is a grouping of related overhead costs; in activity-based costing,
a separate cost pool is established for each identified activity.
ACTIVITY DRIVERS
A factor that causes a change in the cost pool for a particular activity. It is
used as a basis for cost allocation; any factor or activity that has a direct
cause-effect relationship.
Cost drivers are the actual activities that cause the total cost in an activity
cost pool to increase. The number of times materials are ordered, the
number of production lines in a factory, and the number of shipments made
to customers are all examples of activities that impact the costs a company
incurs. When using ABC, the total cost of each activity pool is divided by the
total number of units of the activity to determine the cost per unit.
A cost driver is the particular activity that causes the incurrence of certain
costs.
A cost driver is an activity that is the root cause of why a cost occurs. It must be
applicable and relevant to the event that is incurring a cost. There may be
multiple cost drivers responsible for the occurrence of a single expense. A cost
driver assists with allocating expenses in a systematic manner that theoretically
results in more accurate calculations of the true costs of producing specific
products.
The most common cost driver has historically been direct labor hours. Expenses
incurred relating to the layout or structure of a building or warehouse may utilize
a cost driver of square footage to allocate expenses. More technical cost drivers
include machine hours, the number of change orders, the number of customer
contacts, the number of product returns, the machine setups required for
production or the number of inspections.
Calculate the rate for each activity, using the estimated cost of each activity cost
pool and the estimated quantity for each allocation base. At this point, this should
start to look familiar because we did this using plant-wide rates and departmental
rates. To calculate the ABC rate:
Total estimated activity cost pool / Total estimated activity allocation base = ABC
rate
(NOTE: Estimated figures are used because actual figures are not yet known at
the start of the period.)
This is the exact same formula we used for plant-wide rates and departmental
rates. Total cost divided by total activity equals rate. The only thing that is
different about ABC rates is that you will have more of them. With plant-wide
rates we had one rate for the entire company. For departmental rates, we had
one for each department. For ABC, we will have one rate for each activity that
has been identified.
It is extremely important to label each of your rates. If you are calculating the rate
for machine setups, label your rate “$/setup”. This makes it much easier when
you are applying your rates. Don’t skip this step. When students make mistakes,
the mistakes are made in the application of the rates because students use the
wrong driver to apply the rates. When you label your rates, it is so much easier to
apply the rates because you don’t need to think about which rates to use for
each activity. If the problem states that there are 15 setups, look at your rates for
the one that is marked “$/setup” and use that one.
To apply the rates, multiply the actual amount of activity by the rate for that
activity. Again, that is very similar to what we did for plant-wide rates and
departmental rates. Just like departmental rates, once you get the amount for
each activity, you will need to add up the applied cost for each activity to get the
total overhead applied to your cost object.
Applied overhead is the amount of overhead cost that has been applied to a cost
object. Overhead application is required to meet certain accounting
requirements, but is not needed for most decision-making activities.
Applied overhead costs include any cost that cannot be directly assigned to
a cost object, such as rent, administrative staff compensation, and
insurance. A cost object is an item for which a cost is compiled, such as a
product, product line, distribution channel, subsidiary, process, geographic
region, or customer.
Costing systems help companies determine the cost of a product relative to the
revenue it generates. Two common costing systems used in business are
traditional costing and activity-based costing. Traditional costing assigns
overhead using a single volume-based rate, whereas activity-based costing
traces overhead to activities first.
Identify indirect costs.
For example:
Activity-Based Costing
Group the resources used, pool them in activity centers, and identify a cost
driver for each activity center.
Compute the cost functions to the activity center using the resources
[support cost (OH) ÷ resources used by activity center].
Using the above cost functions, the total costs will be assigned to the two
producing centers.
Grinding Packaging
A B C=AxB A B C=AxB
Occupancy 0.50 160,000 80,000 0.50 80,000 40,000
Record Keeping 0.10 240,000 24,000 0.10 360,000 36,000
Human Resource 30% 20,000 6,000 30% 40,000 12,000
Link cost drivers to each product and determine its cost function.
Cost Drivers:
Cost Driver Activity Linked to Each Product
Activity Center Cost Driver Product X Product Y Product Z Total
Grinding Grinding Hours 4,000 6,000 10,000 20,000
Packaging Machine Hours 5,000 3,000 2,000 10,000
Cost Functions:
Grinding Packaging
Separate Support Costs 40,000 30,000
Share from the other activity center (per allocation) 128,000 102,000
Total Cost To Be Allocated 168,000 132,000
Divided by the total items in cost driver ÷ 20,000 ÷ 10,000
Cost Function - Per Item of Cost Driver (in hours) 8.40 13.20
Using this cost function, the total overhead costs to be allocated to each
product are:
Product X
A B C=AxB
Grinding 8.40 4,000 33,600
Packaging 13.20 5,000 66,000
TOTAL COST 99,600
Product Y
A B C=AxB
Grinding 8.40 6,000 50,400
Packaging 13.20 3,000 39,600
TOTAL COST 90,000
Product Z
A B C=AxB
Grinding 8.40 10,000 84,000
Packaging 13.20 2,000 26,400
TOTAL COST 110,400
Materials, labor and other traceable costs, if any, are then added to the total
overhead cost allocated to determine the total cost of each product.
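The whole allocation above can be re-run in a few lines, using the rates and cost-driver quantities from the example:

```python
# ABC rates from the example: total allocated cost / total cost-driver quantity
rates = {"Grinding": 168_000 / 20_000,    # 8.40 per grinding hour
         "Packaging": 132_000 / 10_000}   # 13.20 per machine hour

# Each product's cost-driver usage from the example tables
usage = {"X": {"Grinding": 4_000, "Packaging": 5_000},
         "Y": {"Grinding": 6_000, "Packaging": 3_000},
         "Z": {"Grinding": 10_000, "Packaging": 2_000}}

# Apply each rate to each product's usage and sum per product
allocated = {p: sum(rates[a] * qty for a, qty in drivers.items())
             for p, drivers in usage.items()}
# Matches the tables above: X = 99,600; Y = 90,000; Z = 110,400
```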
Activity-Based Costing
Follow steps in cost allocation.
Value-Added Activities
Value added to operations: steps that support the ability to deliver services to the
people served
Value-adding activities are any activities that add value to the customer and
meet the three criteria commonly cited for a value-adding activity: the
customer is willing to pay for it, it transforms the product or service, and it
is done right the first time.
Value-added activities are necessary activities that incur costs but increase
the perceived value of a particular product to the customer. Example:
engineering designs modification.
Non-Value-Added Activities
These are operations that are either (1) unnecessary or dispensable, or (2)
necessary, but inefficient and improvable. Example: rework of defective
units.
Activities that do not make the product or service more valuable to the
customer.
Activity-Based Management (ABM)
ABM integrates ABC with other concepts such as Total Quality Management
(TQM), process value analysis and target costing to produce a management
system that strives for excellence through cost reduction and continuous
process improvement. An important goal of ABM is to reduce or eliminate
non-value-added activities and costs.
ABM rests on the philosophy that the activities identified for ABC can also
be used for cost management and performance evaluation purposes. It seeks
to eliminate activities whose costs are non-value-added. Non-value-added
activities simply add cost to, or increase the time spent on, a product or
service without increasing its market value. Awareness of these
classifications encourages managers to reduce or eliminate the time spent
on the non-value-added activities.
7. Strategic Cost Management (utilize the concept for planning and control
purposes)
a. Total Quality Management
TIME:  Early 1900s | 1940s | 1960s | 1980s and Beyond
FOCUS: Inspection | Statistical Sampling | Organizational Quality Focus | Customer-driven Quality
Old Concept of Quality: inspect for quality after production (REACTIVE)
New Concept of Quality: build quality into the process; identify and correct causes of problems (PROACTIVE)
b. Just-in-Time (JIT)
This inventory supply system represents a shift away from the older just-in-case
strategy, in which producers carried large inventories in case higher demand had
to be met. Under JIT, for example, the parts needed to manufacture cars do not
arrive before or after they are needed; instead, they arrive just as they are
needed.
The just-in-time (JIT) philosophy in the simplest form means getting the right
quantity of goods at the right place and at the right time.
c. Continuous Improvement
Reviewer 116
Management Advisory Services
Among the most widely used tools for continuous improvement is a four-step
quality model—the plan-do-check-act (PDCA) cycle, also known as the Deming
Cycle or Shewhart Cycle: plan a change, do (implement) it on a small scale,
check the results, and act on what was learned.
d. Business Process Reengineering
Business Process Reengineering (BPR) began as a private sector technique to
help organizations fundamentally rethink how they do their work in order to
dramatically improve customer service, cut operational costs, and become world-
class competitors. A key stimulus for re-engineering has been the continuing
development and deployment of sophisticated information
systems and networks. Leading organizations are becoming bolder in using this
technology to support innovative business processes, rather than refining current
ways of doing work.[1]
Reengineering Work: Don't Automate, Obliterate, 1990
In 1990, Michael Hammer, a former professor of computer science at
the Massachusetts Institute of Technology (MIT), published the article
"Reengineering Work: Don't Automate, Obliterate" in the Harvard Business
Review, in which he claimed that the major challenge for managers is to
obliterate forms of work that do not add value, rather than using technology for
automating it.[3] This statement implicitly accused managers of having focused on
the wrong issues, namely that technology in general, and more specifically
information technology, has been used primarily for automating existing
processes rather than using it as an enabler for making non-value adding work
obsolete.
Hammer's claim was simple: Most of the work being done does not add any
value for customers, and this work should be removed, not accelerated through
automation. Instead, companies should reconsider their inability to satisfy
customer needs, and their insufficient cost structure. Even well
established management thinkers, such as Peter Drucker and Tom Peters, were
accepting and advocating BPR as a new tool for (re-)achieving success in a
dynamic world.[4] During the following years, a fast-growing number of
publications, books as well as journal articles, were dedicated to BPR, and many
consulting firms embarked on this trend and developed BPR methods. However,
the critics were fast to claim that BPR was a way to dehumanize the workplace.
Framework
An easy-to-follow seven-step INSPIRE framework was developed by Bhudeb
Chakravarti, which any process analyst can follow to perform BPR. The seven
steps of the framework are:
Initiate a new process reengineering project and prepare a business case for it;
Negotiate with senior management to get approval to start the project;
Select the key processes that need to be reengineered;
Plan the process reengineering activities;
Investigate the processes to analyze the problem areas;
Redesign the selected processes to improve performance; and
Ensure the successful implementation of the redesigned processes through
proper monitoring and evaluation.
6. Inadequate infrastructure
7. Overly bureaucratic processes
8. Lack of motivation
Many unsuccessful BPR attempts may have been due to the confusion
surrounding BPR, and how it should be performed. Organizations were well
aware that changes needed to be made, but did not know which areas to change
or how to change them. As a result, process reengineering is a management
concept that has been formed by trial and error or, in other words, practical
experience. As more and more businesses reengineer their processes,
knowledge of what caused the successes or failures is becoming apparent.[16] To
reap lasting benefits, companies must be willing to examine how strategy and
reengineering complement each other by learning to quantify strategy in terms of
cost, milestones, and timetables, by accepting ownership of the strategy
throughout the organization, by assessing the organization’s current capabilities
and process realistically, and by linking strategy to the budgeting process.
Otherwise, BPR is only a short-term efficiency exercise.[17]
Organization-wide commitment
Major changes to business processes have a direct effect on processes,
technology, job roles, and workplace culture. Significant changes to even one of
those areas require resources, money, and leadership. Changing them
simultaneously is an extraordinary task.[16] Like any large and complex
undertaking, implementing reengineering requires the talents and energies of a
broad spectrum of experts. Since BPR can involve multiple areas within the
organization, it is important to get support from all affected departments. Through
the involvement of selected department members, the organization can gain
valuable input before a process is implemented; a step which promotes both the
cooperation and the vital acceptance of the reengineered process by all
segments of the organization.
Getting enterprise wide commitment involves the following: top management
sponsorship, bottom-up buy-in from process users, dedicated BPR team, and
budget allocation for the total solution with measures to demonstrate value.
Before any BPR project can be implemented successfully, there must be a
commitment to the project by the management of the organization, and strong
leadership must be provided.[18] Reengineering efforts can by no means be
exercised without a company-wide commitment to the goals, and top
management commitment is imperative for success.[19][20] Top management must
recognize the need for change, develop a complete understanding of what BPR
is, and plan how to achieve it.[21]
Leadership has to be effective, strong, visible, and creative in thinking and
understanding in order to provide a clear vision.[22] Convincing every affected
group within the organization of the need for BPR is a key step in successfully
implementing a process, which requires informing all affected groups at every
stage.
Most analysts view BPR and IT as irrevocably linked. Walmart, for example,
would not have been able to reengineer the processes used to procure and
distribute mass-market retail goods without IT. Ford was able to decrease its
headcount in the procurement department by 75 percent by using IT in
conjunction with BPR, in another well-known example. [33] The IT infrastructure
and BPR are interdependent in the sense that deciding the information
requirements for the new business processes determines the IT infrastructure
constituents, and a recognition of IT capabilities provides alternatives for BPR.[32]
Building a responsive IT infrastructure is highly dependent on an appropriate
determination of business process information needs. This, in turn, is determined
by the types of activities embedded in a business process, and their sequencing
and reliance on other organizational processes.[37]
Effective change management
Al-Mashari and Zairi (2000) suggest that BPR involves changes in people
behavior and culture, processes, and technology. As a result, there are many
factors that prevent the effective implementation of BPR and hence restrict
innovation and continuous improvement. Change management, which involves
all human and social related changes and cultural adjustment techniques needed
by management to facilitate the insertion of newly designed processes and
structures into working practice and to deal effectively with resistance, is
considered by many researchers to be a crucial component of any BPR
effort. One of the most overlooked obstacles to successful BPR project
implementation is resistance from those whom implementers believe will benefit
the most. Most projects underestimate the cultural effect of major process and
structural change and as a result, do not achieve the full potential of their change
effort. Many people fail to understand that change is not an event, but rather a
management technique.
Change management is the discipline of managing change as a process, with
due consideration that employees are people, not programmable machines.[16]
Change is implicitly driven by motivation which is fueled by the recognition of
the need for change. An important step towards any successful reengineering
effort is to convey an understanding of the necessity for change.[19] It is a well-
known fact that organizations do not change unless people change; the better
change is managed, the less painful the transition is.
Organizational culture is a determining factor in successful BPR implementation.[38]
Organizational culture influences the organization’s ability to adapt to change.
Culture in an organization is a self-reinforcing set of beliefs, attitudes, and
behavior. Culture is one of the most resistant elements of organizational behavior
and is extremely difficult to change. BPR must consider current culture in order to
change these beliefs, attitudes, and behaviors effectively. Messages conveyed
from management in an organization continually enforce current culture.
Critique
Many companies used reengineering as a pretext to downsizing, though this was
not the intent of reengineering's proponents; consequently, reengineering earned
a reputation for being synonymous with downsizing and layoffs.[41]
In many circumstances, reengineering has not always lived up to its
expectations. Some prominent reasons include:
Others have claimed that reengineering was a recycled buzzword for commonly
held ideas. Abrahamson (1996) argued that fashionable management terms tend
to follow a lifecycle, which for Reengineering peaked between 1993 and 1996
(Ponzi and Koenig 2002). They argue that Reengineering was in fact nothing
new (as e.g. when Henry Ford implemented the assembly line in 1908, he was in
fact reengineering, radically changing the way of thinking in an organization).
The most frequent critique against BPR concerns the strict focus on efficiency
and technology and the disregard of people in the organization that is subjected
to a reengineering initiative. Very often, the label BPR was used for major
workforce reductions. Thomas Davenport, an early BPR proponent, stated that:
"When I wrote about "business process redesign" in 1990, I explicitly said that
using it for cost reduction alone was not a sensible goal. And consultants Michael
Hammer and James Champy, the two names most closely associated with
reengineering, have insisted all along that layoffs shouldn't be the point. But the
fact is, once out of the bottle, the reengineering genie quickly turned ugly."[42]
Hammer similarly admitted that:
"I wasn't smart enough about that. I was reflecting my engineering background
and was insufficiently appreciative of the human dimension. I've learned that's
critical."[43]
e. Kaizen Costing
Kaizen costing is a cost reduction system. Yasuhiro Monden defines kaizen
costing as "the maintenance of present cost levels for products currently being
manufactured via systematic efforts to achieve the desired cost level." The
word kaizen is a Japanese word meaning continuous improvement.
Monden has described two types of kaizen costing:
f. Life-Cycle Costing
Life-cycle costing is a procurement and production costing technique that
considers all life cycle costs.
In procurement, it aims to determine the lowest cost of ownership of a fixed asset
(purchase price, installation, operation, maintenance and upgrading, disposal,
and other costs) during the asset's economic life. In manufacturing (as an
integral part of terotechnology), it aims to estimate not only the production costs
but also how much revenue a product will generate and what expenses will be
incurred at each stage of the value chain during the product's estimated life cycle
duration.
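The "lowest cost of ownership" idea described above can be tallied numerically. The sketch below is illustrative only; every figure is invented, and the cost categories simply mirror the ones listed in the paragraph.

```python
# Hypothetical life-cycle (total cost of ownership) tally for a fixed asset
# over its economic life; every figure below is invented for illustration.
ownership_costs = {
    "purchase_price": 500_000,
    "installation": 25_000,
    "operation": 180_000,               # total over the asset's economic life
    "maintenance_and_upgrading": 95_000,
    "disposal": 10_000,
}

total_cost_of_ownership = sum(ownership_costs.values())
print(total_cost_of_ownership)  # 810000
```

Comparing this total across competing assets, rather than purchase price alone, is the point of the life-cycle view.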
PRODUCT LIFE CYCLE stages: Introduction, Growth, Maturity, Decline
g. Target Costing
Target costing is an approach to determine a product’s life-cycle cost which
should be sufficient to develop specified functionality and quality, while ensuring
its desired profit. It involves setting a target cost by subtracting a desired profit
margin from a competitive market price.[1] A target cost is the maximum amount
of cost that can be incurred on a product while the firm still earns the
required profit margin from that product at a particular selling price. Target
costing decomposes the target cost from the product level to the component
level. Through this decomposition, target costing spreads the competitive
pressure faced by the company to the product’s designers and suppliers. Target
costing consists of cost planning in the design phase of production as well as
cost control throughout the resulting product life cycle. The cardinal rule of
target costing is to never exceed the target cost. However, the focus of target
costing is not to minimize costs, but to achieve a desired level of cost
reduction determined by the target costing process.
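The core equation just described, target cost = competitive market price minus desired profit, can be sketched in a few lines. The numbers are assumptions chosen for illustration, not from the reviewer.

```python
def target_cost(market_price, desired_margin_pct):
    """Maximum allowable cost: competitive market price minus desired profit."""
    return market_price * (1 - desired_margin_pct)

# Assumed figures: a competitive price of 200 and a required 25% margin on sales
print(target_cost(200.0, 0.25))  # 150.0
```

Any design whose estimated full-stream cost exceeds the 150.0 result would violate the cardinal rule above.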
Definition
Target costing is defined as "a disciplined process for determining and achieving
a full-stream cost at which a proposed product with specified functionality,
performance, and quality must be produced in order to generate the desired
profitability at the product’s anticipated selling price over a specified period of
time in the future." [2] This definition encompasses the principal concepts:
products should be based on an accurate assessment of the wants and needs of
customers in different market segments, and cost targets should be what result
after a sustainable profit margin is subtracted from what customers are willing to
pay at the time of product introduction and afterwards.
The fundamental objective of target costing is to manage the business to be
profitable in a highly competitive marketplace. In effect, target costing is a
History
Target costing was developed independently in both the USA and Japan in
different time periods.[4] Target costing was adopted earlier by American
companies to reduce cost and improve productivity, such as Ford Motor from the
1900s and American Motors from the 1950s-1960s. Although the ideas of target
costing were also applied by a number of other American companies,
including Boeing, Caterpillar, and Northern Telecom, few of them applied target
costing as comprehensively and intensively as top Japanese companies such
as Nissan, Toyota, and Nippondenso.[5] Target costing emerged in Japan from the
1960s to the early 1970s with the particular effort of the Japanese automobile
industry, including Toyota and Nissan. It did not receive global attention until
the late 1980s to 1990s, when authors such as Monden (1992),[6] Sakurai
(1989),[7] Tanaka (1993),[8] and Cooper (1992)[9] described the way Japanese
companies applied target costing to thrive in their business (IMA 1994). With
superior implementation systems, Japanese manufacturers were more successful
than the American companies in developing target costing.[4] Traditional
cost-plus pricing strategy had been impeding productivity and profitability for
a long time.[10][11] As a new strategy, target costing is replacing traditional
cost-plus pricing by maximizing customer satisfaction at an accepted level of
quality and functionality while minimizing costs.
Following the completion of market-driven costing, the next task of the target
costing process is product-level target costing. Product-level target costing
concentrates on designing products that satisfy the company's customers at the
allowable cost. To achieve this goal, product-level target costing is typically
divided into three steps as shown below.[1]
The first step is to set a product-level target cost. Since the allowable cost is
simply obtained from external conditions without considering the design
capabilities of the company as well as the realistic cost for manufacturing, it may
not always be achievable in practice. Thus, it is necessary to adjust the
unachievable allowable cost to an achievable target cost, and the resulting
cost increase should be reduced with great effort. The second step is to
discipline this target cost process, including monitoring the relationship
between the target cost and the estimated product cost at any point during the
design process, applying the cardinal rule so that the total target costs at
the component level do not exceed the target cost of the product, and allowing
exceptions for products violating the cardinal rule. For a product excepted
from the cardinal rule, two analyses are often performed after the launch of
the product. One involves reviewing the design process to find out why the
target cost was not achieved.
The other is an immediate effort to reduce the excessive cost to ensure that the
period of violation is as short as possible. Once the target cost-reduction
objective is identified, the product-level target costing comes to the final step,
finding ways to achieve it. Engineering methods such as value
engineering (VE), design for manufacture and assembly (DFMA), and quality
function deployment (QFD) are commonly adopted in this step.[1]
Applications
Aside from its application in the field of manufacturing, target costing is
also widely used in the following areas.
Energy
An Energy Retrofit Loan Analysis Model has been developed using a Monte
Carlo (MC) method for target costing in Energy Efficient buildings and
construction. MC method has been shown to be effective in determining the
impact of financial uncertainties in project performance.[15]
Target Value Design Decision Making Process (TVD-DMP) groups a set of
energy efficiency methods at different optimization levels to evaluate costs and
uncertainties involved in the energy efficiency process. Some major design
parameters are specified using these methods, including facility operation
schedule, orientation, plug load, and HVAC and lighting systems.
The entire process consists of three phases: initiation, definition and alignment.
Initiation stage involves developing a business case for energy efficiency using
target value design (TVD) training, organization and compensation. The
definition process involves defining and validating the case by tools such as
value analysis and benchmarking processes to determine the allowable costs.
By setting targets and steering the design process to align with those targets,
TVD-DMP has been shown to achieve the high level of collaboration needed for
energy efficiency investments. This is done by using risk analysis tools, pull
planning and rapid estimating processes.
Healthcare
Target costing and target value design have applications in building healthcare
facilities including critical components such as Neonatal Intensive Care
Units (NICUs). The process is influenced by unit locations, degree of comfort,
number of patients per room, type of supply location and access to nature.[16]
According to National Vital Statistics Reports, 12.18% of 2009 births were
premature and the cost per infant was $51,600. This led to opportunities for
NICUs to implement target value design for deciding whether to build a single-
family room or more open-bay NICUs. This was achieved using set-based design
analysis which challenges the designer to generate multiple alternatives for the
same functionality. Designs are evaluated keeping in mind the requirements of
the various stakeholders in the NICU including nurses, doctors, family members
and administrators. Unlike linear point-based design, set-based design narrows
options to the optimal one by eliminating alternatives simultaneously defined by
user constraints.
Construction
About 15% of construction projects in Japan have adopted target costing for
their cost planning and management, as recognized by Jacomit (2008).[17] In the
U.S., target costing research has been carried out within the framework of lean
construction as the target value design (TVD) method[18] and has been
disseminated widely over the construction industry in recent years. Research
has shown that, if applied systematically, TVD can deliver a significant
improvement in project performance, with an average reduction of 15% in
comparison with market cost.[19] TVD in a construction project considers the
final cost of the project as a design parameter, similar to the capacity and aesthetics
requirements for the project. TVD requires the project team to develop a target
cost from the beginning. The project team is expected not to design exceeding
the target cost without the owner’s approval, and must use different skills to
maintain this target cost. In some cases, the cost can increase, but the
project team must commit to decreasing it and must try their best to do so
without impacting other functions of the project.[20]
COST CENTER
A cost center is a responsibility center in which the manager has the authority
only to incur costs and is specifically evaluated on the basis of how well costs are
controlled and utilized. The unit manager is responsible for minimizing costs
subject to some output constraints. Examples are: maintenance department of a
manufacturing company; library section of a school; and an accounting
department of a trading concern. Performance of a cost center is evaluated
through variance analysis reports based on standard costs and flexible budgets.
REVENUE CENTER
PROFIT CENTER
INVESTMENT CENTER
Decentralization
Segment Reporting
Aggregate the results of two or more segments if they have similar products,
services, processes, customers, distribution methods, and regulatory
environments.
Report a segment if it has at least 10% of the revenues, 10% of the profit or
loss, or 10% of the combined assets of the entity.
If the total revenue of the segments you have selected under the preceding
criteria comprises less than 75% of the entity's total revenue, then add more
segments until you reach that threshold.
You can add more segments beyond the minimum just noted, but consider a
reduction if the total exceeds ten segments.
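The quantitative tests above can be sketched in code. This is a simplified illustration with invented segment data; the actual standard's profit-or-loss test (which uses the greater of combined profits and combined losses as its base) and other nuances are only approximated here.

```python
def reportable_segments(segments):
    """Apply the 10% tests, then top up until >= 75% of total revenue is covered.
    Simplified sketch: the profit-or-loss test uses the greater of combined
    profits and combined losses as its 10% base."""
    total_rev = sum(s["revenue"] for s in segments)
    total_assets = sum(s["assets"] for s in segments)
    profits = sum(s["profit"] for s in segments if s["profit"] > 0)
    losses = -sum(s["profit"] for s in segments if s["profit"] < 0)
    pl_base = max(profits, losses)

    chosen = [s for s in segments
              if s["revenue"] >= 0.10 * total_rev
              or abs(s["profit"]) >= 0.10 * pl_base
              or s["assets"] >= 0.10 * total_assets]

    # 75% test: add the largest remaining segments until the threshold is met
    rest = sorted((s for s in segments if s not in chosen),
                  key=lambda s: s["revenue"], reverse=True)
    while rest and sum(s["revenue"] for s in chosen) < 0.75 * total_rev:
        chosen.append(rest.pop(0))
    return [s["name"] for s in chosen]

segments = [  # invented figures
    {"name": "A", "revenue": 500, "profit": 60, "assets": 400},
    {"name": "B", "revenue": 300, "profit": 30, "assets": 300},
    {"name": "C", "revenue": 150, "profit": 10, "assets": 200},
    {"name": "D", "revenue": 50, "profit": -5, "assets": 50},
]
print(reportable_segments(segments))  # ['A', 'B', 'C']
```

Here segment D fails all three 10% tests, and A, B, and C already cover 95% of total revenue, so the 75% test adds nothing further.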
Controllable Costs
In the realm of budgets and costs, the budget should carefully designate which
departments have authority over and are responsible for which costs. If a
department has authority and responsibility for certain costs, those costs are
called controllable costs.
Non-controllable Costs
Because authority and accountability go together, you can only hold individuals
and units in an organization accountable for those things that they can control. If
you don’t give subordinates authority to do something, how can you hold them
accountable for doing it?
Suppose Eve asked Alfred to walk her dog for a week. However, she refused to
give Alfred the keys to her apartment, so he had no access to the dog. Because
Eve didn’t give Alfred the authority to do his job, Eve can’t possibly hold him
accountable for not walking the dog (or for the resulting mess in her apartment).
Given the organization’s goals and strategies, every required task and decision
should be under someone’s watch. Responsibility accounting allows you to hold
subordinates responsible for all tasks over which they have control. Overhead
allocations are usually inconsistent with the idea of controllable costs. Overhead
allocations use allocation rates to assign overhead costs based on number of
units, direct labor hours, or other cost drivers to individual departments. Each
department must then include a portion of this overhead as a cost in its own
budget, even though these departments usually have little or no say over how
money is spent for this overhead.
Even when one of these departments closes completely, its overhead costs often
remain and get assigned to other departments. In this way, arbitrary overhead
allocations often result, forcing departments to accept responsibility for overhead
costs that they have little or no control over — non-controllable costs.
Direct Costs
Direct fixed costs are fixed costs that can be directly traced to the segment. Just
because a fixed cost is direct does not mean that it is avoidable. There may be
depreciation, contractual obligations, and other costs that the company will not
be able to cut even if the segment is discontinued. If the fixed costs cannot be
avoided, losses will increase if the segment is discontinued because the segment
will no longer be contributing to the total contribution margin.
Common Costs
Common fixed costs are organization sustaining fixed costs that are allocated to
the segment. These fixed costs will continue even if the segment has been
eliminated; they will just be allocated to the remaining segments.
RESPONSIBILITY CENTER MANAGER / EVALUATION TECHNIQUES
Cost center manager: Cost variance analysis
Revenue center manager: Revenue variance analysis
Profit center manager: Segment margin analysis
Investment center manager: Return on Investment (ROI), Residual Income Model, Economic Value Added (EVA), etc.
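The investment-center measures named above can be illustrated with a short computation. All figures are invented, and EVA is shown in its simplified NOPAT minus (WACC times invested capital) form rather than with full accounting adjustments.

```python
# Illustrative figures (invented) for one investment center
operating_income = 120_000
average_operating_assets = 800_000
required_rate = 0.10        # minimum required rate of return (assumed)

roi = operating_income / average_operating_assets
residual_income = operating_income - required_rate * average_operating_assets

# EVA in its simplified form: NOPAT - (WACC x invested capital); figures assumed
nopat = 90_000
wacc = 0.08
invested_capital = 700_000
eva = nopat - wacc * invested_capital

print(round(roi, 4), round(residual_income, 2), round(eva, 2))
# 0.15 40000.0 34000.0
```

A positive residual income (or EVA) means the center earned more than the minimum return demanded on the capital tied up in it.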
Sales xx
Variable Costs (xx)
Manufacturing Margin xx
Variable Expenses (xx)
Contribution Margin xx
Controllable Direct Fixed Costs and Expenses (xx)
Controllable Margin xx
Non-controllable Direct Fixed Costs and Expenses (xx)
Segment (Direct) Margin xx
Indirect (Allocated) Fixed Costs and Expenses (xx)
Operating Income xx
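The columnar format above can be traced with sample numbers. The amounts below are invented for illustration; only the sequence of margins follows the layout shown.

```python
# Invented amounts flowing through the columnar segment-margin format above
sales = 1_000_000
variable_costs = 400_000                 # variable manufacturing costs
variable_expenses = 100_000              # variable selling and admin expenses
controllable_direct_fixed = 150_000
noncontrollable_direct_fixed = 100_000
indirect_allocated_fixed = 120_000

manufacturing_margin = sales - variable_costs                         # 600,000
contribution_margin = manufacturing_margin - variable_expenses        # 500,000
controllable_margin = contribution_margin - controllable_direct_fixed # 350,000
segment_margin = controllable_margin - noncontrollable_direct_fixed   # 250,000
operating_income = segment_margin - indirect_allocated_fixed          # 130,000
```

Note how the allocated (indirect) fixed costs enter only after the segment margin, consistent with the controllable-cost discussion above.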
A transfer price is the price at which divisions of a company transact with each
other, such as the trade of supplies or labor between departments. Transfer
prices are used when individual entities of a larger multi-entity firm are treated
and measured as separately run entities. A transfer price can also be known as a
transfer cost.
Transfer prices are often used when companies sell goods within the company
but to parts of the company in other international jurisdictions. This type of
transfer pricing is common: approximately 60% of the goods and services sold
internationally are transferred within companies, as opposed to between
unrelated companies.
If it costs the handle division $7 to fashion its next handle (its marginal cost of
production) and ship it off, it doesn't make sense for the transfer price to be $5
(or any other number less than $7) – otherwise, the division would lose money at
the expense of money gained by the hammer head division.
Suppose that the hammer company also sells replacement handles for its
products. In this scenario, it sells some handles through retail rather than
sending them to the hammer head division. Suppose again that the handle
division can realize a $3 profit margin on its sold handles.
Now the cost of sending a handle isn't just the $7 marginal cost of production,
but also the $3 in lost profit (opportunity cost) from not selling the handle directly
to consumers. This means the new minimum transfer price must be $10 ($3 plus
$7).
Market Price
The best transfer price is generally the market price. Because individual
business units or segments have to compete with the rest of the world, they
have to meet or beat the prevailing market price to stay competitive, following
the price signals of a free enterprise system.
Negotiated Price
Negotiated transfer price may occur when segments are free to determine the
prices at which they buy and sell internally. It is especially appropriate when
market prices are subject to rapid fluctuations. It reflects the best bargain price
acceptable to the selling and buying divisions without adversely sacrificing their
respective interests.
2. Balanced Scorecard
a. Nature And Perspectives Of Balanced Scorecard
Introduction
Payongayong
The different perspectives are linked together so that a company can better
understand how to achieve its goals and what measures to use in evaluating
performances. Likewise, within each perspective, the balanced scorecard
identifies objectives that will contribute to attainment of strategic goals. It creates
linkages so that high-level corporate goals can be communicated down to the
lowest level of employee.
The BSC suggests that we view the organization from four perspectives, and to
develop objectives, measures (KPIs), targets, and initiatives (actions) relative to
each of these points of view:
Financial: often renamed Stewardship or other more appropriate name in
the public sector, this perspective views organizational financial
performance and the use of financial resources
First Generation
The first generation of balanced scorecard designs used a "4 perspective"
approach to identify what measures to use to track the implementation of
strategy. The original four "perspectives" proposed[6] were: Financial,
Customer, Internal Business Processes, and Learning & Growth.
There are normally no problems with defining objectives for the financial
perspective of the Balanced Scorecard for profit-oriented organizations. Any
business has financial goals, and is accustomed to using financial metrics. For
most businesses the challenges are to shift a focus from financial perspective
only to the Customer, Internal, and Learning & Growth perspectives.
The word “financial” in the name of the perspective might sound confusing for
non-profit organizations. They are not targeting financial outcomes, but
social, cultural, or political goals. Still, non-profit organizations have
stakeholders, who might be the members of the communities that founded the
organization, and in this case the financial perspective is actually a
“Stakeholder Interests” perspective or “Success” perspective.
The financial perspective is on the top of the Balanced Scorecard strategy map,
which is acceptable for for-profit organizations. Non-profits tend to put it
below other perspectives or in a separate resource part, and this may be
appropriate. But consider a simple example: the “funds raised” metric. It is a
financial metric, but it is not a resource; it is an outcome. So we still need
a “Success / Stakeholders Interests” perspective on top, which will reflect the
designed outcomes (not necessarily financial ones).
To define objectives for “Stakeholder Interests,” it is a good idea to
formulate the question: “How does your department define its success?”
Framework for Finance Perspective
Let’s have a look at the 3 generic strategies:
Product Leadership Strategy.
Customer Value Strategy.
Operational Excellence Strategy.
Small businesses might want to find and employ a new technology that
would allow them to decrease costs;
Large companies might achieve resource optimization by sharing resources
and technologies between departments, or achieve economic benefits by
scaling production;
Balance inside the Financial perspective
Cascading exercise
Executive level
Sometimes strategists try to be more specific with leading and lagging
metrics, or even take them from third-party lists. I would recommend being
really careful about this. In the early stages, in most cases it is impossible
to come up with indicators (especially leading ones) that reflect the strategy
properly. Such indicators give only a mock sense of control over performance.
Lessons learned
But given today’s business environment, should senior managers even look at
the business from a financial perspective? Should they pay attention to short-
term financial measures like quarterly sales and operating income? Many have
criticized financial measures because of their well-documented inadequacies,
their backward-looking focus, and their inability to reflect contemporary value-
creating actions. Shareholder value analysis (SVA), which forecasts future cash
flows and discounts them back to a rough estimate of current value, is an attempt
to make financial analysis more forward looking. But SVA still is based on cash
flow rather than on the activities and processes that drive cash flow.
Assertions that financial measures are unnecessary are incorrect for at least two
reasons. A well-designed financial control system can actually enhance rather
than inhibit an organization’s total quality management program. (See the insert,
“How One Company Used a Daily Financial Report to Improve Quality.”) More
important, however, the alleged linkage between improved operating
performance and financial success is actually quite tenuous and uncertain. Let
us demonstrate rather than argue this point.
Over the three-year period between 1987 and 1990, a NYSE electronics
company made an order-of-magnitude improvement in quality and on-time
delivery performance. Outgoing defect rate dropped from 500 parts per million to
50, on-time delivery improved from 70% to 96%, and yield jumped from 26% to
51%. Did these breakthrough improvements in quality, productivity, and
customer service provide substantial benefits to the company? Unfortunately not.
During the same three-year period, the company’s financial results showed little
improvement, and its stock price plummeted to one-third of its July 1987 value.
The considerable improvements in manufacturing capabilities had not been
translated into increased profitability. Slow releases of new products and a failure
to expand marketing to new and perhaps more demanding customers prevented
the company from realizing the benefits of its manufacturing achievements. The
operational achievements were real, but the company had failed to capitalize on
them.
As companies improve their quality and response time, they eliminate the need
to build, inspect, and rework out-of-conformance products or to reschedule and
expedite delayed orders. Eliminating these tasks means that some of the people
who perform them are no longer needed. Companies are understandably
reluctant to lay off employees, especially since the employees may have been
the source of the ideas that produced the higher quality and reduced cycle time.
Layoffs are a poor reward for past improvement and can damage the morale of
remaining workers, curtailing further improvement. But companies will not realize
all the financial benefits of their improvements until their employees and facilities
are working to capacity—or the companies confront the pain of downsizing to
eliminate the expenses of the newly created excess capacity.
What you measure is what you get. Senior executives understand that their
organization’s measurement system strongly affects the behavior of managers
and employees. Executives also understand that traditional financial accounting
measures like return-on-investment and earnings-per-share can give misleading
signals for continuous improvement and innovation—activities today’s
competitive environment demands. The traditional financial performance
measures worked well for the industrial era, but they are out of step with the
skills and competencies companies are trying to master today.
financial results will follow.” But managers should not have to choose between
financial and operational measures. In observing and working with many
companies, we have found that senior executives do not rely on one set of
measures to the exclusion of the other. They realize that no single measure can
provide a clear performance target or focus attention on the critical areas of the
business. Managers want a balanced presentation of both financial and
operational measures.
The balanced scorecard allows managers to look at the business from four
important perspectives. (See the exhibit “The Balanced Scorecard Links
Performance Measures.”) It provides answers to four basic questions:
While giving senior managers information from four different perspectives, the
balanced scorecard minimizes information overload by limiting the number of
measures used. Companies rarely suffer from having too few measures. More
commonly, they keep adding new measures whenever an employee or a
consultant makes a worthwhile suggestion. One manager described the
proliferation of new measures at his company as its “kill another tree program.”
The balanced scorecard forces managers to focus on the handful of measures
that are most critical.
Several companies have already adopted the balanced scorecard. Their early
experiences using the scorecard have demonstrated that it meets several
managerial needs. First, the scorecard brings together, in a single management
report, many of the seemingly disparate elements of a company’s competitive
agenda: becoming customer oriented, shortening response time, improving
We will illustrate how companies can create their own balanced scorecard with
the experiences of one semiconductor company—let’s call it Electronic Circuits
Inc. ECI saw the scorecard as a way to clarify, simplify, and then operationalize
the vision at the top of the organization. The ECI scorecard was designed to
focus the attention of its top executives on a short list of critical indicators of
current and future performance.
Many companies today have a corporate mission that focuses on the customer.
“To be number one in delivering value to customers” is a typical mission
statement. How a company is performing from its customers’ perspective has
become, therefore, a priority for top management. The balanced scorecard
demands that managers translate their general mission statement on customer
service into specific measures that reflect the factors that really matter to
customers.
Customers’ concerns tend to fall into four categories: time, quality, performance
and service, and cost. Lead time measures the time required for the company to
meet its customers’ needs. For existing products, lead time can be measured
from the time the company receives an order to the time it actually delivers the
product or service to the customer. For new products, lead time represents the
time to market, or how long it takes to bring a new product from the product
definition stage to the start of shipments. Quality measures the defect level of
incoming products as perceived and measured by the customer. Quality could
also measure on-time delivery, the accuracy of the company’s delivery forecasts.
The combination of performance and service measures how the company’s
products or services contribute to creating value for its customers.
To put the balanced scorecard to work, companies should articulate goals for
time, quality, and performance and service and then translate these goals into
specific measures. Senior managers at ECI, for example, established general
The internal measures for the balanced scorecard should stem from the business
processes that have the greatest impact on customer satisfaction—factors that
affect cycle time, quality, employee skills, and productivity, for example.
Companies should also attempt to identify and measure their company’s core
To achieve goals on cycle time, quality, productivity, and cost, managers must
devise measures that are influenced by employees’ actions. Since much of the
action takes place at the department and workstation levels, managers need to
decompose overall cycle time, quality, product, and cost measures to local
levels. That way, the measures link top management’s judgment about key
internal processes and competencies to the actions taken by individuals that
affect overall corporate objectives. This linkage ensures that employees at lower
levels in the organization have clear targets for actions, decisions, and
improvement activities that will contribute to the company’s overall mission.
A company’s ability to innovate, improve, and learn ties directly to the company’s
value. That is, only through the ability to launch new products, create more value
for customers, and improve operating efficiencies continually can a company
penetrate new markets and increase revenues and margins—in short, grow and
thereby increase shareholder value.
Other companies, like Milliken & Co., require that managers make improvements
within a specific time period. Milliken did not want its “associates” (Milliken’s word
for employees) to rest on their laurels after winning the Baldrige Award.
Chairman and CEO Roger Milliken asked each plant to implement a “ten-four”
improvement program: measures of process defects, missed deliveries, and
scrap were to be reduced by a factor of ten over the next four years. These
targets emphasize the role for continuous improvement in customer satisfaction
and internal business processes.
change in their performance measurement system during the past two years and
39% plan a major change within two years.
Advantages
they are incurred, so reducing profits. But successful research improves future
profits if it can be brought to market.
Disadvantages
Evaluating performance using multiple measures that can conflict in the short
term can also be time-consuming. One bank that adopted a performance
evaluation system using multiple accounting and non-financial measures saw the
time required for area directors to evaluate branch managers increase from less
than one day per quarter to six days.
this deprived them of time that could be better spent serving customers. The
company responded by eliminating most quality reviews, reducing the number of
indicators tracked and minimizing reports and meetings.
The second drawback is that, unlike accounting measures, non-financial data
are measured in many ways; there is no common denominator. Evaluating
performance or making trade-offs between attributes is difficult when some are
denominated in time, some in quantities or percentages, and some in arbitrary
ways.
The lack of an explicit causal model of the relations between measures also
contributes to difficulties in evaluating their relative importance. Without knowing
the size and timing of associations among measures, companies find it difficult to
make decisions or measure success based on them.
Finally, although financial measures are unlikely to capture fully the many
dimensions of organizational performance, implementing an evaluation system
with too many measures can lead to “measurement disintegration”. This occurs
when an overabundance of measures dilutes the effect of the measurement
Once managers have determined that the expected benefits from non-financial
data outweigh the costs, three steps can be used to select and implement
appropriate measures.
While this seems intuitive, experience indicates that companies do a poor job
determining and articulating these drivers. Managers tend to use one of three
methods to identify value drivers, the most common being intuition. However,
executives’ rankings of value drivers may not reflect their true importance. For
example, many executives rate environmental performance and quality as
relatively unimportant drivers of long-term financial performance. In contrast,
statistical analyses indicate these dimensions are strongly associated with a
company’s market value.
Review Consistencies
Most companies track hundreds, if not thousands, of non-financial measures in
their day-to-day operations. To avoid “reinventing the wheel”, an inventory of
current measures should be made. Once measures have been documented,
their value for performance measurement can be assessed. The issue at this
stage is the extent to which current measures are aligned with the company’s
strategies and value drivers. One method for assessing this alignment is “gap
analysis”. Gap analysis requires managers to rank performance measures on at
least two dimensions: their importance to strategic objectives and the importance
currently placed on them.
Integrate Measures
Finally, after measures are chosen, they must become an integral part of
reporting and performance evaluation if they are to affect employee behavior and
organizational performance. This is not easy. Since the choice of performance
measures has a substantial impact on employees’ careers and pay, controversy
is bound to emerge no matter how appropriate the measures. Many companies
have failed to benefit from non-financial performance measures through being
reluctant to take this step.
Conclusion
Although non-financial measures are increasingly important in decision-making
and performance evaluation, companies should not simply copy measures used
by others. The choice of measures must be linked to factors such as corporate
strategy, value drivers, organizational objectives and the competitive
environment. In addition, companies should remember that performance
measurement choice is a dynamic process – measures may be appropriate
today, but the system needs to be continually reassessed as strategies and
competitive environments evolve.
Regression model.
In simple linear regression, the model used to describe the relationship
between a single dependent variable y and a single independent variable x is
y = a0 + a1x + ε. Here a0 and a1 are referred to as the model parameters, and
ε is a probabilistic error term that accounts for the variability in y that
cannot be explained by the linear relationship with x. If the error term were
not present, the model would be deterministic; in that case, knowledge of the
value of x would be sufficient to determine the value of y.
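As a sketch, the least-squares estimates of a0 and a1 can be computed directly from sample data. The data points below are hypothetical, for illustration only:

```python
# Least-squares fit of the simple linear model y = a0 + a1*x + error.
# The (x, y) points below are hypothetical, for illustration only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.3, 6.2, 8.1, 9.9]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope a1 = covariance(x, y) / variance(x); intercept a0 = mean_y - a1*mean_x
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
a1 = sxy / sxx
a0 = mean_y - a1 * mean_x

print(round(a0, 2), round(a1, 2))  # intercept and slope of the fitted line
```

For these points the fitted line is approximately y = 0.30 + 1.94x; knowing x then gives a prediction for y, with ε absorbing the remaining scatter.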
Correlation.
Correlation and regression analysis are related in the sense that both deal with
relationships among variables. The correlation coefficient is a measure of linear
association between two variables. Values of the correlation coefficient are
always between -1 and +1. A correlation coefficient of +1 indicates that two
variables are perfectly related in a positive linear sense, a correlation coefficient
of -1 indicates that two variables are perfectly related in a negative linear sense,
and a correlation coefficient of 0 indicates that there is no linear relationship
between the two variables. For simple linear regression, the sample correlation
coefficient is the square root of the coefficient of determination, with the sign of
the correlation coefficient being the same as the sign of b1, the coefficient of x1
in the estimated regression equation.
In this section we will first discuss correlation analysis, which is used to quantify
the association between two continuous variables (e.g., between an independent
and a dependent variable or between two independent variables). Regression
analysis is a related technique to assess the relationship between an outcome
variable and one or more risk factors or confounding variables. The outcome
variable is also called the response or dependent variable and the risk factors
and confounders are called the predictors, or explanatory or independent
variables. In regression analysis, the dependent variable is denoted "y" and
independent variables are denoted by "x".
Correlation Analysis
The figure below shows four hypothetical scenarios in which one continuous
variable is plotted along the X-axis and the other along the Y-axis.
We wish to estimate the association between gestational age and infant birth
weight. In this example, birth weight is the dependent variable and gestational
age is the independent variable. Thus y=birth weight and x=gestational age. The
data are displayed in a scatter diagram in the figure below.
Each point represents an (x,y) pair (in this case the gestational age, measured in
weeks, and the birth weight, measured in grams). Note that the independent
variable is on the horizontal axis (or X-axis), and the dependent variable is on the
vertical axis (or Y-axis). The scatter plot shows a positive or direct association
between gestational age and birth weight. Infants with shorter gestational ages
are more likely to be born with lower weights and infants with longer gestational
ages are more likely to be born with higher weights.
The variances of x and y measure the variability of the x scores and y scores
around their respective sample means.
We first summarize the gestational age data by computing its mean. Next, we
summarize the birth weight data by computing its mean. The variance of birth
weight is computed just as it was for gestational age. Finally, the deviations
from the mean gestational age and the mean birth weight are multiplied
pairwise and summed to obtain the covariance term.
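A minimal sketch of these computations, using hypothetical (gestational age, birth weight) pairs since the original data table is not reproduced here:

```python
# Sample correlation coefficient r = Sxy / sqrt(Sxx * Syy).
# Hypothetical (gestational age in weeks, birth weight in grams) pairs;
# the original data table is not reproduced here.
import math

pairs = [(34, 1895), (36, 2030), (38, 3130), (39, 2900), (40, 3325), (41, 3490)]
n = len(pairs)
mean_x = sum(x for x, _ in pairs) / n  # mean gestational age
mean_y = sum(y for _, y in pairs) / n  # mean birth weight

# Sums of squared deviations and of cross-products of deviations
sxx = sum((x - mean_x) ** 2 for x, _ in pairs)
syy = sum((y - mean_y) ** 2 for _, y in pairs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in pairs)

r = sxy / math.sqrt(sxx * syy)
print(round(r, 3))
```

For these made-up pairs r comes out close to +1, consistent with the positive association seen in the scatter plot: longer gestational ages go with higher birth weights.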
b. Gantt Chart
A Gantt chart is a type of bar chart, devised by Henry Gantt in the 1910s, that
illustrates a project schedule. Gantt charts illustrate the start and finish dates of
the terminal elements and summary elements of a project. Terminal elements
and summary elements comprise the work breakdown structure of the project.
Modern Gantt charts also show the dependency (i.e., precedence network)
relationships between activities. Gantt charts can be used to show current
schedule status using percent-complete shadings and a vertical "TODAY" line as
shown here.
Although now regarded as a common charting technique, Gantt charts were
considered revolutionary when first introduced.[1] This chart is also used
in information technology to represent data that has been collected.
Historical development
The first known tool of this type was developed in 1896 by Karol Adamiecki, who
called it a harmonogram.[2] Adamiecki did not publish his chart until 1931,
however, and only in Polish, which limited both its adoption and recognition of his
authorship. The chart is named after Henry Gantt (1861–1919), who designed
his chart around the years 1910–1915.[3][4]
One of the first major applications of Gantt charts was by the United States
during World War I, at the instigation of General William Crozier.[5]
In the 1980s, personal computers allowed widespread creation of complex and
elaborate Gantt charts. The first desktop applications were intended mainly for
project managers and project schedulers. With the advent of the Internet and
increased collaboration over networks at the end of the 1990s, Gantt charts
became a common feature of web-based applications, including
collaborative groupware.
Example
In the following table there are seven tasks, labeled a through g. Some tasks can
be done concurrently (a and b) while others cannot be done until their
predecessor task is complete (c and d cannot begin until a is complete).
Additionally, each task has three time estimates: the optimistic time estimate (O),
the most likely or normal time estimate (M), and the pessimistic time estimate
(P). The expected time (TE) is estimated using the beta probability distribution for
the time estimates, with the formula (O + 4M + P) ÷ 6.
Time estimates
Activity  Predecessor  Opt. (O)  Normal (M)  Pess. (P)  Expected time
a         —            2         4           6          4.00
b         —            3         5           9          5.33
c         a            4         5           7          5.17
d         a            4         6           10         6.33
e         b, c         4         5           7          5.17
f         d            3         4           8          4.50
g         e            3         5           8          5.17
Once this step is complete, one can draw a Gantt chart or a network diagram.
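The expected-time formula can be sketched as follows, using the activity data from the table above:

```python
# Expected time TE = (O + 4M + P) / 6 for each activity in the table above,
# where O, M, P are the optimistic, most likely, and pessimistic estimates.
tasks = {
    "a": (2, 4, 6), "b": (3, 5, 9), "c": (4, 5, 7),
    "d": (4, 6, 10), "e": (4, 5, 7), "f": (3, 4, 8), "g": (3, 5, 8),
}
expected = {k: round((o + 4 * m + p) / 6, 2) for k, (o, m, p) in tasks.items()}
print(expected)
```

Running this reproduces the Expected time column: 4.00 for a, 5.33 for b, 5.17 for c, and so on.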
non-critical activities, (3) since Saturday and Sunday are not work days
and are thus excluded from the schedule, some bars on the Gantt chart
are longer if they cut through a weekend.
Further applications
Gantt charts can be used for scheduling generic resources as well as
project management. They can also be used for scheduling production
processes and employee rostering.[6] In the latter context, they may also be
known as timebar schedules. Gantt charts can be used to track shifts or
tasks and also vacations or other types of out-of-office time.[7]
Specialized employee scheduling software may output schedules as a
Gantt chart, or they may be created through popular desktop publishing
software.
The program evaluation review technique (PERT) and critical path method
(CPM) are tools useful in planning, scheduling, and managing complex projects.
PERT/CPM (sometimes referred to as network analysis) provides a focus around
which managers and project planners can brainstorm. It is useful for evaluating
the performance of individuals and teams. The key concept in CPM/PERT is that
a small set of activities, which make up the longest path through the activity
network, control the entire project. If these critical activities can be identified and
assigned to the responsible persons, management resources can be optimally
used by concentrating on the few activities that determine the fate of the entire
project. Noncritical activities can be replanned or rescheduled, and resources for
them can be reallocated flexibly, without affecting the whole project.
There are many variations of CPM/PERT which have been useful in planning
costs and scheduling manpower and machine time. CPM/PERT can answer the
following important questions: 1) How long will the entire project take? What are
the risks involved? 2) Which are the critical activities or tasks in the project which
could delay everything if they are not completed on time? 3) Is the project on
Overview
PERT is a method of analyzing the tasks involved in completing a given project,
especially the time needed to complete each task, and to identify the minimum
time needed to complete the total project. It incorporates uncertainty by making it
possible to schedule a project while not knowing precisely the details
and durations of all the activities. It is more of an event-oriented technique rather
than start- and completion-oriented, and is used more in projects where time is
the major factor rather than cost. It is applied to very large-scale, one-time,
complex, non-routine infrastructure and Research and Development projects.
Program Evaluation Review Technique (PERT) offers a management tool, which
relies "on arrow and node diagrams of activities and events: arrows represent
the activities or work necessary to reach the events or nodes that indicate each
completed phase of the total project." [1]
PERT and CPM are complementary tools, because "CPM employs one time
estimate and one cost estimate for each activity; PERT may utilize three time
estimates (optimistic, expected, and pessimistic) and no costs for each activity.
Although these are distinct differences, the term PERT is applied increasingly to
all critical path scheduling."[1]
History
PERT was developed primarily to simplify the planning and scheduling of large
and complex projects. It was developed for the U.S. Navy Special Projects
Office in 1957 to support the U.S. Navy's Polaris nuclear submarine project.[2]
It found applications all over industry. An early example is its use for
the 1968 Winter Olympics in Grenoble, which applied PERT from 1965 until the
opening of the 1968 Games.[3] This project model was the first of its kind, a
revival of scientific management, founded by Frederick Taylor (Taylorism) and
later refined by Henry Ford (Fordism). DuPont's critical path method was
invented at roughly the same time as PERT.
Terminology
Events and activities
In a PERT diagram, the event is the main building block; each event has known
predecessor events and successor events:
PERT event: a point that marks the start or completion of one or more
activities. It consumes no time and uses no resources. When it marks the
completion of one or more activities, it is not "reached" (does not occur)
until all of the activities leading to that event have been completed.
PERT activity: the actual performance of a task which consumes time and
requires resources (such as labor, materials, space, machinery). It can be
understood as representing the time, effort, and resources required to move
from one event to another. A PERT activity cannot be performed until the
predecessor event has occurred.
PERT sub-activity: a PERT activity can be further decomposed into a set of
sub-activities. For example, activity A1 can be decomposed into A1.1, A1.2
and A1.3. Sub-activities have all the properties of activities; in particular, a
sub-activity has predecessor or successor events just like an activity. A sub-
activity can be decomposed again into finer-grained sub-activities.
Time
PERT defines four types of time required to accomplish an activity: the
optimistic time (o), the most likely or normal time (m), the pessimistic
time (p), and the expected time (te).
Implementation
The first step to scheduling the project is to determine the tasks that the
project requires and the order in which they must be completed. The
order may be easy to record for some tasks (e.g., when building a
house, the land must be graded before the foundation can be laid)
while difficult for others (there are two areas that need to be graded,
but there are only enough bulldozers to do one). Additionally, the time
estimates usually reflect the normal, non-rushed time. Many times, the
time required to execute the task can be reduced for an additional cost
or a reduction in the quality.
Example
In the following example there are seven tasks, labeled A through G.
Some tasks can be done concurrently (A and B) while others cannot be
done until their predecessor task is complete (C cannot begin until A is
complete). Additionally, each task has three time estimates: the
optimistic time estimate (o), the most likely or normal time estimate (m),
and the pessimistic time estimate (p). The expected time (te) is
computed using the formula (o + 4m + p) ÷ 6.
Time estimates
Activity  Predecessor  Opt. (o)  Normal (m)  Pess. (p)  Expected time
A         —            2         4           6          4.00
B         —            3         5           9          5.33
C         A            4         5           7          5.17
D         A            4         6           10         6.33
E         B, C         4         5           7          5.17
F         D            3         4           8          4.50
G         E            3         5           8          5.17
Once this step is complete, one can draw a Gantt chart or a network
diagram.
A node like this one (from Microsoft Visio) can be used to display the
activity name, duration, ES, EF, LS, LF, and slack.
By itself, the network diagram pictured above does not give much more
information than a Gantt chart; however, it can be expanded to display more
information. The most common information shown is the activity name, the
duration, the early start (ES), early finish (EF), late start (LS), late
finish (LF), and the slack.
Probability analysis
To calculate the EV for a single discrete random variable, you must multiply
value of the variable by the probability of that value occurring. Take, for example,
a normal six-sided die. Once you roll the die, it has an equal one-sixth chance of
landing on one, two, three, four, five or six. Given this information, the calculation
is straightforward:
(1/6 * 1) + (1/6 * 2) + (1/6 * 3) + (1/6 * 4) + (1/6 * 5) + (1/6 * 6) = 3.5
If you were to roll a six-sided die an infinite number of times, you would see
that the average value equals 3.5.
Half of the time, the value of the first roll will be below the EV of 3.5, or a one,
two or three, and half the time, it will be above 3.5, or a four, five or six. When the
first roll is below 3.5, you should roll again, otherwise you should stick with the
first roll.
Thus, half the time you keep the first roll (a four, five, or six, which
average 5), and half the time you take the second roll, with an EV of 3.5. The
expected value of this scenario is (1/2 × 5) + (1/2 × 3.5) = 4.25.
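This calculation can be verified with a short enumeration over the six faces, using exact fractions to avoid rounding:

```python
from fractions import Fraction

faces = range(1, 7)
p = Fraction(1, 6)  # each face is equally likely

# EV of a single roll: sum of value * probability
ev_single = sum(p * f for f in faces)

# Strategy: keep a first roll above the EV (a 4, 5, or 6); otherwise
# roll again, and the second roll is worth its expected value.
ev_strategy = sum(p * (f if f > ev_single else ev_single) for f in faces)

print(float(ev_single), float(ev_strategy))  # 3.5 and 4.25
```

The re-roll option lifts the expected value from 3.5 to 4.25, which is why the "roll again when below 3.5" rule is worth following.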
A decision tree can help break down a complex decision. The following covers
what decision trees are, how they are used, and how to make one.
A decision tree typically starts with a single node, which branches into possible
outcomes. Each of those outcomes leads to additional nodes, which branch off
into other possibilities. This gives it a treelike shape.
There are three different types of nodes: chance nodes, decision nodes, and end
nodes. A chance node, represented by a circle, shows the probabilities of certain
results. A decision node, represented by a square, shows a decision to be made,
and an end node shows the final outcome of a decision path.
Decision trees can also be drawn with flowchart symbols, which some people
find easier to read and understand.
Decision node: indicates a decision to be made
Chance node: shows multiple uncertain outcomes
Rejected alternative: shows a choice that was not selected
Endpoint node: indicates a final outcome
To draw a decision tree, first pick a medium. You can draw it by hand on paper
or a whiteboard, or you can use special decision tree software. In either case,
here are the steps to follow:
1. Start with the main decision. Draw a small box to represent this point, then
draw a line from the box to the right for each possible solution or action. Label
them accordingly.
2. From each decision node, draw possible solutions. From each chance node,
draw lines representing possible outcomes. If you intend to analyze your options
numerically, include the probability of each outcome and the cost of each action.
3. Continue to expand until every line reaches an endpoint, meaning that there
are no more choices to be made or chance outcomes to consider. Then, assign a
value to each possible outcome. It could be an abstract score or a financial
value. Add triangles to signify endpoints.
With a complete decision tree, you’re now ready to begin analyzing the decision
you face.
By calculating the expected utility or value of each choice in the tree, you can
minimize risk and maximize the likelihood of reaching a desirable outcome.
To calculate the expected utility of a choice, just subtract the cost of that decision
from the expected benefits. The expected benefits are equal to the total value of
all the outcomes that could result from that choice, with each value multiplied by
the likelihood that it’ll occur. Here’s how we’d calculate these values for the
example we made above:
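The worked figures from the original exhibit are not reproduced here; as a stand-in, here is a minimal sketch of the calculation with illustrative numbers (the payoffs, probabilities, and cost below are ours):

```python
def expected_utility(outcomes, cost):
    """Expected benefit (sum of value x probability) minus the cost of the choice."""
    return sum(p * v for p, v in outcomes) - cost

# Hypothetical choice: 50% chance of a $40,000 payoff, 50% chance of $10,000,
# at an upfront cost of $18,000 (illustrative figures, not from the text).
eu = expected_utility([(0.5, 40_000), (0.5, 10_000)], cost=18_000)
print(eu)  # 0.5*40,000 + 0.5*10,000 - 18,000 = 7,000.0
```

Each branch of the tree gets its own expected-utility figure computed this way, and the branch with the highest figure is the candidate decision.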
When identifying which outcome is the most desirable, it’s important to take the
decision maker’s utility preferences into account. For instance, some may prefer
low-risk options while others are willing to take risks for a larger benefit.
When you use your decision tree with an accompanying probability model, you
can use it to calculate the conditional probability of an event, or the likelihood that
it’ll happen, given that another event happens. To do so, simply start with the
initial event, then follow the path from that event to the target event, multiplying
the probability of each of those events together.
In this way, a decision tree can be used like a traditional tree diagram,
which maps out the probabilities of certain events, such as flipping a coin twice.
A decision tree can also be used to help build automated predictive models,
which have applications in machine learning, data mining, and statistics. Known
as decision tree learning, this method takes into account observations about an
item to predict that item’s value.
In these decision trees, nodes represent data rather than decisions. This type of
tree is also known as a classification tree. Each branch contains a set of
attributes, or classification rules, that are associated with a particular class label,
which is found at the end of the branch.
These rules, also known as decision rules, can be expressed in an if-then clause,
with each decision or data value forming a clause, such that, for instance, “if
conditions 1, 2 and 3 are fulfilled, then outcome x will be the result with y
certainty.”
Each additional piece of data helps the model more accurately predict which of a
finite set of values the subject in question belongs to. That information can then
be used as an input in a larger decision making model.
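An if-then rule set of this kind can be sketched as a small Python function (the attribute names, values, and labels below are illustrative, not from the text):

```python
def classify(item, rules, default="unknown"):
    """Walk an ordered list of (conditions, label) decision rules and return
    the label of the first rule whose conditions all hold for the item."""
    for conditions, label in rules:
        if all(item.get(attr) == value for attr, value in conditions.items()):
            return label
    return default

# Hypothetical decision rules, e.g. "if outlook is sunny and humidity is high,
# then don't play" (illustrative attributes, not from the text).
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "don't play"),
    ({"outlook": "sunny", "humidity": "normal"}, "play"),
    ({"outlook": "overcast"}, "play"),
]
print(classify({"outlook": "sunny", "humidity": "high"}, rules))  # don't play
```

Each rule corresponds to one root-to-leaf path in the classification tree, with the leaf's class label as the rule's conclusion.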
For increased accuracy, multiple trees are sometimes used together in ensemble
methods, in which several trees are built and their predictions are combined.
A decision tree is considered optimal when it represents the most data with the
fewest number of levels or questions. Algorithms designed to create optimized
decision trees include CART, ASSISTANT, CLS and ID3/4/5. A decision tree can
also be created by building association rules, placing the target variable on the
right.
Each method has to determine which is the best way to split the data at each
level. Common methods for doing so include measuring the Gini impurity,
information gain, and variance reduction.
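As a sketch of one such splitting measure, Gini impurity can be computed in a few lines of Python (an illustration, not from the reviewer text):

```python
def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum of squared class
    proportions. 0.0 means the node is pure; higher means more mixing."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_of_split(left, right):
    """Weighted Gini impurity after splitting one node into two children."""
    n = len(left) + len(right)
    return len(left) / n * gini_impurity(left) + len(right) / n * gini_impurity(right)

# A split that separates the classes perfectly drives the impurity to zero.
mixed = ["yes", "yes", "no", "no"]
print(gini_impurity(mixed))                         # 0.5
print(gini_of_split(["yes", "yes"], ["no", "no"]))  # 0.0
```

At each level, the algorithm chooses the split that most reduces the weighted impurity of the children relative to the parent node.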
Advantages of decision trees:
- The cost of using the tree to predict data decreases with each additional data point
- Works for either categorical or numerical data
- Can model problems with multiple outputs
- Uses a white box model (making results easy to explain)
- A tree's reliability can be tested and quantified
- Tends to be accurate regardless of whether it violates the assumptions of source data

Disadvantages:
- When dealing with categorical data with multiple levels, the information gain is biased in favor of the attributes with the most levels
- Calculations can become complex when dealing with uncertainty and lots of linked outcomes
- Conjunctions between nodes are limited to AND, whereas decision graphs allow for nodes linked by OR
f. Learning Curve
A learning curve is a graphical representation of the increase of learning (vertical
axis) with experience (horizontal axis).
Fig 1: Learning curve for a single subject, showing how learning improves with
experience
Fig 2: A learning curve averaged over many trials is smooth, and can be
expressed as a mathematical function
The term learning curve is used in two main ways: where the same task is
repeated in a series of trials, or where a body of knowledge is learned over time.
The first person to describe the learning curve was Hermann Ebbinghaus in
1885, in the field of the psychology of learning, although the name wasn't used
until 1909.[1][2] In 1936, Theodore Paul Wright described the effect of learning
on production costs in the aircraft industry.[3] This form, in which unit cost is
plotted against total production, is sometimes called an experience curve.
The familiar expression "a steep learning curve" is intended to mean that the
activity is difficult to learn, although a learning curve with a steep start actually
represents rapid progress.[4][5]
In psychology
The first person to describe the learning curve was Hermann Ebbinghaus in
1885. His tests involved memorizing series of nonsense syllables, and recording
the success over a number of trials. The translation does not use the
term learning curve, but he presents diagrams of learning against trial number.
He also notes that the score can decrease, or even oscillate.[5][6][7]
The first known use of the term learning curve is from 1909: "Bryan and Harter
(6) found in their study of the acquisition of the telegraphic language a learning
curve which had the rapid rise at the beginning followed by a period of
retardation, and was thus convex to the vertical axis."[2][5]
Psychologist Arthur Bills gave a more detailed description of learning curves in
1934. He also discussed the properties of different types of learning curves, such
as negative acceleration, positive acceleration, plateaus, and ogive curves.[8]
In economics
In 1936, Theodore Paul Wright described the effect of learning on production
costs in the aircraft industry and proposed a mathematical model of the learning
curve.[3]
In 1968 Bruce Henderson of the Boston Consulting Group (BCG) generalized the
Unit Cost model pioneered by Wright, and specifically used a Power Law, which
is sometimes called Henderson's Law. He named this particular version
the experience curve.[9][10] Research by BCG in the 1970s observed experience
curve effects for various industries that ranged from 10 to 25 percent.[11]
The economic learning of productivity and efficiency generally follows the same
kinds of experience curves and has interesting secondary effects. Efficiency
and productivity improvement can be considered as whole organization or
industry or economy learning processes, as well as for individuals. The general
pattern is of first speeding up and then slowing down, as the practically
achievable level of methodology improvement is reached. The effect of reducing
local effort and resource use by learning improved methods paradoxically often
has the opposite latent effect on the next larger scale system, by facilitating its
expansion, or economic growth, as discussed in the Jevons paradox in the
1880s and updated in the Khazzoom-Brookes Postulate in the 1980s.
Exponential growth
The proficiency can increase without limit, as in Exponential growth (Fig
4)
Power law
This is similar in appearance to an Exponential decay function, and is
almost always used for a decreasing performance metric, such as cost.
(Fig 6) It also has the property that if you plot the logarithm of
proficiency against the logarithm of experience the result is a straight
line, and it is often presented that way.
The specific case of a plot of Unit Cost versus Total Production with a
Power Law was named the Experience Curve: the mathematical
function is sometimes called Henderson's Law.
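The power-law form can be sketched numerically. Assuming Wright's unit-cost model with an 80 percent curve (the rate and first-unit cost below are illustrative), each doubling of cumulative output multiplies unit cost by 0.8:

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate=0.8):
    """Wright/Henderson power-law curve: each doubling of cumulative output
    multiplies unit cost by the learning rate (e.g. 0.8 for an '80% curve')."""
    b = math.log(learning_rate) / math.log(2)  # negative exponent
    return first_unit_cost * cumulative_units ** b

# With an 80% curve and a $100 first unit, unit cost at 1, 2, 4 and 8 units:
for x in (1, 2, 4, 8):
    print(x, round(unit_cost(100, x), 2))  # 100.0, 80.0, 64.0, 51.2
```

Taking logarithms of both sides gives log(cost) = log(a) + b log(x), which is why the curve plots as a straight line on log-log axes, as noted above.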
In machine learning
Plots relating performance to experience are widely used in machine learning. Performance is the error rate or accuracy of the learning system, while experience may be the number of training examples used for learning or the number of iterations used in optimizing the system model parameters.[16]
The machine learning curve is useful for many purposes including comparing different algorithms,[17] choosing model parameters during design,[18] adjusting optimization to improve convergence, and determining the amount of data used for training.[19]
Broader interpretations
Initially introduced in educational and behavioral psychology, the term has acquired a broader interpretation over time, and expressions such as "experience curve", "improvement curve", "cost improvement curve", "progress curve", "progress function", "startup curve", and "efficiency curve" are often used interchangeably. In economics the subject is rates of "development", as development refers to a whole system learning process with varying rates of progression. Generally speaking, all learning displays incremental change over time, but describes an "S" curve which has different appearances depending on the time scale of observation. It has now also become associated with the evolutionary theory of punctuated equilibrium and other kinds of revolutionary change in complex systems generally, relating to innovation, organizational behavior and the management of group learning, among other fields.[20]
In culture
"Steep learning curve"
The expression steep learning curve is used with opposite meanings. Most sources, including the Oxford English Dictionary, the American Heritage Dictionary of the English Language, and Merriam-Webster's Collegiate Dictionary, define a learning curve as the rate at which skill is acquired, so a steep increase would mean a quick increment of skill.[4][22]
However, the term is often used in common English with the meaning of a difficult initial learning process.[5][22] L. Ron Hubbard's Study Tech uses "study gradient" in the same sense, where "steep" means difficult.
Arguably, the common English use is due to metaphorical interpretation of the curve as a hill to climb. (A steeper hill is initially hard, while a gentle slope is less strainful, though sometimes rather tedious. Accordingly, the shape of the curve (hill) may not indicate the total amount of work required. Instead, it can be understood as a matter of preference related to ambition, personality and learning style.)
Inventory Management
Definition
Purposes of inventory
Inventory costs
Inventory models
o Economic Order Quantity
o Quantity Discount
Definition
ABC Analysis -- classify inventory into 3 groups according to its annual dollar
volume/usage
An example:
Class A: top 80% of total dollar volume
Class B: next 15%
Class C: last 5%
Total: 100%
Exercise
Purposes of inventory
Inventory costs
2. Setup or ordering costs: cost involved in placing an order or setting up the
equipment to make the product
Annual ordering cost = no. of orders placed in a year x cost per order
= annual demand/order quantity x cost per order
Assumptions
2. No stockout
What is the order quantity such that the total cost is minimized?
2. Optimal order quantity (Q*) is found when annual holding cost = ordering cost
4. Time between orders = No. of working days per year / number of orders
Example:
Given:
Annual Demand = 60,000
Ordering cost = $25 per order
Holding cost = $3 per item per year
No. of working days per year = 240
Then, it can be computed:
Q* = 1000
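The computation can be verified with a short script (a sketch; it applies the standard EOQ formula, square root of 2DO/C, with the figures given above):

```python
import math

def eoq(annual_demand, ordering_cost, holding_cost):
    """Economic order quantity: Q* = sqrt(2 * D * O / C)."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

D, O, C = 60_000, 25, 3            # figures from the example above
q_star = eoq(D, O, C)
orders_per_year = D / q_star
annual_ordering = orders_per_year * O
annual_holding = q_star / 2 * C    # average inventory on hand is Q*/2
print(q_star, orders_per_year, annual_ordering, annual_holding)
# Q* = 1,000 units; 60 orders per year; annual ordering cost $1,500
# equals annual holding cost $1,500, confirming Q* is optimal
```

With 240 working days and 60 orders per year, the time between orders works out to 4 working days.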
In class exercise
Exercise
Case 1
Annual Demand = 100 per year
Order quantity   Price per unit
50 or less       $18
51 to 59         $16
60 or more       $12
Case 2
50 or less $18 50
51 to 99 $16 54
Need to compare:
Case 3
56 to 99 $16 54 Infeasible
Need to compare:
Total cost (Q=50), Total cost (Q=56) and Total cost (Q=100)
Pg.540, problem 7b
Exercise
Pg. 540, problems 12, 26
EOQ Model
EOQ = √(2DO / C)
Where:
D = annual demand
O = ordering cost per order
C = carrying (holding) cost per unit per year
Safety Stock
Reorder Point
Without safety stock: reorder point = normal lead time x average usage
With safety stock: reorder point = normal lead-time usage + safety stock = maximum lead
time x average usage
h. Linear Programming (Graphic Method; Algebraic Method)
INTRODUCTION
We will use the following Bridgeway Company case to introduce the graphical
method and illustrate how it solves LP maximization problems. Bridgeway
Company manufactures a printer and keyboard. The contribution margins of the
printer and keyboard are $30 and $20, respectively. Two types of skilled labor
are required to manufacture these products: soldering and assembling. A printer
requires 2 hours of soldering and 1 hour of assembling. A keyboard requires 1
hour of soldering and 1 hour of assembling. Bridgeway has 1,000 soldering
hours and 800 assembling hours available per week. There are no constraints on
the supply of raw materials. Demand for keyboards is unlimited, but at most 350
printers are sold each week. Bridgeway wishes to maximize its weekly total
contribution margin.
The objective function may be written max Z = $30X + $20Y, where the variable Z
denotes the objective function value of any LP problem. In the Bridgeway case, Z
equals the total contribution margin that will be realized when an optimal mix of
products X (printer) and Y (keyboard) is manufactured and sold.
Constraint 1. Each week, no more than 1,000 hours of soldering time may be
used. Thus, constraint 1 may be expressed by:
2X + Y ≤ 1,000
Constraint 2. Each week, no more than 800 hours of assembling time may be
used. Thus, constraint 2 may be expressed by:
X + Y ≤ 800
Constraint 3. At most 350 printers can be sold each week. Thus, constraint 3
may be expressed by:
X ≤ 350
X >= 0
Y >= 0
These four steps and the formulation of the LP problem for Bridgeway are
summarized in Exhibit 16-1. This LP problem provides the necessary data to
develop a graphical solution.
Choose production levels for printers (X) and keyboards (Y) that:
max Z = $30X + $20Y (objective function)
and satisfy the following:
2X + Y <= 1,000 (soldering time constraint)
X + Y <= 800 (assembling time constraint)
X <= 350 (demand constraint for printers)
X >= 0 (sign restriction)
Y >= 0 (sign restriction)
The following are two of the most basic concepts associated with LP:
Feasible region
Optimal solution
Step 1. Graphically determine the feasible region. Step 2. Search for the optimal
solution.
Bridgeway case, the feasible region is the set of all points (X, Y) satisfying the
constraints in Exhibit 16-1.
For a point (X, Y) to be in the feasible region, (X, Y) must satisfy all the above
inequalities. A graph containing these constraint equations is shown in Exhibit
16-2. Note that the only points satisfying the nonnegativity constraints are the
points in the first quadrant of the X, Y plane. This is indicated by the arrows
pointing to the right from the y-axis and upward from the x-axis. Thus, any point
that is outside the first quadrant cannot be in the feasible region.
In plotting equation 2X + Y <= 1,000 on the graph, the following questions are
asked: How much of product X could be produced if all resources were allocated
to it? In this equation, a total of 1,000 hours of soldering time is available. If all
1,000 hours are allocated to product X, 500 printers can be produced each week.
On the other hand, how much of product Y could be produced if all resources
were allocated to it? If all 1,000 soldering hours are allocated to produce Y, then
1,000 keyboards can be produced each week. Thus, the line on the graph
expressing the soldering time constraint equation extends from the 500-unit point
A on the x-axis to the 1,000-unit point B on the y-axis.
The equation associated with the assembling capacity constraint has been
plotted on the graph in a similar manner. If 800 assembling hours are allocated to
product X, then 800 printers can be produced. If, on the other hand, 800
assembling hours are allocated to product Y, then 800 keyboards can be
produced. This analysis results in line CD.
Since equation X <= 350 concerns only product X, the line expressing the
equation on the graph does not touch the y-axis at all. It extends from the 350-
unit point E on the x-axis and runs parallel to the y-axis, thereby signifying that
regardless of the number of units of X produced, no more than 350 units of X can
ever be sold.
Exhibit 16-2 shows that the set of points in the quadrant that satisfies all
constraints is bounded by the five-sided polygon HDGFE. Any point on this
polygon or in its interior is in the feasible region. Any other point fails to satisfy at
least one of the inequalities and thus falls outside the feasible region.
To find the optimal solution, we graph lines so that all points on a particular line
have the same Z-value. In a maximization problem, such lines are called isoprofit
lines; in a minimization problem, they are called isocost lines. The parallel lines
are created by assigning various values to Z in the objective function to provide
either higher profits or lower costs.
A graph showing the isoprofit lines for Bridgeway Company appears in Exhibit
16-3. The isoprofit lines are broken to differentiate them from the lines that form
the feasible region. To draw an isoprofit line, any Z-value is chosen, then the x-
and y-intercepts are calculated. For example, a contribution margin value of
$6,000 gives a line with intercepts at 200 printers and 300 keyboards:
X = 200 Y = 300
Since all isoprofit lines are of the form $30X + $20Y = contribution margin, they
all have the same slope. Consequently, once an isoprofit line is drawn, all other
isoprofit lines can be found by moving parallel to the initial line. Another isoprofit
line, for a contribution margin of $9,000, has intercepts at:
X = 300 Y = 450
Isoprofit lines move in a northeast direction; that is, upward and to the right. After
a while, the isoprofit lines will no longer intersect the feasible region. The isoprofit
line intersecting the last vertex of the feasible region defines the largest Z-value
of any point in the feasible region and indicates the optimal solution to the LP
problem. In Exhibit 16-3, the isoprofit line passing through point G is the last
isoprofit line to intersect the feasible region. Thus, point G is the point in the
feasible region with the largest Z-value and is therefore the optimal solution to
the Bridgeway problem. Note that point G is located at the intersection of lines
2X + Y = 1,000 and X + Y = 800. Solving these two equations simultaneously
results in:
X = 200
Y = 600
The optimal value of Z (i.e., the total contribution margin) may be found by
substituting these values of X and Y into the objective function. Thus, the optimal
value of Z is:
Z = $30(200) + $20(600) = $18,000
The five corners of the feasible region, designated by HDGFE, will yield different
product mixes between X and Y. The calculations are presented in Exhibit 16-4,
starting at the origin and going clockwise around the feasible region. These
calculations also show that the optimal production mix is 200 printers and 600
keyboards. Any other production mix will result in a lower total contribution
margin.
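Because an LP optimum always lies at a corner of the feasible region, the Bridgeway solution can be verified by scoring each corner point (a sketch; the corner coordinates are read from the polygon HDGFE described above):

```python
def evaluate_corners(corners, objective):
    """Score each corner point of the feasible region; the optimum of an LP
    always occurs at one of them."""
    return max(corners, key=objective)

# Corner points (X printers, Y keyboards) of Bridgeway's feasible region.
corners = [(0, 0), (0, 800), (200, 600), (350, 300), (350, 0)]
cm = lambda p: 30 * p[0] + 20 * p[1]  # contribution margin objective
best = evaluate_corners(corners, cm)
print(best, cm(best))  # (200, 600) 18000
```

Evaluating all five corners reproduces Exhibit 16-4: the mix of 200 printers and 600 keyboards gives the largest total contribution margin, $18,000.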
We will use the following K9 Kondo Company case to demonstrate how the
graphical method solves LP minimization problems. The K9 Kondo Company
manufactures climate-controlled doghouses. The company believes that its high-
volume customers are high-income male and female dog owners who want to
pamper their pets. To reach these groups, the marketing manager at K9 Kondo
is considering placing one-minute commercials on the following national TV
shows: “New York Dog Show” and “Man's Best Friend.”
A one-minute commercial on “New York Dog Show” costs $200,000, and a one-
minute commercial on “Man's Best Friend” costs $50,000. The marketing
manager would like the commercials to be seen by at least 60 million high-
income women and at least 36 million high-income men. Marketing studies show
the following:
Each one-minute commercial on “New York Dog Show” is seen by six
million high-income women and two million high-income men.
Each one-minute commercial on “Man's Best Friend” is seen by three
million high-income women and three million high-income men.
Constructing the LP problem for minimization of the objective function follows the
same steps used in constructing the LP problem for maximization of the objective
function:
6X + 3Y >= 60
2X + 3Y >= 36
These four steps and the formulation of the LP problem for K9 Kondo are
summarized in Exhibit 16-6. This LP problem provides the necessary data to
develop a graphical solution.
Choose the number of commercials on “New York Dog Show” (X) and “Man’s Best Friend” (Y) that:
min Z = $200X + $50Y (objective function)
and satisfy the following:
6X + 3Y >= 60 (high-income women constraint, in millions)
2X + 3Y >= 36 (high-income men constraint, in millions)
X >= 0 (sign restriction)
Y >= 0 (sign restriction)
Like the Bridgeway problem, the K9 Kondo problem has a feasible region, but K9
Kondo's feasible region, unlike Bridgeway's, contains points for which the value
of at least one variable can assume arbitrarily large values. Such a feasible
region is sometimes called an unbounded feasible region, but it is referred to
here as simply the feasible region.
Line AB, which represents the plot of constraint 6X + 3Y >= 60, is determined by
first plotting the end points of the line 6X + 3Y = 60. Setting first Y and then X
equal to 0, we have:
6X = 60 3Y = 60
X = 10 Y = 20
Next, the constraint 2X + 3Y >= 36 is plotted by first plotting the end points of the
line 2X + 3Y = 36. Again, setting first Y and then X equal to 0, we have:
2X = 36   3Y = 36
X = 18    Y = 12
STEP 2: SEARCH FOR THE OPTIMAL SOLUTION. Note that instead of isoprofit
lines these are isocost lines. The objective function is
min Z = $200X + $50Y
and the marketing manager's goal is to minimize total advertising costs. Consequently,
feasible values for X and Y that minimize Z must be chosen. Thus, the
optimal solution to the K9 Kondo LP problem is the point in the feasible region
with the smallest Z-value.
Consider an arbitrary cost of $1,800,000. That is, Z = $1,800 (in thousands of
dollars) and the isocost line is
$1,800 = $200X + $50Y
as shown in Exhibit 16-8. Another parallel isocost line, $1,400 = $200X + $50Y, is
also shown in Exhibit 16-8. Thus, the direction of minimum cost (i.e., decreasing
Z) is toward the southwest; that is, downward and to the left. At a cost of
$800,000 (Z = $800), the isocost line is beyond the feasible region and therefore
does not represent a feasible solution. The optimum isocost line is the one that
intersects point B, because this is the farthest southwest point in the feasible
region. Thus, point B is the optimal solution to the K9 Kondo problem. Or stating
it another way, point B has the smallest Z-value of any point in the feasible
region.
Notice that the set of feasible solutions has three corner points: B, E, and C.
B = (0, 20)
C = (18, 0)
Corner point E is found by solving the two constraint equations simultaneously:
6X + 3Y = 60
-2X - 3Y = -36
4X = 24
X = 6
Substituting X = 6 into 2X + 3Y = 36 gives Y = 8, so E = (6, 8).
Now, the corner points of BEC are tested. The three corners will yield different
mixes of one-minute commercials on “New York Dog Show,” represented by X,
and on “Man's Best Friend,” represented by Y. The calculations are presented in
Exhibit 16-9. The optimal advertising plan is to purchase 20 one-minute
commercials on “Man's Best Friend” and zero one-minute commercials on “New
York Dog Show.” The total optimal advertising cost is $1,000,000.
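The same corner-point check works for the minimization case. Scoring the three corners B, E, and C named above (a sketch; costs are in thousands of dollars, as in the text):

```python
def cheapest_corner(corners, cost):
    """For a minimization LP, the optimum lies at the corner with the lowest cost."""
    return min(corners, key=cost)

# Corner points (X, Y) = (commercials on "New York Dog Show", on "Man's Best Friend").
corners = [(0, 20), (6, 8), (18, 0)]    # B, E, C
cost = lambda p: 200 * p[0] + 50 * p[1]  # cost in thousands of dollars
best = cheapest_corner(corners, cost)
print(best, cost(best))  # (0, 20) 1000, i.e. $1,000,000 total
```

Corner B costs $1,000 thousand, E costs $1,600 thousand, and C costs $3,600 thousand, which confirms the optimal plan of 20 commercials on "Man's Best Friend" and none on "New York Dog Show."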
Solving LP problems graphically is only practical when there are two decision
variables. Moreover, the graphical method becomes cumbersome when there
are many constraints. Real-world LP problems typically have thousands of corner
points.
The reigning champ for handling such problems is the simplex method, devised
in 1947 by George B. Dantzig of Stanford University.2 The simplex method
provides an iterative algorithm that systematically locates feasible corner points
that will improve the objective function value until the optimal solution is reached.
Regardless of the number of decision variables and constraints, the simplex
algorithm applies the key characteristic of any LP problem: An optimal solution
always occurs at a corner point of the feasible region. The simplex algorithm
finds corner-point solutions, tests them for optimality, and stops once an optimal
solution is found.
The numbers used in daily life are called scalars. Scalars are simply single
numbers, or variables used to identify single numbers. A number such as 8 is a
scalar.
People who have used a spreadsheet such as Excel or who have done any
computer programming already have a good understanding of the concept of
a matrix. A matrix is a rectangular array of numbers having m rows and n
columns; it is typically contained in brackets. For instance, one can refer to the 2
x 4 matrix [A] and identify the individual numbers with a subscripted lower-case
a, such as aij. In the following matrix, the subscripts i and j identify the row and
column, respectively, of each matrix entry:
To multiply a row vector by a matrix, the vector must have the same number of
columns as the matrix has rows; otherwise, the operation is impossible. The
following illustration shows how this multiplication is performed:
[c][A] = [(c1a11 + c2a21 + c3a31) (c1a12 + c2a22 + c3a32) (c1a13 + c2a23 + c3a33) (c1a14 + c2a24 + c3a34)]
The entries in each column of the matrix are multiplied by the entries in the
vector, then summed to produce the entries in the resulting row vector.
Finally, to subtract a row vector from a row vector, the first entry of the second
vector is subtracted from the first entry of the first vector, which yields the first
entry of the resulting row vector. The second entries of the vectors are then
subtracted, yielding the second entry of the result, and so on until all entries have
been subtracted:
[c] - [d] = [c1 c2 c3] - [d1 d2 d3] = [(c1 - d1) (c2 - d2) (c3 - d3)]
The number of nonzero rows in a matrix is known as its rank. A matrix column
containing a single one (1) in any position, with the remaining column entries
being zeros, is known as an elementary column. A matrix is said to be in row-
reduced form if the number of elementary columns is equal to the rank of the
matrix. The following matrices [B], [C], and [D] serve as examples:
How might matrix [C] be put into row-reduced form? That is, how can one more
elementary column be created? Since each row of a matrix represents the
coefficients of an equation, all the numbers in any row of a matrix can be
multiplied by an appropriate constant without disrupting the meaning of the
matrix. Thus, a particular value can be changed to a 1. Also, since the matrix
represents a system of simultaneous linear equations, rows can legally be added
or subtracted from each other. Consequently, the other values can be changed
to zeros in the column where the 1 is. Multiplying the first row of matrix [C] by 0.5
gives:
which produces a 1 in the first position of row 1. Next, row 1 is subtracted from
row 2 to obtain:
which produces the needed zeros in the first column. Matrix [C] is now in row-
reduced form.
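The two row operations described above can be sketched in Python. The matrix below is hypothetical, standing in for [C], whose exhibit is not reproduced here (only its first entry, 2, is implied by the text):

```python
def scale_row(matrix, row, factor):
    """Multiply every entry of one row by a constant (a legal row operation)."""
    matrix[row] = [factor * x for x in matrix[row]]

def subtract_row(matrix, target, source):
    """Subtract one row from another (also a legal row operation)."""
    matrix[target] = [t - s for t, s in zip(matrix[target], matrix[source])]

# A hypothetical 2 x 3 matrix standing in for [C].
C = [[2.0, 4.0, 6.0],
     [1.0, 3.0, 5.0]]
scale_row(C, 0, 0.5)   # produce a 1 in the first position of row 1
subtract_row(C, 1, 0)  # produce a 0 below it, completing the elementary column
print(C)  # [[1.0, 2.0, 3.0], [0.0, 1.0, 2.0]]
```

After the two operations, the first column is elementary (a single 1 with zeros elsewhere), which is exactly the effect described in the text.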
Pivoting a Matrix
The simplex algorithm can be used to solve LP problems in which the goal is to
maximize the objective function. The following example is necessarily simple to
illustrate the mechanics of the algorithm; it could easily be solved graphically.
The method is the same for more complex problems. The simplex solution of a
minimization LP problem is described in a later section.
The AeroTech machine shop has time available on three machines, and the
shop's owner wishes to schedule production of two types of fastening pins. The
owner's objective is to maximize the profit resulting from the proposed production
run.
Lathe A is used for rough turn of the pin stock and has 50 hours of time
available. Lathe B is used to finish turn the fastening pins and has 36 hours
available. The third machine, grinder G, is used to finish grind each pin, thereby
completing the production process. The grinder has 81 hours available.
Manufacturing times for pin lots, in hours, are summarized as follows:
Machine Lot Times (hours)
              A     B      G
Pin Type 1   10     6    4.5
Pin Type 2    5     6   18.0
AeroTech's profit on these pins is $9 per lot for Type 1 and $7 per lot for Type 2.
Before the simplex algorithm can be applied, the LP problem must be set up
using the four steps introduced in the graphical method section.
STEP 2: DEFINE THE OBJECTIVE FUNCTION. The machine shop owner's goal
can be expressed by the following objective function equation:
max Z = $9X1 + $7X2
This equation has one term for the profit generated by producing pin Type 1 and
another term for the profit generated by producing pin Type 2. Together, they
equal AeroTech's profit, Z, which is to be maximized.
Exhibit 16-10 summarizes the complete LP problem for the AeroTech machine shop.
So far, the procedure has been the same as for the graphical method described
at the beginning of this chapter. Now, six additional steps, known as the simplex
algorithm, are performed to arrive at an optimal solution.
Choose production levels for Type 1 pins (X1) and Type 2 pins (X2) that:
max Z = $9X1 + $7X2 (objective function)
and satisfy the following:
10X1 + 5X2 <= 50 (lathe A time constraint)
6X1 + 6X2 <= 36 (lathe B time constraint)
4.5X1 + 18X2 <= 81 (grinder G time constraint)
X1 >= 0 (sign restriction)
X2 >= 0 (sign restriction)
The physical meaning of the slack variables in the AeroTech problem is the
remaining spare machine time given a particular solution for X1 and X2. Ideally,
given an optimum solution, X3, X4, and X5 would all be zero, but this is seldom
the case. For our purposes here, however, the slack variables merely provide a
convenient way of converting inequalities to equalities.
max Z = [c][x]
where [c] contains the coefficients in the objective function (all slack variable
coefficients are zero). The objective function must satisfy:
[A][x] = [b]
[x] >= 0
[b] >= 0
where [x] is the variable vector, [A] is the matrix of constraint coefficients for the
variables, and [b] is the right-hand side vector from the constraint equation.
max Z = [c][x]
and satisfy:
[A][x] = [b]
One should make certain that the original objective function and constraints from
these matrices can be recreated. Also, notice that matrix [A] is row-reduced, a
necessary condition of the standard form for the simplex method.
Simplex Indicators
The matrix [A] and the vectors [b] and [c] were defined in step 5. The vector [c*]
is a subset of [c] containing the coefficients of the variables that are currently
defined by elementary columns in matrix [A]. For the initial tableau, the initial
slack variables' coefficients are in [c*]. This will become clearer as the simplex
tableau of Exhibit 16-11 is filled in.
Two columns have been added to the left-hand side of the tableau in the exhibit.
The leftmost column simply indicates which decision variables are currently
elementary. Across the table from each of these variables, one finds the value of
1 in an elementary column in [A]. The same variable appears at the top of that
elementary column.
The next column from the left contains the values of [c*]. In this first tableau, the
variables in the leftmost column are just the slack variables. In [c], the slack
variables all have coefficients of zero, so in this first tableau, the subset vector
[c*] is:
[c*] = [0 0 0]
The operations [c*] [A] - [c] and [c*] [b] can now be carried out, as shown in the
exhibit, and the rest of the tableau completed. The portion of the tableau
corresponding to [c*] [b] contains the value of the objective function (the profit, or
Z, in this case) for the current solution.
The portion of the tableau corresponding to [c*] [A] - [c] contains the simplex
indicators for the current solution. Simplex indicators show in each column how
much Z will decrease per unit increase of the variable. These indicator numbers
are vital for the next four steps of the simplex algorithm.
      [c*]    X1     X2    X3   X4   X5   [b]    Quotients
 X3     0     10      5     1    0    0    50    50/10 = 5    Minimum quotient (pivot row; pivot entry = 10)
 X4     0      6      6     0    1    0    36    36/6 = 6
 X5     0     4.5    18     0    0    1    81    81/4.5 = 18
               ^ pivot column

[c] = [9 7 0 0 0]        [c*] = [0 0 0]
[c*][A] - [c] = [0 0 0 0 0] - [9 7 0 0 0] = [-9 -7 0 0 0]
[c*][b] = [0 0 0][50 36 81]' = 0
To continue with the AeroTech problem, the bottom of Exhibit 16-11 should be
revisited. Two of the simplex indicators are negative; therefore, step 7 is not
satisfied. There is at least one positive value in the columns above these
negative indicators, so step 8 is not satisfied. Therefore, we must proceed to step
9 and generate a new tableau.
To create the new tableau, an entry in the current tableau is selected about
which to pivot. First, the most negative simplex indicator having a positive value
above it in [A] is selected (-9 in the exhibit). In essence, the greatest negative
value indicates which variable will increase Z by the greatest rate. The column
above this indicator is the pivot column.
Second, for each positive entry in the pivot column in [A], the entry in [b] on that
row is divided by the entry in the pivot column of [A], and the resulting quotient is
noted (three such quotients are shown in the exhibit, one for each row of [A] ).
Each of these quotients is the largest value the pivot column variable can be
without exceeding the constraint of that row.
Exhibit 16-12 shows the second tableau for AeroTech. Since [c*] is not needed
beyond the first tableau, the leftmost columns are omitted. To create this second
tableau, the entire first tableau is transformed by the pivoting process described
earlier. This pivoting also affects the entire last row of the tableau, and the right-
hand column. All values in the pivot row are divided by the pivot entry, which
leaves a 1 in the pivot position. Then, multiples of the resulting pivot row are
subtracted from the tableau's other rows to leave zeros in the pivot column.
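The row operations just described (scale the pivot row, then clear the rest of the pivot column) can be sketched as a small helper. The tableau layout, with the indicator row last and [b] in the final column, is an assumption of this sketch:

```python
import numpy as np

def pivot(T, row, col):
    """Pivot tableau T about entry (row, col): scale the pivot row so the
    pivot entry becomes 1, then subtract multiples of it from every other
    row so the rest of the pivot column becomes 0."""
    T = T.astype(float).copy()
    T[row] /= T[row, col]
    for i in range(T.shape[0]):
        if i != row:
            T[i] -= T[i, col] * T[row]
    return T

# First AeroTech tableau (constraint rows plus indicator row; [b] last).
T1 = np.array([[10,  5,  1, 0, 0, 50],
               [ 6,  6,  0, 1, 0, 36],
               [4.5, 18, 0, 0, 1, 81],
               [-9, -7,  0, 0, 0,  0]])
T2 = pivot(T1, 0, 0)                   # pivot on the 10 in row 0, column X1
print(np.round(T2[3], 4).tolist())     # [0.0, -2.5, 0.9, 0.0, 0.0, 45.0]
```

The printed row matches the indicator row of the second tableau in Exhibit 16-12.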
Once again, the second tableau does not satisfy steps 7 and 8, so a third tableau
must be generated, starting with the selection of a pivot entry. The only negative
simplex indicator in the exhibit is -2.5, so that column becomes the pivot column.
Recomputing the three quotients and finding their minimum results in the second
row being chosen as the pivot row. The pivot entry (3) is circled.
Exhibit 16-13 shows the third tableau produced by pivoting the second tableau
around its pivot entry. Note that in the third tableau, none of the simplex
indicators are negative, satisfying step 7 of the algorithm. Therefore, this tableau
represents an optimal solution to the AeroTech problem, and no further iterations
are necessary.
The solution values in the tableau in Exhibit 16-13 are read as follows. First, the
objective function (Z) value from the lower right corner of the tableau is read, in
this case, 50. AeroTech can expect to make a profit of $50 from the optimal
production run. But what is this optimal production run? How many fastening pins
of Types 1 and 2 should be produced? This question is answered in the
rightmost [b] column of the tableau.
      X1    X2      X3     X4   X5   [b]    Quotients
      1     0.5     0.1    0    0     5     5/0.5 = 10
      0     3      -0.6    1    0     6     6/3 = 2     Minimum quotient (pivot row; pivot entry = 3)
      0    15.75   -0.45   0    1    58.5   58.5/15.75 = 3.71
      0    -2.5     0.9    0    0    45
            ^ pivot column
First, find the elementary columns in [A] of the third tableau. Then, note the
decision variables to which these columns correspond and the row on which
each column's 1 is located. For instance, the first column of the tableau is
elementary and corresponds to the variable X1. This column has a 1 in the first
row. It is now possible to read across this row to the far right. The value of 4
appears in the first row of [b], meaning that four lots of pin Type 1 should be
produced.
Similarly, the value of 2 appears in the second row of [b], meaning that two lots
of pin Type 2 should be produced. The value of 27 in row three of [b] means that
27 hours of unused (slack) time remain on grinder G because the third
elementary column corresponds to X5. Having completed the LP problem,
AeroTech's owner can now run the machines to this optimal schedule.
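Steps 6 through 9 can be combined into a single loop. The following is a compact, illustrative sketch of the whole algorithm for max [c][x] subject to [A][x] <= [b] (the function name and internal layout are the sketch's own, not the text's):

```python
import numpy as np

def simplex_max(A, b, c):
    """Tableau simplex for: max c.x subject to A x <= b, x >= 0 (with b >= 0)."""
    m, n = A.shape
    # Build the tableau [A | I | b] with the indicator row [-c | 0 | 0] last.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    while T[-1, :-1].min() < 0:              # step 7: stop when no negative indicator
        col = int(np.argmin(T[-1, :-1]))     # most negative indicator -> pivot column
        positive = T[:m, col] > 0
        if not positive.any():               # step 8: problem is unbounded
            raise ValueError("unbounded LP")
        ratios = np.full(m, np.inf)
        ratios[positive] = T[:m, -1][positive] / T[:m, col][positive]
        row = int(np.argmin(ratios))         # minimum-quotient rule -> pivot row
        T[row] /= T[row, col]                # step 9: pivot
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]
    x = np.zeros(n)
    for j in range(n):                       # read off the basic decision variables
        column = T[:m, j]
        if np.isclose(column, 0).sum() == m - 1 and np.isclose(column.max(), 1):
            x[j] = T[int(np.argmax(column)), -1]
    return x, T[-1, -1]

# AeroTech: max 9X1 + 7X2 subject to the three machine-time constraints.
x, z = simplex_max(np.array([[10.0, 5], [6, 6], [4.5, 18]]),
                   np.array([50.0, 36, 81]),
                   np.array([9.0, 7]))
print(x, z)   # approximately [4. 2.] and 50.0
```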
Sensitivity Analysis
When an optimal solution is reached, management would often like to know how
the optimal values would react to a change in the initial formulation of the LP
problem, but it is not practical to rework the entire problem for each possible
change. Fortunately, the information can be obtained directly through an
analytical approach called sensitivity analysis (also referred to as postoptimality
analysis). Sensitivity analysis basically looks at the question of “what if” a
variable is different from that originally estimated. The widespread use of
computers has made sensitivity analysis a common extension of linear
programming. Most linear programming computer packages include the results
of sensitivity analysis as a part of the normal printout.
The Third and Final Simplex Tableau for the AeroTech Problem
      X1   X2    X3      X4      X5   [b]
      1    0     0.2    -0.167   0      4
      0    1    -0.2     0.33    0      2
      0    0     2.7    -5.2     1     27
      0    0     0.4     0.833   0     50   <- Optimal solution (no negative simplex indicators)
Shadow Prices
A shadow price represents the change in the objective function that would result
from the addition or reduction of one unit of a resource, such as machine time or
labor time. Shadow pricing, a form of sensitivity analysis, shows how sensitive
the optimal value of the objective function would be to adding or reducing
resources. For example, is it worthwhile to pay workers an overtime rate? If the
increase in overtime pay is $1,000 and results in an increase of $800 in the
optimal objective function, the addition of overtime work is not worthwhile.
The shadow price “value” of adding one additional unit of a resource can be
readily determined by examining the last row of the final tableau. Each value is
the shadow price for that variable. For example, as shown in Exhibit 16-13, the
shadow price of slack variable X3 is 0.4. Since slack variable X3 is directly
associated with constraint 1, this means that a one-hour increase in Lathe A's
time would result in an increase in Z of $0.40.
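The $0.40 figure can be checked by re-solving the problem with one extra hour on Lathe A. At the optimum, constraints 1 and 2 are binding, so a 2x2 solve suffices (a sketch; it assumes the same two constraints remain binding after the one-hour change):

```python
import numpy as np

def profit(lathe_a_hours):
    # Constraints 1 and 2 bind at the optimum: 10X1 + 5X2 = b1, 6X1 + 6X2 = 36.
    x1, x2 = np.linalg.solve(np.array([[10.0, 5], [6, 6]]),
                             np.array([lathe_a_hours, 36.0]))
    return 9 * x1 + 7 * x2

print(round(profit(51) - profit(50), 4))   # 0.4 -> one extra hour adds $0.40 to Z
```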
For a minimization problem, the standard form requires:

[E][x] >= [b]
[x] >= 0
[b] >= 0

where [E] contains the constraint coefficients for the variables, not including
slack variables.

The first simplex tableau could be formed around the subset vector [c*] = 0,
which relates to starting at the origin in a graphical plot of the feasible
region. Most maximization problems will fit this form nicely, but the majority
of minimization problems will not contain the origin as a feasible point,
because they typically have greater-than-or-equal (>=) constraints.
A quick review of Exhibit 16-2 (a maximization problem) shows that the origin (0,
0) is a corner point of the feasible region, but in Exhibit 16-7 (a minimization
problem) the origin is outside the feasible region. Thus, such a simplex
minimization problem cannot be started from the origin because it is not a
feasible point.
First, the problem is converted to standard matrix form by subtracting the slack
variables J and K from the constraints. Then, the artificial variables M and N
are added to produce the following equality constraints:

6X + 3Y - J + M = 60
2X + 3Y - K + N = 36

For phase one, a "rigged" objective function is used, which contains only the
artificial variables. The use of these artificial variables and the rigged
objective function may seem strange and, for the purposes here, will have to be
taken on faith. In matrix form, the rigged objective function is:

[a] = [0 0 0 0 1 1]
The first tableau is at the top of Exhibit 16-14. The vector [a*] contains the
coefficients of the artificial variables from the rigged objective function since the
artificial variables are defined by the elementary columns in matrix [A]. The
simplex indicators are then computed, and the value of the objective function is
determined as shown.
Pivoting twice yields the tableau at the bottom of Exhibit 16-14. Because there
are now no positive simplex indicators, phase one is completed. The value of the
objective function in the lower right corner of the tableau gives valuable
information. In the tableau, this value is zero, indicating that there is a solution to
the problem if one wishes to proceed with phase two. If the value were nonzero,
no solution to the LP problem would exist, and the process would not be
continued.
Phase one found a corner point in the feasible region that permits the simplex
method to optimize the minimization LP problem. Exhibit 16-15 shows the first
tableau of phase two. The columns of [A] corresponding to the artificial variables
M and N are simply deleted and the rows are reordered, if necessary, so the 1's
of the elementary columns are in the same order as the objective function
coefficients. The original objective function is, of course, restored to read in
matrix form.
[a] = [0 0 0 0 1 1]        [a*] = [1 1]
[a*][A] - [a] = [8 6 -1 -1 1 1] - [0 0 0 0 1 1] = [8 6 -1 -1 0 0]

      [a*]    X    Y    J    K    M    N   [b]   Quotients
      M  1    6    3   -1    0    1    0    60   60/6 = 10   Minimum quotient (pivot row; pivot entry = 6)
      N  1    2    3    0   -1    0    1    36   36/2 = 18
              8    6   -1   -1    0    0    96
              ^ pivot column

      X    Y      J       K    M       N   [b]   Quotients
      1    0.5   -0.167   0    0.167   0    10   10/0.5 = 20
      0    2      0.333  -1   -0.333   1    16   16/2 = 8   Minimum quotient (pivot row; pivot entry = 2)
      0    2      0.333  -1   -1.333   0    16
           ^ pivot column
      X    Y    J       K      M       N     [b]
      1    0   -0.25    0.25   0.25   -0.25    6
      0    1    0.167  -0.5   -0.167   0.5     8
      0    0    0       0     -1      -1       0   <- The value is zero; therefore a solution to the LP exists.
Creation of the First Tableau in Phase Two of the Simplex Minimization Problem
      [c*]     X    Y    J       K      M       N     [b]
      X  200   1    0   -0.25    0.25   0.25   -0.25    6
      Y   50   0    1    0.167  -0.5   -0.167   0.5     8

[c] = [200 50 0 0]        [c*] = [200 50]
[c*][A] - [c] = [200 50 -41.67 25] - [200 50 0 0] = [0 0 -41.67 25]
[c*][b] = 200(6) + 50(8) = 1,600

      [c*]     X    Y    J        K                      [b]
      X  200   1    0   -0.25     0.25  <- pivot entry     6
      Y   50   0    1    0.167   -0.5                      8
               0    0  -41.67    25                     1,600
                                  ^ pivot column
Noting that the elementary columns in [A] correspond to the variables X and Y,
the subset vector [c*] can be determined and the simplex indicators recalculated
as shown in the exhibit. This leaves the complete first tableau of phase two
shown at the bottom. Note that the leftmost columns have been eliminated from
the tableau, including the [a*] column. Also, note that no row ordering was
necessary since the 1's of the elementary columns were in the same order as the
objective function coefficients.
There is one positive simplex indicator in the tableau, which is 25. In the [A]
column above this indicator, there is only one positive value, 0.25, which
automatically becomes the pivot value. In this case, it is not necessary to
compute quotients to determine the pivot row because there is only one positive
pivot column value to choose from. Exhibit 16-16 shows the second tableau
obtained by pivoting the first.
The Second and Final Tableau in Phase Two of the Simplex Minimization Problem

      X      Y    J        K    [b]
      4      0   -1        1     24    <- Solution for K
      2      1   -0.333    0     20    <- Solution for Y
   -100      0  -16.67     0  1,000    <- Z (no simplex indicator is positive: minimal)

As a check, substituting into the second original constraint:

2X + 3Y + 0J - 1K = 36
2X + 3(20) + 0 - 1(24) = 36
X = 0
All simplex indicators are nonpositive so the solution is now minimized and the
simplex algorithm terminates. The location of the elementary columns in this
tableau should be studied. The number 24 in [b] is the optimal value of the slack
variable K. The number 20 in [b] is the optimal value of the decision variable Y.
The tableau provides no value for the decision variable X, so its optimal value is
zero. Finally, the value of the objective function is 1,000 (or $1,000,000). Note
that the simplex solution is the same as that found graphically earlier in the
chapter for the K9 Kondo Company problem.
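Because the optimum must lie at a corner of the feasible region, the K9 Kondo result can be double-checked by brute force: intersect the constraint boundaries pairwise, keep the feasible intersections, and evaluate Z at each. A sketch, with the constraint data taken from the tableaus above:

```python
import numpy as np
from itertools import combinations

# K9 Kondo problem: min Z = 200X + 50Y
# subject to 6X + 3Y >= 60, 2X + 3Y >= 36, X >= 0, Y >= 0.
boundaries = [([6.0, 3.0], 60.0),
              ([2.0, 3.0], 36.0),
              ([1.0, 0.0], 0.0),
              ([0.0, 1.0], 0.0)]

def feasible(x, y, eps=1e-9):
    return (6*x + 3*y >= 60 - eps and 2*x + 3*y >= 36 - eps
            and x >= -eps and y >= -eps)

corners = []
for (a1, r1), (a2, r2) in combinations(boundaries, 2):
    M = np.array([a1, a2])
    if abs(np.linalg.det(M)) > 1e-12:          # skip parallel boundaries
        x, y = np.linalg.solve(M, np.array([r1, r2]))
        if feasible(x, y):
            corners.append((float(x), float(y)))

best = min(corners, key=lambda p: 200*p[0] + 50*p[1])
print(best, 200*best[0] + 50*best[1])   # (0.0, 20.0) with Z = 1000.0
```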
If the use of linear programming in industry has been restricted, it has been
because of two difficulties: (1) the cost of collecting the necessary input data and
(2) the cost of solving large LP problems. The first of these roadblocks is being
removed as many firms develop integrated information and database systems.
Since the solution of LP problems is purely mechanical, these problems are best
assigned to the computer. Therefore, rapid reductions in the cost of computer
hardware are removing the second roadblock.
Prior to 1984, most scientists and mathematicians thought that the simplex
method was as far as they could go. A number of relatively easy-to-use decision
support packages based on the simplex method have been available for years.
Even when LP problems are not terribly complex, however, solving them can
chew up so much computer time that the answer is useless before it is found.
Testing has shown that Karmarkar's method is many times faster than the classic
simplex method. An implementation of Karmarkar's method outperformed one
implementation of the simplex method by a factor of over 50 on medium-scale
problems of 5,000 variables. AT&T (Bell Labs' parent) sold the first software
product based on Karmarkar's method to the US Air Force's Military Airlift
Command (MAC).
On a typical day, thousands of Air Force planes ferry cargo and passengers
among airfields scattered around the world. Determining how to fly various
routes, deciding which aircraft should be used, and scheduling pilots and ground
personnel are the primary functions of the MAC. Getting all the pieces to play
together is a classic challenge in linear programming. In fact, the MAC's LP
problem contains upward of 150,000 variables and 12,000 constraints. If a
computer could wring out just a couple of percentage points of added efficiency,
it would be worth millions of dollars. Karmarkar's method has enabled the MAC
to do just that. In fact, the most common software for scheduling classes at
colleges and universities uses this method to develop schedules which match
rooms, teachers and students.
Each of the dome's corners is a possible solution. The task is to find which one
holds the best solution. With the simplex method, the program “lands” on one
corner and inspects it. Then it scouts the adjacent corners to see if there is a
better answer; if so, it heads off in that direction. The procedure is repeated at
every corner until the program finds itself boxed in by worse solutions.
Perhaps the greatest benefits will be in the use of simulation for everyday
operations. Problems that used to take hours of expensive time on the
corporate mainframe can now be performed in minutes, or possibly even
performed on a desktop microcomputer in a short time. These benefits can only
be a boon to productivity and cost management.
The major goals of this chapter were to enable you to achieve four learning
objectives:

Learning objective 1. Explain the basic components of a linear programming
problem:

Decision variables
Constraints
Feasible region
Learning objective 2. Describe the graphical method, and apply it in solving both
maximization and minimization linear programming problems.
The optimal solution for a maximization problem is found by moving the objective
function, which is an isoprofit line, away from the origin until it intersects the
extreme corner point of the feasible region, which is a polygon. The optimal
solution occurs where the isoprofit line intersects the extreme corner point, or
where the isoprofit line overlays one of the boundaries.
Learning objective 3. Describe the simplex method, and use it in solving both
maximization and minimization linear programming problems.
Most real-world LP problems have more than two variables and are therefore too
complex for the graphical method. The simplex method is a general-purpose
algorithm that is widely used to solve multivariable and multiconstraint LP
problems.
Unlike the simplex method, which travels along the edges of the feasible region
from corner point to corner point, Karmarkar's method starts from a point within
the multidimensional feasible region. Although the simplex method will likely
continue to be used for many LP problems, software that supports Karmarkar's
method is already being used by a number of companies as well as federal
government agencies.
IMPORTANT TERMS
Constraint A limit to the degree to which an objective can be pursued.
Decision variables Represent choices available to decision makers in
terms of amounts of either inputs or outputs.
Elementary column A matrix column containing a single one (1) in any
position, with the remaining column entries being zeros.
Feasible region A feasible solution space that contains the set of all
possible combinations of decision variables.
Graphical method An approach to optimally solving LP problems
involving two decision variables and a limited number of constraints.
Isocost lines A set of parallel lines that represent the objective function
of an LP problem. They indicate constant amounts of cost at various
solution values. They are used to solve an LP minimization problem
graphically.
Isoprofit lines A set of parallel lines that represent the objective
function of an LP problem. They indicate constant amounts of profit at
various solution values. They are used to solve an LP maximization
problem graphically.
Karmarkar's method An approach to optimally solving large-scale LP
problems efficiently. It starts from a point within the multidimensional
feasible region and finds the optimal solution by taking a shortcut that
avoids the tedious surface route of the simplex method.
Linear equation An algebraic equation whose variable quantity or
quantities are in the first power only and whose graph is a straight
line.
Linear programming (LP) An application of matrix algebra used to
solve a broad class of problems that can be represented by a system
of linear equations. It is used to determine the best allocation of an
organization's limited resources.
DEMONSTRATION PROBLEMS
                        Product A                  Product B
Raw materials           $4                         $8
Direct labor            1 DLhr @ $6      6         2 DLhr @ $6      12
Variable overhead       0.5 Mhr @ $16    8         2 Mhr @ $8       16
Fixed overhead          1.5 Mhr @ $10   15         3 Mhr @ $10      30
Required:
a. Develop the objective function that will maximize Marlowe's
contribution margin (CM).
b. Develop the constraint function for the direct labor.
c. Develop the constraint function for the machine capacity.
a. The objective function that will maximize Marlowe's total contribution margin
(CM):
The total variable unit cost of product A is $18 ($4 raw materials + $6 direct labor
+ $8 variable overhead). The CM is $31.90 ($49.90 unit sales price - $18
variable cost). Similarly, the total variable unit cost of product B is $36 ($8 raw
materials + $12 direct labor + $16 variable overhead), and the CM is $48.50
($84.50 unit sales price - $36 variable cost). Thus, the objective function that
maximizes the total CM from both products is:

max CM = 31.90A + 48.50B
A + 2B <= 800,000
Because 800,000 direct labor hours are available, the function must be equal to
or less than 800,000. Every unit of product A requires 1 hour of direct labor, and
every unit of product B requires 2 direct labor hours.
Because 250,000 hours of machine time are available, the function must be equal
to or less than 250,000 machine hours. Every unit of product A requires 0.5
hours, and every unit of product B requires 2 hours:

0.5A + 2B <= 250,000
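The contribution margin arithmetic above can be verified in a few lines:

```python
# CM per unit = unit sales price - total variable cost (raw materials,
# direct labor, variable overhead; fixed overhead is excluded from CM).
cm_a = round(49.90 - (4 + 6 + 8), 2)
cm_b = round(84.50 - (8 + 12 + 16), 2)
print(cm_a, cm_b)   # 31.9 48.5
```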
Office Designs manufactures and sells two kinds of desktop pen and pencil sets.
The Executive (E) is a high-quality set, while the Clerical (C) is of somewhat
lower quality. The contribution margin (CM) is $8 for each Executive set sold
and $2 for each Clerical set sold. Each Executive set requires twice as much
manufacturing time as is required
for a Clerical set. If only Clerical sets are made, the company has the capacity to
manufacture 1,200 sets daily. Enough pen and pencil components are available
to make 800 sets daily of Executive and Clerical combined. Executive requires a
special marble pedestal, of which only 500 per day are available. Clerical
requires a metal pedestal, of which 700 per day are available. The company can
sell all the Executive and Clerical sets that it produces.
Required:
a. Formulate the problem.
b. Use the graphical method to find the optimal solution.
c. Management wants to know what the optimal solution will be if the
number of available marble pedestals is reduced to 400. Prepare a
graph showing this postoptimal solution.
a. The objective function maximizes the total contribution margin (CM), where the
CM is $8 for each Executive (E) set and $2 for each Clerical (C) set. Therefore,
the objective function for Office Designs is:
max CM = 8E + 2C
First, manufacturing capacity limits daily production: if only Clerical sets are
made, 1,200 sets can be produced, and each Executive set takes twice as much
manufacturing time as a Clerical set. Measured in Clerical-set equivalents, this
constraint is:

2E + C <= 1,200
Second, enough pen and pencil components are available to produce 800
desktop sets of any combination daily. This constraint is stated as:
E + C <= 800
Third, the pedestals for each set are also limited in supply. Only 500 marble
pedestals are available daily to make E, and only 700 metal pedestals are
available daily for C. Mathematically, these two constraints become:
E <= 500
C <= 700
Putting all the preceding material together, the LP problem is formulated as:
max CM = 8E + 2C where
2E + C <= 1,200
E + C <= 800
E <= 500
C <= 700
E >= 0
C >= 0
b. Because there are only two variables, this LP problem lends itself to the
graphical method. The constraints are graphed first because they will determine
what solutions to the problem are possible. The feasible solution boundaries are
obtained by graphing the inequalities as if they were equalities and then noting
where the solution must lie relative to the equation. For example, the first
constraint, 2E + C <= 1,200, is graphed as 2E + C = 1,200. If C = 0,
2E = 1,200, and E = 600. If E = 0, C = 1,200. Thus, the end points that are
used to draw the 2E + C <= 1,200 constraint line are E = 600 and C = 1,200.
This constraint line as well
as all the others are shown in the following graph. When all constraints are
simultaneously enforced, the shaded area results. This area is the feasible
region. It represents the set of all feasible solutions to Office Designs' LP
problem.
The total CM is maximized at $4,400, with 500 Executive and 200 Clerical
desktop pen and pencil sets being produced. This maximum profit is represented
by the isoprofit line that intersects corner point 3.
c. Once the optimal solution is reached, it is important to know how the solution
will change based on a change in the initial formulation. This is achieved by
employing sensitivity or postoptimality analysis. In the case of Office Designs,
management wants to know what the optimal solution will be if the number of
marble pedestals is reduced to 400. Such a change causes the original
constraint line E <= 500 to shift leftward, reducing the feasible region, as
shown in the following graph. The revised optimal solution occurs at corner
point 4, where 400 sets of E and 400 sets of C are the optimum number of sets
to produce. The total CM at this corner point is $8(400) + $2(400) = $4,000.
The total CM is less than could be made before, but it is still the best
possible solution when the supply of marble pedestals is reduced to 400.
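Both the original and the postoptimal answers can be confirmed with the same corner-point idea: enumerate the intersections of the constraint boundaries, discard infeasible points, and take the corner with the highest CM. The helper below is this sketch's own construction:

```python
from itertools import combinations

def best_corner(marble_pedestals):
    # Constraints in the form aE*E + aC*C <= r (last two encode E >= 0, C >= 0).
    cons = [(2, 1, 1200), (1, 1, 800), (1, 0, marble_pedestals), (0, 1, 700),
            (-1, 0, 0), (0, -1, 0)]
    corners = []
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det:                                   # boundaries intersect
            E = (r1 * b2 - r2 * b1) / det         # Cramer's rule
            C = (a1 * r2 - a2 * r1) / det
            if all(a * E + b * C <= r + 1e-9 for a, b, r in cons):
                corners.append((E, C))
    return max(corners, key=lambda p: 8 * p[0] + 2 * p[1])

print(best_corner(500))   # (500.0, 200.0) -> CM = $4,400
print(best_corner(400))   # (400.0, 400.0) -> CM = $4,000
```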
Chemical A costs Moran $3,000 per ton; B costs $3,500 per ton. Moran's
production superintendent has specified that at least 30 tons of A and at least 20
tons of B must be produced during the next month. Moreover, the superintendent
observes that an existing inventory of a highly perishable raw material needed in
both chemicals must be used within 30 days. In order to prevent the loss of this
expensive raw material, Moran must produce a total of at least 70 tons of
chemicals next month.
Required:
a. Formulate the LP problem
b. Solve the LP problem graphically.
a. The objective function minimizes Moran's total cost of production, at $3,000
per ton of A and $3,500 per ton of B:

min Z = 3,000A + 3,500B

where
A >= 30
B >= 20
A+B >= 70
A >= 0
B >= 0
The minimization LP problem is unbounded on the right side and on the top. As
long as it is bounded inward, corner points can be determined. The optimal
solution will always occur at one of the corner points, or along one of the
boundary lines. In the case of Moran's LP problem, there are only two corner
points, 1 and 2. At point 1, A = 50 and B = 20. At point 2, A = 30 and B = 40.
The optimal solution is found at the point yielding the lowest total cost:

Point 1: $3,000(50) + $3,500(20) = $220,000
Point 2: $3,000(30) + $3,500(40) = $230,000

The lowest cost to Moran is at point 1. Thus, Moran should produce 50 tons of A
and 20 tons of B.
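The corner-point cost comparison can also be done in code (a sketch):

```python
# Cost at each corner point of Moran's feasible region.
corners = {"point 1": (50, 20), "point 2": (30, 40)}
cost = {name: 3000 * a + 3500 * b for name, (a, b) in corners.items()}
best = min(cost, key=cost.get)
print(cost)   # {'point 1': 220000, 'point 2': 230000}
print(best)   # point 1
```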
Rock Fellow Oil Refinery refines crude oil into gasoline and diesel. Crude oil
inputs to the refinery can be a maximum of 100 million barrels per quarter, and
the maximum energy usage per quarter is 42 million BTUs. Historical statistics
show that the maximum uptime for the refinery is 75 days per quarter. The diesel
process uses 2 million BTUs, which is twice as much energy as the gasoline
process, and a diesel batch process takes only 3 days compared to 4 days for a
gasoline batch. Each diesel and gasoline batch uses 4 million barrels and 10
million barrels, respectively. Each gasoline batch nets $50,000 and each diesel
batch nets $60,000.
Required:
a. Formulate this maximization problem.
b. Use the simplex method to find the optimal solution.
a. The objective function maximizes the total net contribution (NC), where the
net is $60,000 for each diesel (D) batch and $50,000 for each gasoline (G)
batch. Therefore, the objective function for Rock Fellow (in thousands of
dollars) is:

max NC = 60D + 50G
Three stated constraints exist. First, only 100 million barrels of crude oil can
be input to the refinery. Since D uses 4 million barrels per batch and G uses 10
million barrels per batch, the first constraint can be stated as:

4D + 10G <= 100

Second, energy usage is limited to 42 million BTUs per quarter. Each D batch
uses 2 million BTUs and each G batch uses half as much, 1 million BTU, so the
second constraint is:

2D + G <= 42
Third, crude oil can be refined only 75 days per quarter. Each D batch takes 3
days, and each G batch takes 4 days. Mathematically, this constraint is:
3D + 4G <= 75
Putting the formulation together:

max NC = 60D + 50G

where

4D + 10G <= 100
2D + G <= 42
3D + 4G <= 75
b. Setting up the first tableau, with the formulated problem from above, we have:
      D     G    X1   X2   X3   [b]
      4    10     1    0    0   100
      2     1     0    1    0    42
      3     4     0    0    1    75
    -60   -50     0    0    0     0

where the bottom row contains the simplex indicators:

[c*][A] - [c] = [0 0 0 0 0] - [60 50 0 0 0] = [-60 -50 0 0 0]

and the bottom right value is:

[c*][b] = [0 0 0][100 42 75]' = 0
Now, since the bottom row has negative values, we need to find the pivot entry.
Choosing the first column (D) as the pivot column since its simplex indicator is
the most negative, we calculate the quotients (b/D) to determine the pivot row, as
follows:
      D     G    X1   X2   X3   [b]    b/D
      4    10     1    0    0   100    100/4 = 25
      2     1     0    1    0    42    42/2 = 21    Minimum quotient (pivot row; pivot entry = 2)
      3     4     0    0    1    75    75/3 = 25
    -60   -50     0    0    0     0
      ^ pivot column
      D    G      X1   X2     X3   [b]
      0    8       1   -2      0     16
      1    0.5     0    0.5    0     21
      0    2.5     0   -1.5    1     12
      0  -20       0   30      0   1,260
Note that as a result of the pivot, all values in the pivot column (D) are now 0,
except for the 1 where the pivot value was.
Since the last row still has negative values, another pivot value must be
determined, as shown:
      D    G      X1   X2     X3   [b]     b/G
      0    8       1   -2      0     16    16/8 = 2     Minimum quotient (pivot row; pivot entry = 8)
      1    0.5     0    0.5    0     21    21/0.5 = 42
      0    2.5     0   -1.5    1     12    12/2.5 = 4.8
      0  -20       0   30      0   1,260
           ^ pivot column
Pivoting yields the third tableau:

      D    G    X1        X2      X3   [b]
      0    1    0.125    -0.25     0      2
      1    0   -0.0625    0.625    0     20
      0    0   -0.3125   -0.875    1      7
      0    0    2.5      25        0   1,300

Note that as a result of the pivot, all values in the pivot column (G) are now
0, except for the 1 where the pivot value was. Also, note that the first pivot
column (D) did not change.
Since there are no negative values in the last row, we have converged to a
solution. The value in the bottom right corner is the maximum. In this case, the
maximum net is $1,300,000. (Recall that the objective function used thousands
of dollars.)
What is the product mix? We look to the D column for the lone value of 1, then
look at the value in the rightmost column of that row, in this case, 20. This
means that Rock Fellow should make 20 batches of diesel (D). Doing the same for
G, we see a value of 2; thus, Rock Fellow should make only 2 batches of
gasoline (G).
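As a cross-check on the tableau arithmetic, the two binding constraints at the optimum (crude oil and energy) can be solved as equalities; this assumes, as the final tableau shows, that only the uptime constraint has slack:

```python
import numpy as np

# Binding at the optimum: 4D + 10G = 100 (crude oil), 2D + G = 42 (energy).
D, G = np.linalg.solve(np.array([[4.0, 10], [2, 1]]), np.array([100.0, 42]))
print(D, G)          # 20.0 2.0 batches
print(3*D + 4*G)     # 68.0 -> 7 days of uptime left (<= 75)
print(60*D + 50*G)   # 1300.0 -> $1,300,000 net
```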
min Z = 21X + 18Y

subject to:

5X + 10Y >= 100
2X + Y >= 20
Required:
First, the tableau is generated by subtracting the slack variables and then adding
artificial variables:
      X    Y    S1   S2   A1   A2   [b]
      5   10    -1    0    1    0   100
      2    1     0   -1    0    1    20
      7   11    -1   -1    0    0   120
where [a] is a vector whose length equals the number of columns of [A], with 1s
only in the artificial columns, and [a*] contains a 1 for each artificial
column of [A]. Thus, the bottom row is:

[a*][A] - [a] = [7 11 -1 -1 1 1] - [0 0 0 0 1 1] = [7 11 -1 -1 0 0]

which is essentially a column-by-column summation of the constraint rows, less
[a]. For the bottom right, the following is used:

[a*][b] = [1 1][100 20]' = 120

again just the summation of the values of [b].
Now we pivot until all positive values are eliminated along the bottom row (except
for the far right “optimization” value). If the far right value is not zero, a solution
does not exist, and we would stop. The pivoting process for our problem is as
follows: After the first pivot:
      X      Y    S1     S2    A1      A2   [b]
      0.5    1   -0.1     0    0.1      0    10
      1.5    0    0.1    -1   -0.1      1    10
      1.5    0    0.1    -1   -1.1      0    10

After the second pivot:

      X    Y    S1       S2      A1       A2      [b]
      0    1   -0.133    0.33    0.133   -0.33    6.67
      1    0    0.067   -0.667  -0.067    0.667   6.67
      0    0    0        0      -1       -1       0
Now that the bottom row is nonpositive and the bottom right value is zero, a
solution is assured, and we can proceed to phase 2.
PHASE 2:
Now a new first tableau is constructed, using the nonartificial values in the
last phase 1 tableau and sorting the rows. The bottom row is calculated from
scratch with the now familiar maximization equations:

[c*][A] - [c] = [21 18 -1.0 -8] - [21 18 0 0] = [0 0 -1.0 -8]

and

[c*][b] = 21(6.67) + 18(6.67) = 260

where [c] = [21 18 0 0] and [c*] = [21 18]:

      X    Y    S1       S2      [b]
      1    0    0.067   -0.667   6.67
      0    1   -0.133    0.33    6.67
      0    0   -1.0     -8      260
If the bottom row contained any positive values, it would be necessary to pivot
until all the values in the bottom row (excluding the bottom right value) were
nonpositive, but that is not the case here.
Since all the bottom row values are nonpositive, the values are optimized. This
tableau is read in the same way as in the maximization case: the bottom right value
is the minimized Z-value; the X column reveals a single value in the first row; the
Y column has a single value in the second row.
Z = 260
when
X = 6.67
Y = 6.67
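Here, too, the result can be confirmed by solving the two constraints as equalities, since neither surplus variable is basic in the final tableau:

```python
import numpy as np

# Binding constraints: 5X + 10Y = 100 and 2X + Y = 20.
X, Y = np.linalg.solve(np.array([[5.0, 10], [2, 1]]), np.array([100.0, 20]))
print(X, Y)                      # about 6.67 each (the tableau's rounded values)
print(round(21*X + 18*Y, 6))     # 260.0
```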
Fugi Disk Company cannot meet the demand for its preformatted 3.5-inch floppy
disks. Fugi markets two types of disks: HD (high density) for newer computers
and DD (double density) for older ones. Fugi can realize a profit of $0.32 for each
DD disk and $0.30 for each HD disk. Unfortunately, Fugi is limited to 2,500
plastic DD disk cases and 400 disk boxes each day due to manufacturing
limitations. Each box holds either 10 HD or 10 DD disks. The supply of box and
disk labels is unlimited. The bulk formatting machine operates 420 minutes per
day and can format two disks at a time; it has auto-feed and auto-eject
mechanisms and can format an HD disk in 18 seconds and a DD disk in 15
seconds, including the loading and ejection times.
Required:
Using Excel to calculate the solution, we input the objective function and
constraints as: (click Tools, Options, Formulas to show the formulas, as seen
below)
After running Solver, the output screens show: (with formulas shown)
Evaluating these output screens, Fugi management found that the optimal solution
would be 2,500 DD disks and 716 HD disks (rounded down to whole disks). The
shadow price (Lagrange multiplier) of 1 indicates that adding 1 minute would
add $1 to profit. Therefore, if the floor supervisor can add blocks of minutes
for less than $1/minute, profit will increase. After examining the shadow
prices (Lagrange multipliers), the floor supervisor suggested adding 2.25 hours
to the formatting machine's daily schedule.
When the total format machine minutes were changed from 840 to 1,110 and the
computer program was rerun, the following output screens resulted:
Since the new profit of $1,250 per day is greater than the old profit plus the extra
cost ($200), it would be advantageous for Fugi to implement the increase in
formatting machine hours.
REVIEW QUESTIONS
16.1 What does the feasible region represent?
16.2 Give two reasons why the graphical method is only practical for
small LP problems.
16.3 What does moving an objective function line toward the origin
represent? Moving the line away from the origin?
16.4 Which values are used to construct the objective function and the
constraints?
a. LINDO parameters.
b. Decision variables.
c. Pricing policy.
d. Sensitivity constraints.
e. Shadow prices.
16.5 Which of the following do almost all practical applications of
linear programming require?
a. Graphs.
b. Matrices.
c. Objective functions.
d. Computers.
e. Market surveys.
16.6 What are the four steps for constructing an LP problem?
16.7 Which of the following procedures is employed to solve simplex
linear programming problems?
a. Shadow prices.
b. Graphs.
c. Integral calculus.
d. Expected value.
e. Matrix algebra.
16.8 In linear programming, shadow prices measure the:
CHAPTER-SPECIFIC PROBLEMS
subject to:
X + Y >= 15
X >= 2
3X + Y >= 33
X + 2Y >= 18
Required:
5X + Y <= 100
X + 2Y <= 50
Y <= 15
Required:
16.16 Determining the objective function and constraint. The Teaque Company
makes
A $3
B 8
C 6
Required:
a. Determine the objective function formula.
b. Specify the constraint for the Finishing Department.
MACHINE HOUR DATA
CUTTING FINISHING
Shirts 10 minutes 15 minutes
Dresses 6 minutes 30 minutes
Monthly capacity 1,000 hours 2,000 hours
Required:
a. Develop the objective function that will maximize the contribution margin.
b. Develop the monthly machine hour constraints.
c. Develop the monthly nonnegativity constraints.
16.18 Graphical solution to LP problem. Neil and Neil book publishers are getting
ready to print covers for their latest mass-market novel. (The current trend is to
produce different covers for the same book, ideally to generate interest in the
book and increase sales.) For this book, the publisher has decided to use
predominantly green and predominantly blue covers. Based on a marketing
department request, at least 144 covers need to be produced each day, 32 of
which should be blue.
Due to idiosyncrasies of the color printing process, blue covers take 2 minutes
and green covers take 4.5 minutes each to print. Daily deliveries of raw blue
pigment are at least 90 ounces due to vendor contracts. All other raw pigments
(e.g., yellow pigment) are not a constraint. Blue covers use one ounce each
whereas green covers use half an ounce each of the raw blue pigment. Raw
materials costs are predicted to be $0.10 for each green cover and $0.17 for
each blue cover. Efficiency goals require the printing press to be run at least 420
minutes each day.
Required:
a. Use the graphical method to find the feasible region.
b. Can 25 batches of Thin spatulas be optimally made each day?
Required:
[CMA adapted]
X + Y <= 9
X <= 3
Y <= 8
Required:
X + Y >= 25
X + 3.5Y >= 30
6X + 5Y >= 50
Required:
Use the simplex method to find the optimal solution to maximize the use of the
pill press machine.
It has a choice between two types of raisins: Best Quality and Good Quality. Best
Quality raisins cost $0.023 each and Good Quality raisins cost $0.010 each.
Advertising claims of “2 scoops” in each box create the necessity to have at least
500 raisins in each box. Customer taste tests have proven that at least 250 of the
raisins in each box have to be Best Quality to meet quality requirements.
Because the Best Quality raisins are more fragile and suffer damage during the
mixing and packaging processes, tests have shown that the Good Quality raisins
are more visually appealing in the box. Marketing therefore requires a minimum
of 100 Good Quality raisins in each box.
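As an illustration only (this is an exercise for the reader), the blend that satisfies the three requirements at minimum cost can be found by enumerating the corner points of the feasible region, the same idea as the graphical method:

```python
from itertools import combinations

# Each constraint: a*B + b*G >= r  (B = Best Quality, G = Good Quality per box)
constraints = [(1, 1, 500),   # at least 500 raisins per box ("2 scoops" claim)
               (1, 0, 250),   # at least 250 Best Quality (taste tests)
               (0, 1, 100)]   # at least 100 Good Quality (marketing)

def cost(B, G):
    return 0.023 * B + 0.010 * G

def intersection(c1, c2):
    """Corner where the two constraint boundary lines cross (Cramer's rule)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None           # parallel boundary lines
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(B, G):
    return all(a * B + b * G >= r - 1e-9 for a, b, r in constraints)

corners = []
for c1, c2 in combinations(constraints, 2):
    point = intersection(c1, c2)
    if point is not None and feasible(*point):
        corners.append(point)

best = min(corners, key=lambda point: cost(*point))
print(best, round(cost(*best), 2))   # → (250.0, 250.0) 8.25
```

With positive unit costs and only ">=" constraints, the minimum must sit at a corner; here 250 of each type meets all three requirements at a raisin cost of $8.25 per box.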
THINK-TANK PROBLEMS
Although these problems are based on chapter material, reading extra material,
reviewing previous chapters, and using creativity may be required to develop
workable solutions.
Home Cooking offers two monthly plans: Premier Cuisine and Haute Cuisine. The
Premier Cuisine plan provides frozen meals that are delivered twice each month;
this plan generates a profit of $120 for each monthly plan sold. The Haute
Cuisine plan provides freshly prepared meals delivered on a daily basis and
generates a profit of $90 for each monthly plan sold. Home Cooking's reputation
provides the company with a market that will purchase all the meals that can be
prepared.
All meals go through food preparation and cooking steps in the company's
kitchens. After these steps, the Premier Cuisine meals are flash frozen. The time
requirements per monthly meal plan and hours available per month are as
follows:
PREPARATION COOKING FREEZING
Hours required: Premier Cuisine 2 2 1
Haute Cuisine 1 3 0
Hours available 60 120 45
For planning purposes, Home Cooking uses linear programming to determine the
most profitable number of Premier Cuisine and Haute Cuisine monthly meal
plans to produce.
Required:
a. Using the notations P = Premier Cuisine and H = Haute Cuisine, state the
objective function and the constraints that Home Cooking should use to
maximize profits generated by the monthly meal plans.
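A sketch of requirement (a) and its corner-point check, using the hours table above; the four corner points listed below are the feasible intersections of the constraint boundaries:

```python
# Corner-point check of the meal-plan LP: maximize 120P + 90H
def profit(P, H):
    return 120 * P + 90 * H

def feasible(P, H):
    return (P >= 0 and H >= 0
            and 2 * P + 1 * H <= 60     # preparation hours
            and 2 * P + 3 * H <= 120    # cooking hours
            and 1 * P + 0 * H <= 45)    # freezing hours (Haute needs none)

# Corner points of the feasible region (origin, axis intercepts, and the
# preparation/cooking intersection)
corners = [(0, 0), (30, 0), (0, 40), (15, 30)]
assert all(feasible(P, H) for P, H in corners)

best = max(corners, key=lambda c: profit(*c))
print(best, profit(*best))   # → (15, 30) 4500
```

The optimum sells 15 Premier and 30 Haute plans for a monthly profit of $4,500; both the preparation and cooking constraints are binding there.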
Information for short-range planning has been developed in the same format as
in prior years. This information includes expected sales prices and expected
direct labor and material costs for each product. In addition, variable and fixed
overhead costs were assumed to be the same for each product because
approximately equal quantities of the products were produced and sold.
All three products use the same type of direct material, which costs $1.50 per
pound of material. Direct labor is paid at the rate of $5.00 per direct labor hour.
There are 2,000 direct labor hours and 20,000 pounds of direct materials
available each month.
Required:
a. Formulate and label the linear programming objective function and
constraint functions necessary to maximize Tripro's contribution
margin. Use QA, QB, and QC to represent units of the three products.
where:
Y= Monthly total overhead in dollars
XA = Monthly direct labor hours for product A
XB = Monthly direct labor hours for product B
XC = Monthly direct labor hours for product C
16.27 Determining product mix and shadow price. [CMA adapted] The Frey
Company manufactures and sells two products: a toddler bike and a toy
highchair. Linear programming is employed to determine the best production and
sales mix of bikes and chairs. This approach also allows Frey to speculate on
economic changes. For example, management is often interested in knowing
how variations in selling prices, resource costs, resource availabilities, and
marketing strategies would affect the company's performance.
The demand for bikes and chairs is relatively constant throughout the year. The
following economic data pertain to the two products:
BIKE (B) CHAIR (C)
Selling price per unit $12 $10
Variable cost per unit 8 7
Contribution margin per unit $ 4 $3
Raw materials required:
Wood 1 board foot 2 board feet
Plastic 2 pounds 1 pound
Direct labor required 2 hours 2 hours
The graphic formulation of the constraints of the linear programming model that
Frey Company has developed for nonvacation months accompanies the
problem. The algebraic formulation of the model for the nonvacation months is
as follows:
The results from the linear programming model indicate that Frey Company can
maximize its contribution margin (and thus profits) for a nonvacation month by
producing and selling 4,000 toddler bikes and 2,000 toy highchairs. This sales
mix will yield a total contribution margin of $22,000 for a nonvacation month.
Required:
a. During the months of June, July, and August, the total direct labor hours
available are reduced from 12,000 to 10,000 hours per month due to vacations.
1. What would be the best product mix and maximum total contribution margin
when only 10,000 direct labor hours are available during a month?
b. Competition in the toy market is very strong. Consequently, the prices of the
two products tend to fluctuate. Can analysis of data from the linear programming
model provide information to management that will indicate when price changes
to meet market conditions will alter the optimum product mix? Explain your
answer.
The expertise of the professional staff can be divided into three distinct areas
that match the services provided by the firm, i.e., tax preparation and planning,
insurance and investments, and auditing. Since the merger, however, the new
firm has had to turn away business in all three areas of service. One of the
problems is that although the total number of staff seems adequate, the staff
members are not completely interchangeable. Limited financial resources do not
permit hiring any new staff in the near future, and, therefore, the supply of staff is
restricted in each area.
Rich Oliva has been assigned the responsibility of allocating staff and computers
to the various engagements. The management has given Oliva the objective of
maximizing revenues in a manner consistent with maintaining a high level of
professional service in each of the areas of service. Management's time is billed
at $100 per hour, and the staff's time is billed at $70 per hour for those with
experience, and $50 per hour for the inexperienced staff. Pam Wren, a member
of the staff, recently completed a course in quantitative methods at the local
university. She suggested to Oliva that he use linear programming to assign the
appropriate staff and computers to the various engagements.
Required:
a. Identify and discuss the assumptions underlying the linear
programming model.
b. Explain the reasons why linear programming would be appropriate
for Miller, Lombardi, and York in making staff assignments.
c. Identify and discuss the data that would be needed to develop a
linear programming model for Miller, Lombardi, and York.
d. Discuss objectives, other than revenue maximization, that Rich
Oliva should consider before making staff allocations.
class passenger is on a short business trip with a combined person and luggage
weight of 210 pounds. On the other hand, the average coach-class passenger is
on an extended trip with a combined person and luggage weight of 230 pounds.
Required:
c. Based on the results from Requirements (a) and (b), briefly comment on the
following:
1. What is the importance of each constraint?
2. Should Modern Air consider a smaller cabin size (and thus less
expensive) jet?
1. Optimal Solution
2. William G. Wild, Jr., and Otis Port, “The Startling Discovery Bell
Labs Kept in the Shadows,” Business Week, September 21, 1987, p. 69.
3. Stewart Venit and Wayne Bishop, Elementary Linear Algebra
(Boston: PWS Publishers, 1985).
4. Linus Schrage, User's Manual: Linear, Integer, and Quadratic
Programming with LINDO, 2d ed. (Palo Alto, Calif.: Scientific Press,
1985).
5. Wild and Port, op. cit., p. 70.
6. “Birth of a Method,” Computer Decisions, February 12, 1985, p. 48.
7. Jack W. Farrell, “The Karmarkar Maneuver,” Traffic Management,
February 1985, p. 85.
Expected future costs and revenues that differ among alternative courses of
action.
Avoidable costs – costs that can be eliminated, in whole or in part, when one
alternative is chosen over another in a decision-making case.
Sunk (past / historical) costs – costs that have already been incurred and
therefore cannot be avoided regardless of the alternative taken by the
decision maker.
Irrelevant costs – future costs that do not differ between or among the
alternatives under consideration.
Opportunity cost refers to a benefit that a person could have received, but gave
up, to take another course of action. Stated differently, an opportunity cost
represents an alternative given up when a decision is made. This cost is,
therefore, most relevant for two mutually exclusive events. In investing, it is the
difference in return between a chosen investment and one that is necessarily
passed up.
The return given up by not investing in the other option is called the opportunity
cost. This is often expressed as the difference between the expected returns of
each option: opportunity cost = return of the best option not chosen - return of
the chosen option.
For the most part, we don't think about the things that we must give up when we
make those decisions.
However, that kind of thinking could be dangerous. The problem lies when you
never look at what else you could do with your money or buy things blindly
without considering the lost opportunities. Buying takeout for lunch occasionally
can be a wise decision, especially if it gets you out of the office when your boss
is throwing a fit. However, buying one cheeseburger every day for the next 25
years could lead to several missed opportunities. Aside from the potentially
harmful health effects of high cholesterol, investing that $4.50 a day instead of
spending it on a burger could add up to just over $52,000 in that time frame,
assuming a very achievable rate of return of 5%.
This is just one simple example, but the core message holds true for a variety of
situations. From choosing whether to invest in "safe" treasury bonds or deciding
to attend a public college over a private one in order to get a degree, there are
plenty of things to consider when making a decision in your personal finance life.
While it may sound like overkill to have to think about opportunity costs every
time you want to buy a candy bar or go on vacation, it's an important tool to use
to make the best use of your money.
Opportunity cost describes the returns that could have been earned if the money
was invested in another instrument. Thus, while 1,000 shares in company A
might eventually sell for $12 each, netting a profit of $2 a share, or $2,000,
during the same period, company B rose in value from $10 a share to $15. In this
scenario, investing $10,000 in company A netted a yield of $2,000, while the
same amount invested in company B would have netted $5,000. The difference,
$3,000, is the opportunity cost of having chosen company A over company B.
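The paragraph's figures in code:

```python
# Opportunity cost of choosing company A over company B (figures from the text)
shares = 1_000                    # 1,000 shares bought at $10 each ($10,000 invested)
profit_a = shares * (12 - 10)     # chosen: company A, shares sold at $12
profit_b = shares * (15 - 10)     # forgone: company B, shares rose to $15
opportunity_cost = profit_b - profit_a
print(profit_a, profit_b, opportunity_cost)   # → 2000 5000 3000
```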
The easiest way to remember the difference is to imagine "sinking" money into
an investment, which ties up the capital and deprives an investor of the
"opportunity" to make more money elsewhere. Investors must take both concepts
into account when deciding whether to hold or sell current investments. Money
has already been sunk into investments, but if another investment promises
greater returns, the opportunity cost of holding the underperforming asset may
rise to the point where the rational investment option is to sell and invest in a
more promising investment elsewhere.
MAKE OR BUY
                                             RELEVANT COSTS TO:
KINDS OF COSTS                               Make      Buy
Cost of ingredients and other variable costs xx
Purchase price xx
Fixed costs avoided if bought xx
Total cost per unit xx xx
Level of activity xx xx
Total relevant costs xx xx
Make Buy
Cost of ingredients and other variable costs xx
Purchase price xx
Fixed costs avoided if bought xx
Fixed costs that cannot be avoided if bought xx xx
Total cost per unit xx xx
Level of activity xx xx
Total relevant costs xx xx
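The relevant-cost templates above can be sketched numerically. All figures below (unit costs, the avoidable and unavoidable fixed costs, the 10,000-unit volume) are hypothetical, purely for illustration:

```python
# Make-or-buy with assumed (hypothetical) figures for 10,000 units
units = 10_000
variable_cost_per_unit = 8.00    # ingredients and other variable costs (make only)
purchase_price = 10.00           # outside supplier's price per unit (buy only)
avoidable_fixed = 15_000         # fixed costs eliminated if we buy
unavoidable_fixed = 40_000       # appears in both columns, hence irrelevant

relevant_make = variable_cost_per_unit * units + avoidable_fixed
relevant_buy = purchase_price * units
decision = "make" if relevant_make < relevant_buy else "buy"
print(relevant_make, relevant_buy, decision)   # → 95000.0 100000.0 make
```

Note that the unavoidable fixed cost drops out of the comparison, exactly as the second table shows it appearing under both Make and Buy.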
Indifference Point:
(+) = ACCEPT
(-) = REJECT
CONTINUE OR DROP/SHUTDOWN
or
Shut down point (SDP) = shut down savings / new CM per unit during shut down
period
10. Rank the products using the CM per constrained resource. The highest is
most profitable.
11. Maximize production of the most profitable product considering the demand
constraints for the product. (Produce units only up to its market limit & use
the remaining resources for the product with the next highest CM per scarce
unit; produce only up to market limit even if there is an excess number of
resources)
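Steps 10 and 11 can be sketched as follows; the product data (CM per unit, machine hours per unit, demand limits) and the 9,000-hour capacity are hypothetical:

```python
# Hypothetical three-product data: (CM per unit, machine hours per unit, demand limit)
products = {"X": (24, 3, 2_000),
            "Y": (15, 1, 4_000),
            "Z": (20, 2, 1_500)}
hours_available = 9_000

# Step 10: rank by CM per hour of the scarce resource (highest first)
ranked = sorted(products, key=lambda p: products[p][0] / products[p][1], reverse=True)

# Step 11: fill each product up to its market limit, best ratio first
plan, hours_left = {}, hours_available
for name in ranked:
    cm, hours, demand = products[name]
    qty = min(demand, hours_left // hours)   # never produce past the market limit
    plan[name] = qty
    hours_left -= qty * hours

print(plan)   # → {'Y': 4000, 'Z': 1500, 'X': 666}
```

Here Y earns $15 per hour, Z $10, and X $8, so Y and Z are filled to their market limits and the remaining hours go to X.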
PRICING DECISIONS
Pricing Objectives
4. To enhance the image that the company wants to project in the market.
1. Internal Factors
o All the relevant costs in the value chain (from research and
development to customer service).
2. External Factors
o Legal requirements
o Competitors’ Actions
Pricing Methods
1. Cost-Based Pricing – it starts with the determination of the cost, then a price
is set so that such price will recover all the costs in the value chain and
provide a desired return on investment.
Life-Cycle:
Price Skimming – the introductory price is set at a very high level. The
objective is to sell the customers who are not concerned about price, so
that the firm may recover its research and development costs.
e. Apply the discounted cash flow method and the IRR method in
determining cash flows and in making business decisions concerning
capital expenditures
2. A significant period of time (more than one year) elapses between the
investment outlay and the receipt of the benefits.
The last point (7) is crucial and this is the subject of later sections of the chapter.
a) By project size
A decrease in risk
c) By degree of dependence
Positive dependence
Negative dependence
Statistical independence.
Conventional cash flow: only one change in the cash flow sign (e.g. -/++++
or +/----, etc.)
Non-conventional cash flows: more than one change in the cash flow
sign (e.g. +/-/+++ or -/+/-/++++, etc.)
The analysis stipulates a decision rule for (1) accepting or (2) rejecting
investment projects
Recall that the interaction of lenders with borrowers sets an equilibrium rate of
interest. Borrowing is only worthwhile if the return on the loan exceeds the cost of
the borrowed funds. Lending is only worthwhile if the return is at least equal to
that which can be obtained from alternative opportunities in the same risk class.
1. The time value of money: the receipt of money is preferred sooner rather
than later. Money can be used to earn more money. The earlier the money
is received, the greater the potential for increasing wealth. Thus, to forego
the use of money, you must get some compensation.
2. The risk of the capital sum not being repaid. This uncertainty requires a
premium as a hedge against the risk, hence the return must be
commensurate with the risk being undertaken.
3. Inflation: money may lose its purchasing power over time. The lender must
be compensated for the declining spending/purchasing power of money. If
the lender receives no compensation, he/she will be worse off when the loan
is repaid than at the time of lending the money.
Future value (FV) is the value in dollars at some point in the future of one or
more investments.
FV consists of the initial sum invested plus the compound interest it earns:
FVn = Vo(1 + r)^n
where
Vo is the initial sum invested
r is the interest rate
n is the number of periods for which the investment is to receive interest.
Thus we can compute the future value of what Vo will accumulate to in n years
when it is compounded annually at the same rate of r by using the above
formula.
i) What is the future value of $10 invested at 10% at the end of 1 year?
ii) What is the future value of $10 invested at 10% at the end of 5 years?
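Both questions can be answered with the compounding formula:

```python
def fv(pv, r, n):
    """Future value of pv compounded annually at rate r for n periods."""
    return pv * (1 + r) ** n

print(round(fv(10, 0.10, 1), 2))   # → 11.0
print(round(fv(10, 0.10, 5), 2))   # → 16.11
```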
FVn = Vo(1 + r)^n
By denoting Vo by PV we obtain:
FVn = PV(1 + r)^n
As you will see from the following exercise, given the alternative of earning 10%
on his money, an individual (or firm) should never offer (invest) more than $10.00
to obtain $11.00 with certainty at the end of the year.
where:
Ct = the net cash receipt at the end of year t
Io = the initial investment outlay
r = the discount rate/the required minimum rate of return on investment
n = the project/investment's duration in years.
Examples:
N.B. At this point the tutor should introduce the net present value tables from any
recognised published source. Do that now.
Decision rule:
If NPV is positive (+): accept the project
If NPV is negative(-): reject the project
A firm intends to invest $1,000 in a project that generated net receipts of $800,
$900 and $600 in the first, second and third years respectively. Should the firm
go ahead with the project?
Attempt the calculation without reference to net present value tables first.
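A sketch of the calculation in code; the excerpt does not state the discount rate, so the 10% used below is assumed purely for illustration:

```python
def npv(rate, cashflows):
    """cashflows[0] is the time-0 outlay (negative); the rest are year-end receipts."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

result = npv(0.10, [-1_000, 800, 900, 600])   # 10% is an assumed discount rate
print(round(result, 2))                       # → 921.86
print("accept" if result > 0 else "reject")   # → accept
```

At any reasonable discount rate the NPV here is strongly positive, so the firm should go ahead with the project.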
c) Annuities
N.B. Introduce students to annuity tables from any recognised published source.
A set of cash flows that are equal in each and every period is called an annuity.
Example:
Year Cash Flow ($)
0 -800
1 400
2 400
3 400
PV = $400(0.9091) + $400(0.8264) + $400(0.7513)
= $994.72
NPV = $994.72 - $800 = $194.72
Alternatively,
PV of an annuity = $400 x PVFA(n = 3, i = 10%)
= $400 x 2.4868
= $994.72
NPV = $994.72 - $800 = $194.72
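The same figures in code, a sketch using the exact annuity factor rather than four-decimal table values:

```python
def pvfa(n, r):
    """Present value factor of an ordinary annuity of 1 per period for n periods."""
    return (1 - (1 + r) ** -n) / r

pv = 400 * pvfa(3, 0.10)
npv = pv - 800
print(round(pv, 2), round(npv, 2))   # → 994.74 194.74
```

The two cents' difference from the table answer ($994.72) is only rounding in the table factors (0.9091 + 0.8264 + 0.7513 = 2.4868).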
d) Perpetuities
The present value of a perpetuity is PV = C / r
where:
C is the sum to be received per period
r is the discount rate or interest rate
Example:
You are promised a perpetuity of $700 per year at a rate of interest of 15% per
annum. What price (PV) should you be willing to pay for this income?
PV = $700 / 0.15 = $4,666.67
Suppose that the $700 annual income most recently received is expected to
grow by a rate G of 5% per year (compounded) forever. How much would this
income be worth when discounted at 15%?
Solution:
Subtract the growth rate from the discount rate and treat the first period's cash
flow as a perpetuity:
PV = $700(1.05) / (0.15 - 0.05)
= $735 / 0.10
= $7,350
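Both perpetuity calculations in code:

```python
pv_level = 700 / 0.15                     # level perpetuity: PV = C / r
pv_growing = 700 * 1.05 / (0.15 - 0.05)   # growing perpetuity: PV = C1 / (r - g)
print(round(pv_level, 2), round(pv_growing, 2))   # → 4666.67 7350.0
```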
where r = IRR
IRR of an annuity:
where:
Q (n,r) is the discount factor
Io is the initial outlay
C is the uniform annual receipt (C1 = C2 =....= Cn).
Example:
What is the IRR of an equal annual income of $20 per annum which accrues for
7 years and costs $120?
Q(7, r) = Io / C = $120 / $20 = 6; from the annuity tables, a 7-year factor of 6
corresponds to a rate of about 4%.
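Instead of reading tables, the rate that makes a 7-year annuity factor equal 6 can be found by bisection (a sketch; `annuity_irr` is a made-up helper, not a standard library function):

```python
def pvfa(n, r):
    """Present value factor of an ordinary annuity of 1 per period."""
    return (1 - (1 + r) ** -n) / r

def annuity_irr(outlay, receipt, n, lo=1e-6, hi=1.0):
    """Bisect for the rate where the annuity factor equals outlay / receipt."""
    target = outlay / receipt          # required factor: 120 / 20 = 6 here
    for _ in range(100):
        mid = (lo + hi) / 2
        if pvfa(n, mid) > target:      # the factor falls as the rate rises
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rate = annuity_irr(120, 20, 7)
print(round(rate, 3))   # → 0.04
```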
Find the IRR of this project for a firm with a 20% cost of capital:
YEAR CASH FLOW
$
0 -10,000
1 8,000
2 6,000
a) Try 20%
b) Try 27%
c) Try 29%
Net present value vs internal rate of return
Independent project: Selecting one project does not preclude the choosing of the
other.
With conventional cash flows (-|+|+) no conflict in decision arises; in this case
both NPV and IRR lead to the same accept/reject decisions.
If cash flows are discounted at k1, NPV is positive and IRR > k1: accept project.
If cash flows are discounted at k2, NPV is negative and IRR < k2: reject the
project.
Mathematical proof: for a project to be acceptable, the NPV must be positive, i.e.
Example:
= $954.55
= $1,363.64
IRRA:
= 1.21-1
therefore IRRA = 21%
IRRB:
= 1.2-1
therefore IRRB = 20%
Decision:
NPV and IRR may give conflicting decisions where projects differ in their scale of
investment. Example:
Years 0 1 2 3
Project A -2,500 1,500 1,500 1,500
Project B -14,000 7,000 7,000 7,000
Assume k= 10%.
NPVA = $1,500 x PVFA at 10% for 3 years
= $1,500 x 2.487
= $3,730.50 - $2,500.00
= $1,230.50.
IRRA: required PVFA = $2,500 / $1,500 = 1.67, therefore IRRA = 36%
IRRB: required PVFA = $14,000 / $7,000 = 2.0, therefore IRRB = 21%
Decision:
Conflicting, as:
· NPV prefers B to A
· IRR prefers A to B
NPV IRR
Project A $ 3,730.50 36%
Project B $17,400.00 21%
To show why:
i) the NPV prefers B, the larger project, for any discount rate below 20%: the
incremental cash flow (B - A) costs $11,500 and returns $5,500 per year, so its
required PVFA = $11,500 / $5,500 = 2.09, which corresponds to an IRR of about
20% on the incremental investment.
d) Choosing the bigger project B means choosing the smaller project A plus an
additional outlay of $11,500 of which $5,500 will be realised each year for the
next 3 years.
g) But, if k were greater than the IRR (20%) on the incremental CF, then reject
project.
Advantage of NPV:
· It ensures that the firm reaches an optimal scale of investment.
Disadvantage of IRR:
· It expresses the return in a percentage form rather than in terms of absolute
dollar returns, e.g. the IRR will prefer 500% of $1 to 20% return on $100.
However, most companies set their goals in absolute terms and not in % terms,
e.g. target sales figure of $2.5 million.
The IRR may give conflicting decisions where the timing of cash flows varies
between the 2 projects.
Assume k = 10%
NPV IRR
Project A 17.3 20.0%
Project B 16.7 25.0%
"A minus B" 0.6 10.9%
IRR prefers B to A even though both projects have identical initial outlays. So,
the decision is to accept A, that is B + (A - B) = A. See figure 6.4.
NPV and IRR rankings are contradictory. Project A earns $120 at the end of the
first year while project B earns $174 at the end of the fourth year.
Years 0 1 2 3 4
Project A -100 120 - - -
Project B -100 - - - 174
Assume k = 10%
NPV IRR
Project A 9 20%
Project B 19 15%
Decision:
NPV prefers B to A
IRR prefers A to B.
Decision rule:
PI > 1; accept the project
PI < 1; reject the project
If NPV = 0, we have:
NPV = PV - Io = 0
PV = Io
Decision:
Disadvantage of PI:
The CIMA defines payback as 'the time it takes the cash inflows from a capital
investment project to equal the cash outflows, usually expressed in years'. When
deciding between two or more competing projects, the usual decision is to accept
the one with the shortest payback.
Payback is often used as a "first screening method". By this, we mean that when
a capital investment project is being considered, the first question to ask is: 'How
long will it take to pay back its cost?' The company might have a target payback,
and so it would reject a capital project unless its payback period were less than a
certain number of years.
Example 1:
Years 0 1 2 3 4 5
Project A -1,000,000 250,000 250,000 250,000 250,000 250,000
Payback period = $1,000,000 / $250,000 = 4 years
Example 2:
Years 0 1 2 3 4
Project B - 10,000 5,000 2,500 4,000 1,000
Payback period lies between year 2 and year 3. Sum of money recovered by the
end of the second year
= $7,500, i.e. ($5,000 + $2,500)
Amount still to be recovered = $10,000 - $7,500 = $2,500
Payback period = 2 + ($2,500 / $4,000)
= 2.625 years
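Both payback examples can be computed with a short helper (a sketch; the fractional final year assumes inflows arrive evenly within the year):

```python
def payback(outlay, inflows):
    """Years until cumulative inflows recover the outlay (fractional last year)."""
    recovered = 0.0
    for year, cf in enumerate(inflows, start=1):
        if recovered + cf >= outlay:
            return year - 1 + (outlay - recovered) / cf
        recovered += cf
    return None   # never paid back

print(payback(10_000, [5_000, 2_500, 4_000, 1_000]))    # → 2.625
print(payback(1_000_000, [250_000] * 5))                # → 4.0
```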
· It ignores the time value of money. This means that it does not take into
account the fact that $1 today is worth more than $1 in one year's time. An
investor who has $1 today can either consume it immediately or alternatively can
invest it at the prevailing interest rate, say 30%, to get a return of $1.30 in a
year's time.
· It is unable to distinguish between projects with the same payback period.
The ARR method (also called the return on capital employed (ROCE) or the
return on investment (ROI) method) of appraising a capital project is to estimate
the accounting rate of return that the project should yield. If it exceeds a target
rate of return, the project will be undertaken.
Example:
= 15%
= 30%
Disadvantages:
· It does not take account of the timing of the profits from an investment.
· It is based on accounting profits and not cash flows. Accounting profits are
subject to a number of different accounting treatments.
· It is a relative measure rather than an absolute measure and hence takes no
account of the size of the investment.
Despite the limitations of the payback method, it is the method most widely used
in practice. There are a number of reasons for this:
· It is a particularly useful approach for ranking projects where a firm faces
liquidity constraints and requires fast repayment of investments.
· The method is often used in conjunction with NPV or IRR method and acts as a
first screening device to identify projects which are worthy of further investigation.
· It provides an important summary method: how quickly will the initial investment
be recouped?
So far, the effect of inflation has not been considered on the appraisal of capital
investment proposals. Inflation is particularly important in developing countries as
the rate of inflation tends to be rather high. As inflation rate increases, so will the
minimum return required by an investor. For example, one might be happy with a
return of 10% with zero inflation, but if inflation was 20%, one would expect a
much greater return.
Example:
Keymer Farm is considering investing in a project with the following cash flows:
ACTUAL CASH FLOWS
Z$
TIME
0 (100,000)
1 90,000
2 80,000
3 70,000
Keymer Farm requires a minimum return of 40% under the present conditions.
Inflation is currently running at 30% a year, and this is expected to continue
indefinitely. Should Keymer Farm go ahead with the project?
Let us take a look at Keymer Farm's required rate of return. If it invested $10,000
for one year on 1 January, then on 31 December it would require a minimum
return of $4,000. With the initial investment of $10,000, the total value of the
investment by 31 December must increase to $14,000. During the year, the
purchasing value of the dollar would fall due to inflation. We can restate the
amount received on 31 December in terms of the purchasing power of the dollar
at 1 January as follows:
= $10,769
In terms of the value of the dollar at 1 January, Keymer Farm would make a profit
of $769 which represents a rate of return of 7.69% in "today's money" terms. This
is known as the real rate of return. The required rate of 40% is a money rate of
return (sometimes known as a nominal rate of return). The money rate measures
the return in terms of the dollar, which is falling in value. The real rate measures
the return in constant price level terms.
The two rates of return and the inflation rate are linked by the equation:
(1 + money rate) = (1 + real rate) x (1 + inflation rate)
In the example,
(1 + 0.40) = (1 + 0.0769) x (1 + 0.3)
= 1.40
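The relationship can be checked directly:

```python
# Fisher relation: (1 + money rate) = (1 + real rate) x (1 + inflation rate)
money_rate, inflation = 0.40, 0.30
real_rate = (1 + money_rate) / (1 + inflation) - 1
print(round(real_rate, 4))   # → 0.0769
```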
b) If the cash flows are expressed in terms of the value of the dollar at time 0 (i.e.
in constant price level terms), the real rate of discounting should be used.
In Keymer Farm's case, the cash flows are expressed in terms of the actual
dollars that will be received or paid at the relevant dates. Therefore, we should
discount them using the money rate of return.
TIME CASH FLOW DISCOUNT FACTOR PV
$ 40% $
0 (100,000) 1.000 (100,000)
1 90,000 0.714 64,260
2 80,000 0.510 40,800
3 70,000 0.364 25,480
30,540
The project has a positive net present value of $30,540, so Keymer Farm should
go ahead with the project.
The future cash flows can be re-expressed in terms of the value of the dollar at
time 0 as follows, given inflation at 30% a year:
TIME ACTUAL CASH FLOW CASH FLOW AT TIME 0 PRICE LEVEL
$ $
0 (100,000) (100,000)
1 90,000 69,231
2 80,000 47,337
3 70,000 31,862
The cash flows expressed in terms of the value of the dollar at time 0 can now be
discounted using the real value of 7.69%.
TIME CASH FLOW DISCOUNT FACTOR PV
$ 7.69% $
0 (100,000) 1.000 (100,000)
1 69,231 64,246
2 47,337 40,804
3 31,862 25,490
30,540
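The two discounting routes can be verified against each other with the Keymer Farm figures:

```python
flows = [-100_000, 90_000, 80_000, 70_000]   # actual (money) cash flows
money_rate, inflation = 0.40, 0.30
real_rate = (1 + money_rate) / (1 + inflation) - 1   # 7.69%

# Route 1: discount actual flows at the money rate
npv_money = sum(cf / (1 + money_rate) ** t for t, cf in enumerate(flows))

# Route 2: deflate to time-0 dollars, then discount at the real rate
deflated = [cf / (1 + inflation) ** t for t, cf in enumerate(flows)]
npv_real = sum(cf / (1 + real_rate) ** t for t, cf in enumerate(deflated))

print(round(npv_money), round(npv_real))   # → 30612 30612
```

Both routes agree, as they must; the tables above show $30,540 only because their discount factors are rounded to three decimals.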
c) Since fixed assets and stocks will increase in money value, the same
quantities of assets must be financed by increasing amounts of capital. If the
future rate of inflation can be predicted with some degree of accuracy,
management can work out how much extra finance the company will need and
take steps to obtain it, e.g. by increasing retention of earnings, or borrowing.
However, if the future rate of inflation cannot be predicted with a certain amount
of accuracy, then management should estimate what it will be and make plans to
obtain the extra finance accordingly. Provisions should also be made to have
access to 'contingency funds' should the rate of inflation exceed expectations,
e.g. a higher bank overdraft facility might be arranged should the need arise.
Many different proposals have been made for accounting for inflation. Two
systems known as "Current purchasing power" (CPP) and "Current cost
accounting" (CCA) have been suggested.
CCA is a system which takes account of specific price inflation (i.e. changes in
the prices of specific assets or groups of assets), but not of general price
inflation. It involves adjusting accounts to reflect the current values of assets
owned and used.
The new venture will incur fixed costs of $1,040,000 in the first year, including
depreciation of $400,000. These costs, excluding depreciation, are expected to
rise by 10% each year because of inflation. The unit selling price and unit
variable cost are $24 and $12 respectively in the first year and expected yearly
increases because of inflation are 8% and 14% respectively. Annual sales are
estimated to be 175,000 units.
NATURE
(v) The finance manager is often called the Controller, and the financial management
function is given the name of the controllership function, inasmuch as the basic
guidelines for the formulation and implementation of plans throughout the enterprise
come from this quarter.
The finance manager, very often, is a highly responsible member of the Top
Management Team. He performs a trinity of roles: that of a line officer over the
Finance Department; a functional expert, commanding subordinates throughout the
enterprise in matters requiring financial discipline; and a staff adviser, suggesting the
best financial plans, policies and procedures to the Top Management.
In any case, however, the scope of authority of the finance manager is defined by the
Top Management, in view of the role desired of him, depending on his financial
expertise and the system of organizational functioning.
(vi) Despite the hue and cry about decentralisation of authority, finance is still found to be centralised, even in enterprises that are otherwise highly decentralised. The reason authority over financial matters remains centralised is simple: not every Tom, Dick and Harry manager can be allowed to play with finances the way he or she likes. Finance is both a crucial and a limited resource of any enterprise.
(vii) Financial management is not simply a basic business function alongside production and marketing; it is, more significantly, the backbone of commerce and industry. It turns the sand of dreams into the gold of reality.
Financial management is a long-term decision-making process which involves a lot of planning, allocation of funds, discipline and much more. Let us understand the nature of financial management with reference to this discipline.
Reviewer 306
Management Advisory Services
1. Financial management is an important field of study that has gained recognition worldwide. Nowadays people undertake various specialisation courses in financial management, and many have chosen financial management as their profession.
Basically, therefore, financial management centres on raising funds for the business in the most economical way and investing these funds in the optimum way so that maximum returns can be obtained for the shareholders. Practically all business decisions have financial implications; hence, financial management is interlinked with all other functions of the business.
PURPOSE
Taking a commercial business as the most common organisational structure, the key
objectives of financial management would be to:
SCOPE
2. Deciding Capital Structure: The capital structure refers to the kind and proportion of different securities used for raising funds. After deciding on the quantum of funds required, it should be decided which types of securities should be issued. It may be wise to finance fixed assets through long-term debt; even here, if the gestation period is long, share capital may be more suitable. Long-term funds should also be employed to finance working capital, if not wholly then partially. Depending entirely on overdrafts and cash credit to meet working capital needs may not be suitable. A decision about the various sources of funds should be linked to the cost of raising those funds; if the cost of raising funds is very high, such sources may not be useful for long. A decision about the kind of securities to be employed and the
Break Even Analysis, (d) Cost Control, (e) Ratio Analysis, (f) Cost and Internal Audit. Return on investment is the best control device to evaluate the performance of various financial policies: the higher this percentage, the better the financial performance. The use of various control techniques will help the finance manager evaluate performance in various areas and take corrective measures whenever needed.
1. Investment Decision:
Capital budgeting also involves decisions with respect to replacement and renovation
of old assets. The finance manager must maintain an appropriate balance between
fixed and current assets in order to maximise profitability and to maintain desired
liquidity in the firm.
Capital budgeting is a very important decision as it affects the long-term success and
growth of a firm. At the same time it is a very difficult decision because it involves the
estimation of costs and benefits which are uncertain and unknown.
2. Financing Decision:
While the investment decision involves decision with respect to composition or mix of
assets, financing decision is concerned with the financing mix or financial structure of
the firm. The raising of funds requires decisions regarding the methods and sources
of finance, relative proportion and choice between alternative sources, time of
floatation of securities, etc. In order to meet its investment needs, a firm can raise
funds from various sources.
The finance manager must develop the best finance mix or optimum capital structure
for the enterprise so as to maximise the long- term market price of the company’s
shares. A proper balance between debt and equity is required so that the return to
equity shareholders is high and their risk is low.
Use of debt, or financial leverage, affects both the return and the risk to the equity shareholders. The market value per share is maximised when risk and return are properly matched. The finance department also has to decide the appropriate time to raise funds and the method of issuing securities.
3. Dividend Decision:
The finance manager should consider the investment opportunities available to the
firm, plans for expansion and growth, etc. Decisions must also be made with respect
to dividend stability, form of dividends, i.e., cash dividends or stock dividends, etc.
The working capital decision relates to the investment in current assets and current liabilities. Current assets include cash, receivables, inventory, short-term securities, etc.; current liabilities consist of creditors, bills payable, outstanding expenses, bank overdraft, etc. Current assets are those assets which are convertible into cash within a year; similarly, current liabilities are those liabilities which are likely to mature for payment within an accounting year.
The scope of financial management includes three groups: first, matters relating to finance and cash; second, the raising of funds and their administration; and third, the raising of funds as part and parcel of total management. Isra Salomon felt that, in view of funds utilisation, the third group has the wider scope.
It can be said that all activities carried out by a finance officer fall under the purview of financial management. But since the activities of these officers change from firm to firm, it becomes difficult to delimit the scope of finance. Financial management plays two main roles: one, participating in funds utilisation and controlling productivity; two, identifying the requirements of funds and selecting the sources for those funds. Liquidity, profitability and management are the functions of financial management. Let us look very briefly at each of them.
1. Liquidity:
Cash inflows and outflows should be kept in balance for the purpose of liquidity. The finance manager should try to identify the firm's requirements for funds and arrange for their increase when needed.
2. Profitability:
While ascertaining the profitability the following aspects should be taken into
consideration:
i) Cost control:
For the purpose of controlling costs, the various activities of the firm should be analyzed through a proper cost accounting system.
ii) Pricing:
Pricing policy has great importance in deciding sales level in company’s marketing.
Pricing policy should be evolved in such a way that the image of the firm should not
be affected.
Estimated profits should often be ascertained and assessed to strengthen the firm and to monitor profit levels. Each source of funds has a different cost of capital. As the profit of the firm is directly related to the cost of capital, the cost of capital of each source should be measured.
3. Management:
It is the duty of the financial manager to manage the sources and the assets used in maintaining the business. Asset management plays an important role in financial management. Besides, the financial manager should see that the required resources are available for the smooth running of the firm without any interruptions.
A business may fail even without financial failure, but financial failure also leads to business failure. Because of this peculiar condition, the responsibility of financial management has increased. It can be divided into the management of long-run funds and short-run funds.
Long-run management of funds relates to development and expansion plans, while short-run management of funds relates to total business-cycle activities.
One of the most important finance functions is to intelligently allocate capital to long-term assets. This activity is also known as capital budgeting. It is important to allocate capital to those long-term assets so as to get the maximum yield in future. The two aspects of the investment decision are as follows.
Since the future is uncertain therefore there are difficulties in calculation of expected
return. Along with uncertainty comes the risk factor which has to be taken into
consideration. This risk factor plays a very significant role in calculating the expected
return of the prospective investment. Therefore while considering investment
proposal it is important to take into consideration both expected return and the risk
involved.
The investment decision not only involves allocating capital to long-term assets but also involves decisions about using the funds obtained by selling those assets which have become less profitable and less productive. It is a wise decision to dispose of depreciated assets which are not adding value and to utilise those funds in securing other beneficial assets. The opportunity cost of capital needs to be calculated when disposing of such assets; the correct cut-off rate is this opportunity cost, the required rate of return (RRR).
At present, efficient use and allocation of capital are the most important functions of
financial management. Practically, this function involves the decision of the firm to
commit its funds in long-term assets together with other profitable activities.
However, the firm's decisions to invest funds in long-term assets deserve considerable attention, as they tend to influence the firm's wealth, size and growth, and also affect its business risk. No doubt, the primary consideration in all types of investment decisions is the rate of earning capacity, i.e., the rate of return.
But there are other considerations as well, e.g. risk factor. In short, risk factor also
plays a significant role in investment decisions.
It is also known to us that there is a cost of capital in all types of capital investment in the business. Therefore, investment in one's own business is justified only when the return will be at least equal to the relevant cost of capital. In other words, investment in one's own business is desirable provided the return from the enterprise is higher than the relevant cost of capital.
The values of invested capital can, no doubt, be affected due to the following factors:
(i) Advancement in technology leads to an improved and efficient machine which may
prove existing machineries worthless; or,
(ii) A change in pattern and design which involves the scrapping of parts or materials
or tools lying in stock and which can no longer be used in future; or,
(iii) If the consumers’ tastes and preferences are changed, it is nothing but a loss of
value to a company; or,
(iv) Investments made in ‘Receivables’ may prove bad and irrecoverable and so on.
Therefore, adequate consideration relating to investment of capital should always be
made since investment involves risk.
Fixed assets (e.g., land and buildings, plant and machinery, furniture and fixtures, etc.) are acquired not for sale and are usually owned. They help to continue the production of goods and services in order to earn revenues. Investment in fixed assets must be made in such a way that they are properly utilised, i.e., not left idle.
Similarly, current assets (e.g., inventories, debtors, bills, cash and bank balances, etc.) are required for working capital purposes. The funds invested in working capital must also be properly utilised, since idle working capital will increase costs.
Since financial resources are always limited, proper allocation and use of funds are necessary. Besides, limited financial resources lead a firm to consider alternative courses of action, viz.,
To do so, the company needs to find a balance between its short-term and long-term
goals. In the very short-term, a company needs money to pay its bills, but keeping all
of its cash means that it isn't investing in things that will help it grow in the future. On
the other end of the spectrum is a purely long-term view. A company that invests all
of its money will maximize its long-term growth prospects, but if it doesn't hold
enough cash, it can't pay its bills and will go out of business soon. Companies thus
need to find the right mix between long-term and short-term investment.
The investment decision also concerns what specific investments to make. Since
there is no guarantee of a return for most investments, the finance department must
determine an expected return. This return is not guaranteed, but is the average
return on an investment if it were to be made many times.
This decision relates to the careful selection of assets in which funds will be invested by the firm. A firm has many options for investing its funds, but it has to select the most appropriate investment that will bring maximum benefit; deciding on or selecting the most appropriate proposal is the investment decision.
The firm invests its funds in acquiring fixed assets as well as current assets. A decision regarding fixed assets is also called a capital budgeting decision.
OPERATING DECISIONS
Asset management is one of the main aspects of a company's ability to adequately meet its obligations and, in turn, to position itself to meet the objectives or growth targets that have been laid out. In other words, the Financial Manager must ensure that the existing assets are managed in the most efficient way possible. Generally, this manager must prioritize current asset management before fixed asset management. Current assets are those that will be converted into cash in the near future, such as accounts receivable or inventories. By contrast, fixed assets lack liquidity since they are needed for permanent operations; these include offices, warehouses, machinery, vehicles, etc.
A financial manager at times may be faced with difficult choices because the
company does not have sufficient cash available to pay important expenses. He may
have to choose, for example, between making a tax payment on time and making a
loan payment on time. Missing the tax payment can result in the company being
charged penalties and interest. Missing the loan payment could jeopardize the
company’s relationship with a lender that the business owner hoped to obtain
additional financing from in the future.
Short-Run Vision
FINANCING DECISIONS
All functions of a company need to be paid for one way or another. It is up to the
finance department to figure out how to pay for them through the process of
financing.
There are two ways to finance an investment: using the company's own money or raising money from external funders. Each has its advantages and disadvantages. There are two ways to raise money from external funders: by taking on debt or by selling equity. Taking on debt is the same as taking out a loan; the loan has to be paid back with interest, which is the cost of borrowing. Selling equity is essentially selling part of your company. When a company goes public, for example, it decides to sell the company to the public instead of to private investors. Going public entails selling stocks, which represent ownership of a small part of the company. The company is selling itself to the public in return for money.
Every investment can be financed through company money or from external funders.
It is the financing decision process that determines the optimal way to finance the
investment.
The financing decision is yet another important function which a financial manager must perform. It is important to make wise decisions about when, where and how a business should acquire funds. Funds can be acquired through many ways and channels. Broadly speaking, a correct ratio of equity to debt has to be maintained; this mix of equity capital and debt is known as a firm's capital structure.
A firm tends to benefit most when the market value of its shares is maximised: this is not only a sign of growth for the firm but also maximises shareholders' wealth. On the other hand, the use of debt affects the risk and return of a shareholder; it is more risky, though it may increase the return on equity funds.
A company can raise finance from various sources, such as by issuing shares or debentures or by taking loans and advances. Deciding how much to raise from which source is the concern of the financing decision. Broadly, sources of finance can be divided into two categories:
1. Owners fund.
2. Borrowed fund.
Share capital and retained earnings constitute owners’ fund and debentures, loans,
bonds, etc. constitute borrowed fund.
The main concern of finance manager is to decide how much to raise from owners’
fund and how much to raise from borrowed fund.
While taking this decision, the finance manager compares the advantages and disadvantages of the different sources of finance. Borrowed funds have to be paid back and involve some degree of risk, whereas with owners' fund there is no fixed commitment of repayment and no risk involved. But the finance manager prefers a mix of both types. Under the financing decision, the finance manager fixes the ratio of owners' fund to borrowed fund in the capital structure of the company.
While taking financing decisions the finance manager keeps in mind the following
factors:
1. Cost:
The cost of raising finance from various sources is different and finance managers
always prefer the source with minimum cost.
2. Risk:
More risk is associated with borrowed funds than with owners' fund securities. The finance manager compares the risk with the cost involved and prefers securities with a moderate risk factor.
3. Cash Flow Position:
The cash flow position of the company also helps in selecting the securities. With a smooth and steady cash flow, companies can easily afford borrowed fund securities, but when companies have a shortage of cash flow, they must go for owners' fund securities only.
4. Control Considerations:
If existing shareholders want to retain complete control of the business, they prefer borrowed fund securities for raising further funds. On the other hand, if they do not mind losing some control, they may go for owners' fund securities.
5. Floatation Cost:
It refers to the cost involved in the issue of securities, such as broker's commission, underwriters' fees, expenses on the prospectus, etc. A firm prefers securities which involve the least floatation cost.
6. Fixed Operating Cost:
If a company has high fixed operating costs, it should prefer owners' fund, because with high fixed operating costs the company may not be able to pay interest on debt securities, which can cause serious trouble for the company.
7. State of Capital Market:
The conditions in the capital market also help in deciding the type of securities to be raised. During a boom period it is easy to sell equity shares as people are ready to take risks, whereas during a depression period there is more demand for debt securities in the capital market.
B. Financial Management Concepts & Techniques For Planning, Control & Decision
Making
1. Financial Statement Analysis
a. Vertical Analysis (Common-Size Financial Statements)
In a vertical analysis of the balance sheet, assets are shown as a percentage of total assets, while current liabilities, long-term debts and equities are shown as a percentage of the total liabilities and stockholders' equity.
A basic vertical analysis needs an individual statement for a reporting period but
comparative statements may be prepared to increase the usefulness of the
analysis.
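As a sketch, the conversion of a balance sheet into common-size form can be expressed as follows; the account names and amounts are hypothetical illustration data, not figures from this reviewer:

```python
# A minimal common-size (vertical) analysis of a balance sheet.
# Each account is restated as a percentage of total assets (the 100% base).

balance_sheet = {
    "Cash": 50_000,
    "Accounts receivable": 150_000,
    "Inventory": 200_000,
    "Property, plant & equipment": 600_000,
}

total_assets = sum(balance_sheet.values())  # the 100% base figure

common_size = {account: amount / total_assets * 100
               for account, amount in balance_sheet.items()}

for account, pct in common_size.items():
    print(f"{account:<30}{pct:5.1f}%")
```

The same pattern applies to a common-size income statement, with net sales as the 100% base instead of total assets.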
Example:
Current assets:
All three of the primary financial statements can be put into a common-size
format. Financial statements in dollar amounts can easily be converted to
common-size statements using a spreadsheet, or they can be obtained from
online resources like Mergent Online. Below is an overview of each statement
and a more detailed summary of the benefits, as well as drawbacks, that such an
analysis can provide investors.
The common-size strategy from a balance sheet perspective lends insight into a
firm’s capital structure and how it compares to rivals. An investor can also look to
determine an optimal capital structure for an industry and compare it to the firm
being analyzed. Then he or she can conclude whether debt is too high, excess
cash is being retained on the balance sheet, or inventories are growing too high.
It is important to add short-term and long-term debt together and compare this
amount to total cash on hand in the current assets section. It lets the investor
know how much of a cash cushion is available or if a firm is dependent on the
markets to refinance debt when it comes due.
The common figure for an income statement is total top-line sales. This is
actually the same analysis as calculating a company's margins. For instance, a
net profit margin is simply net income divided by sales, which also happens to be
a common-size analysis. The same goes for calculating gross and operating
margins. The common-size method is appealing for research-intensive
companies, for example, because they tend to focus on research and
development (R&D) and what it represents as a percent of total sales.
Looking at the peer group and companies overall, according to a Booz & Co.
analysis, this puts IBM in the top five among tech giants and the top 20 firms in
the world (2013) in terms of total R&D spending as a percent of total sales.
In similar fashion to an income statement analysis, many items in the cash flow
statement can be stated as a percent of total sales. This can give insight on a
number of cash flow items, including capital expenditures (capex) as a percent of
revenue. Share repurchase activity can also be put into context as a percent of
the total top line. Debt issuance is another important figure in proportion to the
amount of annual sales it helps generate. Because these items are calculated as
a percent of sales, they help indicate the extent to which they are being utilized
to generate overall revenue.
Just looking at a raw financial statement makes this more difficult. But looking up
and down a financial statement, using a vertical analysis allows an investor to
catch significant changes at a company on his or her own. A common-size
analysis helps put an analysis in context (on a percentage basis). It is the same
as a ratio analysis when looking at the profit and loss statement.
In IBM's case, its results overall have been relatively steady. One item of note is
the Treasury stock in the balance sheet, which has grown to more than a
negative 100% of total assets. But rather than alarm investors, it indicates the
company has been hugely successful in generating cash to buy back shares,
which far exceeds what it has retained on its balance sheet.
A common-size analysis can also give insight into the different strategies that
companies pursue. For instance, one company may be willing to sacrifice
margins for market share, which would tend to make overall sales larger at the
expense of gross, operating or net profit margins. Ideally the company that
pursues lower margins will grow faster. While we looked at IBM on a stand-alone
basis, like the R&D analysis, IBM should also be analyzed by comparing it to key
rivals.
TREND PERCENTAGES
The statements for two or more periods are used in horizontal analysis. The
earliest period is usually used as the base period and the items on the
statements for all later periods are compared with items on the statements of the
base period. The changes are generally shown both in dollars and percentage.
Dollar and percentage changes are computed by using the following formulas:
Dollar change = amount in comparison year - amount in base year
Percentage change = (dollar change ÷ amount in base year) × 100
Example:
In the above analysis, 2007 is the base year and 2008 is the comparison year. All
items on the balance sheet and income statement for the year 2008 have been
compared with the items of balance sheet and income statement for the year
2007.
The actual changes in items are compared with the expected changes. For
example, if management expects a 30% increase in sales revenue but actual
increase is only 10%, it needs to be investigated.
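As a sketch, the horizontal-analysis computation can be written as follows; the sales figures are hypothetical:

```python
# Horizontal analysis: dollar and percentage change between two periods.

def horizontal_change(base_amount, comparison_amount):
    """Return (dollar change, percentage change) versus the base period."""
    dollar = comparison_amount - base_amount
    # A zero or negative base makes the percentage not meaningful (N/M).
    percent = dollar / base_amount * 100 if base_amount > 0 else None
    return dollar, percent

# e.g. sales of $500,000 in the base year rising to $550,000 the next year:
dollar, percent = horizontal_change(500_000, 550_000)
print(dollar, percent)  # a $50,000 increase, i.e. 10%
```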
Trend analysis calculates the percentage change for one account over a period
of time of two years or more.
Percentage change
1. Subtract the earlier year's balance from the later year's balance. The result is the dollar change (increase or decrease).
2. Divide the change by the earlier year's balance. The result is the percentage change.
Calculation notes:
1. 20X0 is the earlier year so the amount in the 20X0 column is subtracted
from the amount in the 20X1 column.
2. The percent change is the increase or decrease divided by the earlier amount (20X0 in this example) times 100. Written as a formula, the percent change is: [(20X1 amount - 20X0 amount) ÷ 20X0 amount] × 100.
3. If the earliest year is zero or negative, the percent calculated will not be
meaningful. N/M is used in the above table for not meaningful.
4. Most percents are rounded to one decimal place unless more are
meaningful.
5. A small absolute dollar item may have a large percentage change and be
considered misleading.
Trend percentages
Calculation notes:
1. A trend percentage greater than 100.0% means the balance in that year has increased over the base year. A negative trend percentage represents a negative number.
2. If the base year is zero or negative, the trend percentage calculated will not
be meaningful.
In this example, the sales have increased 59.3% over the five‐year period while
the cost of goods sold has increased only 55.9% and the operating expenses
have increased only 57.5%. The trends look different if evaluated after four
years. At the end of 20X0, the sales had increased almost 20%, but the cost of
goods sold had increased 31%, and the operating expenses had increased
almost 41%. These 20X0 trend percentages reflect an unfavorable impact on net
income because costs increased at a faster rate than sales. The trend
percentages for net income appear to be higher because the base year amount
is much smaller than the other balances.
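As a sketch, the trend-percentage computation looks like this; the yearly sales figures are hypothetical, chosen only so the final year reproduces the 59.3% cumulative increase discussed above:

```python
# Trend percentages: each year's balance as a percent of the base (first) year.

def trend_percentages(values):
    """Express each year's balance as a percent of the base-year balance."""
    base = values[0]
    return [round(v / base * 100, 1) for v in values]

sales = [400_000, 478_000, 512_000, 570_000, 637_200]  # hypothetical 20X0 .. 20X4
print(trend_percentages(sales))  # base year = 100.0, final year = 159.3
```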
INDEX ANALYSIS
This analysis considers changes in the items of the financial statements from a base year to the following years to show the direction of change; it is also called horizontal analysis. In this method, the figures of the various years are placed side by side in adjacent columns in the form of comparative financial statements.
Some believe that Wall Street focuses only on earnings while ignoring the real
cash that a firm generates. Earnings can often be adjusted by various accounting
practices, but it's tougher to fake cash flow. For this reason, some investors
believe that FCF gives a much clearer view of a company's ability to generate
cash and profits.
However, it is important to note that negative free cash flow is not bad in itself. If free cash flow is negative, it could be a sign that a company is making large investments. If these investments earn a high return, the strategy has the potential to pay off in the long run. FCF is also a better indicator than the P/E ratio.
An Example of FCF
The company has been able to generate increased revenues and profits in 2014
and 2015, reaching a record $2.4 billion in profits for the fiscal year 2015.
Additionally, its operating margin increased in 2015 to 20.1%, and is expected to
produce even higher margins throughout 2016. Further, capital expenditures
reached $2 billion in 2015 and are expected to cap out at $2.2 billion in 2017.
This means that its FCF, which is a function of revenue growth and expenditures,
is expected to double by the end of 2017.
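As a sketch, one common definition of FCF is operating cash flow less capital expenditures. The figures below are hypothetical (in $ millions); the example above does not give enough data to compute the company's actual FCF:

```python
# One common free-cash-flow definition: FCF = cash from operations - capex.

def free_cash_flow(operating_cash_flow, capital_expenditures):
    return operating_cash_flow - capital_expenditures

fcf_2015 = free_cash_flow(operating_cash_flow=4_200, capital_expenditures=2_000)
print(fcf_2015)  # 2200, i.e. $2.2 billion left after reinvestment
```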
Yield Variance
LIQUIDITY
LIQUIDITY RATIO
WORKING CAPITAL: current assets - current liabilities
CURRENT RATIO: current assets ÷ current liabilities
QUICK (ACID TEST) RATIO: quick assets ÷ current liabilities
QUICK ASSETS: cash + A/R + marketable securities - uncollectible A/R (plus inventory if goods are exclusively sold on a cash basis)
INVENTORY TO WORKING CAPITAL: inventory ÷ working capital
CASH RATIO: (cash + marketable securities) ÷ current liabilities
CURRENT CASH DEBT COVERAGE RATIO: cash provided by operations ÷ current liabilities
SOLVENCY RATIO
DEBT TO ASSETS RATIO: total debt ÷ total assets
DEBT TO EQUITY RATIO: total debt ÷ total stockholders' equity
LONG-TERM DEBT TO EQUITY RATIO: long-term debt ÷ total stockholders' equity
TIMES INTEREST EARNED RATIO: profits before interest and taxes ÷ total interest charges
FIXED-CHARGE COVERAGE RATIO: (profits before taxes and interest + fixed charges) ÷ (total interest charges + fixed charges)
CASH FLOW COVERAGE RATIO: (EBIT + fixed charges + depreciation) ÷ {interest charges + fixed charges + [(preferred stock dividends + debt repayments) ÷ (1 - tax rate)]}
ACTIVITY RATIO
ACCOUNTS RECEIVABLE TURNOVER: net credit sales ÷ average accounts receivable
# OF DAYS IN ACCOUNTS RECEIVABLE: 365 days (in a year) ÷ accounts receivable turnover
AVERAGE COLLECTION PERIOD: average accounts receivable ÷ average daily sales
INVENTORY TURNOVER: cost of sales ÷ average inventory
FINISHED GOODS TURNOVER: cost of sales ÷ average finished goods inventory
WORK IN PROCESS TURNOVER: cost of goods manufactured ÷ average work in process inventory
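A few of the liquidity formulas above, applied to hypothetical figures as a sketch:

```python
# Applying the liquidity formulas to illustrative (hypothetical) amounts.

current_assets = 300_000
current_liabilities = 150_000
cash = 40_000
marketable_securities = 10_000
cash_from_operations = 90_000

working_capital = current_assets - current_liabilities
current_ratio = current_assets / current_liabilities
cash_ratio = (cash + marketable_securities) / current_liabilities
current_cash_debt_coverage = cash_from_operations / current_liabilities

print(working_capital, current_ratio, cash_ratio, current_cash_debt_coverage)
```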
PROFITABILITY
PROFITABILITY RATIO
PROFIT MARGIN ON SALES: net income available to common ÷ net sales
RETURN ON SALES: net income ÷ net sales
RETURN ON TOTAL ASSETS: Option 1: net income available to common ÷ average total assets; Option 2: {net income + [interest charges x (1 - tax rate)]} ÷ average total assets
RETURN ON COMMON EQUITY: net income available to common ÷ average common stock equity
BASIC EARNING POWER: EBIT ÷ average total assets
EARNINGS PER SHARE: net income available to common ÷ weighted average # of common stocks outstanding
DIVIDENDS PER SHARE: dividends ÷ outstanding shares
Growth ratios, or growth rates, tell the analyst just how fast a company is
growing. The most important of these ratios include:
Sales (%): normally stated in terms of a percentage growth from the prior
year. Sales is the term used for operating revenues, so it's important to see
the sales growth rate as high as possible.
Net Income (%): growth in net income is even more important than sales
because net income tells the investor how much money is left over after all
of the operating costs are subtracted from sales.
GROWTH RATIO
PRICE-EARNINGS RATIO: market price per share ÷ earnings per share
MARKET-BOOK RATIO: market price per share ÷ book value per share
DIVIDEND YIELD RATIO: dividends per share ÷ market price per share
BOOK VALUE PER SHARE: shareholders' equity ÷ average shares outstanding
DIVIDEND PAYOUT RATIO: dividends per share ÷ earnings per share
DU PONT MODEL
Definition
Calculation (formula)
If ROE is unsatisfactory, the DuPont analysis helps locate the part of the
business that is underperforming.
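The standard three-factor DuPont decomposition (the well-known identity behind this model, stated here since the formula itself is not reproduced above) can be sketched as:

```python
# The three-factor DuPont identity:
#   ROE = net profit margin x asset turnover x equity multiplier

def dupont_roe(net_income, sales, total_assets, equity):
    margin = net_income / sales          # profitability
    turnover = sales / total_assets      # asset-use efficiency
    leverage = total_assets / equity     # financial leverage
    return margin * turnover * leverage  # algebraically = net_income / equity

# Hypothetical figures: 12% margin x 1.25 turnover x 2.0 leverage = 30% ROE
roe = dupont_roe(net_income=120, sales=1_000, total_assets=800, equity=400)
print(round(roe, 4))  # -> 0.3
```

Because ROE factors into these three pieces, an unsatisfactory ROE can be traced to weak margins, sluggish asset turnover, or too little (or too much) leverage.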
[Chart: Du Pont breakdown of Return on Investment (ROI) into income components (cost of goods sold, selling expenses, others) and asset components (cash, accounts receivable, marketable securities, land, building).]
AFN is "additional funds needed," and refers to the additional resources that will
be needed for a company to expand its operations.
TERMS
liabilities: an amount of money in a company that is owed to someone and has to be paid in the future, such as tax, debt, interest, and mortgage payments.
asset: something or someone of any value; any portion of one's property or effects so considered.
sales: revenues.
Example
TransWorld Inc. runs a shipping business and has forecasted a 10% increase in
sales over 20Y3. Its assets and liabilities at the end of 20Y2 amounted to $25
billion and $17 billion respectively. Sales for the period were $30 billion and it
earned a 4% profit margin. It reinvests 40% of its net income and pays out the
rest to its shareholders. Calculate additional funds needed.
Solution
Increase in assets = 20Y2 assets × sales growth rate = $25 billion × 10% = $2.5
billion
Increase in liabilities = 20Y2 liabilities × sales growth rate = $17 billion ×
10% = $1.7 billion
Addition to retained earnings = forecasted sales × profit margin × retention
ratio = ($30 billion × 110%) × 4% × 40% = $0.528 billion
Additional funds needed = $2.5 billion − $1.7 billion − $0.528 billion = $0.272
billion
TransWorld must raise $272 million to finance the increased level of sales.
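The TransWorld computation can be reproduced with a short function that follows the AFN definition above:

```python
def additional_funds_needed(assets, liabilities, sales, growth, margin, retention):
    """AFN = increase in assets - increase in liabilities - addition to retained earnings."""
    delta_assets = assets * growth
    delta_liabilities = liabilities * growth
    # Retained earnings come from next year's (grown) sales.
    retained = sales * (1 + growth) * margin * retention
    return delta_assets - delta_liabilities - retained

afn = additional_funds_needed(assets=25e9, liabilities=17e9, sales=30e9,
                              growth=0.10, margin=0.04, retention=0.40)
# 2.5e9 - 1.7e9 - 0.528e9 = 0.272e9, i.e. $272 million
```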
Working capital management commonly involves monitoring cash flow, assets and
liabilities through ratio analysis of key elements of operating expenses, including the
working capital ratio, collection ratio and the inventory turnover ratio. Efficient working
capital management helps with a company's smooth financial operation, and can also
help to improve the company's earnings and profitability. Management of working
capital includes inventory management and management of accounts receivables
and accounts payables.
The working capital ratio, calculated as current assets divided by current liabilities, is
considered a key indicator of a company's fundamental financial health since it
indicates the company's ability to successfully meet all of its short-term financial
obligations. Although numbers vary by industry, a working capital ratio below 1.0 is
generally indicative of a company having trouble meeting short-term obligations,
usually due to insufficient cash flow. Working capital ratios of 1.2 to 2.0 are
considered desirable, but a ratio higher than 2.0 may indicate a company is not
making the most effective use of its assets to increase revenues.
If a company's current assets do not exceed its current liabilities, then it may run into
trouble paying back creditors in the short term. The worst-case scenario
is bankruptcy. A declining working capital ratio over a longer time period could also
be a red flag that warrants further analysis. For example, it could be that the
company's sales volumes are decreasing and, as a result, its accounts
receivable balance continues to get smaller and smaller. Working capital
also gives investors an idea of the company's underlying operational efficiency.
Money that is tied up in inventory or money that customers still owe to the company
cannot be used to pay off any of the company's obligations. So, if a company is not
operating in the most efficient manner (slow collection), it will show up as an increase
in the working capital. This can be seen by comparing the working capital from one
period to another; slow collection may signal an underlying problem in the company's
operations.
Things to Remember
If the ratio is less than one, the company has negative working capital.
A high working capital ratio isn't always a good thing; it could indicate that
the company has too much inventory or is not investing its excess cash.
Gross working capital is the sum of all of a company's current assets (assets that are
convertible to cash within a year or less). Gross working capital includes assets such
as cash, checking and savings account balances, accounts receivable, short-term
investments, inventory and marketable securities. From gross working capital,
subtract the sum of all of a company's current liabilities to get net working capital.
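The gross and net measures, and the working capital ratio discussed above, can be computed together; the balances below are hypothetical.

```python
# Illustrative (hypothetical) current account balances.
current_assets = {"cash": 40_000, "receivables": 60_000, "inventory": 50_000}
current_liabilities = {"payables": 45_000, "accrued_taxes": 15_000,
                       "short_term_debt": 40_000}

gross_working_capital = sum(current_assets.values())                  # 150,000
net_working_capital = gross_working_capital - sum(current_liabilities.values())
working_capital_ratio = gross_working_capital / sum(current_liabilities.values())
# ratio = 150,000 / 100,000 = 1.5, inside the desirable 1.2-2.0 band
```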
A company needs just the right amount of working capital to function optimally. With
too much working capital, some current assets would be better put to other uses.
With too little working capital, a company may not be able to meet its day-to-day cash
requirements. The correct balance is obtained through working capital management.
In an ordinary sense, working capital denotes the amount of funds needed for
meeting day-to-day operations of a concern.
Working capital is related to short-term assets and short-term sources of
financing. Hence it deals with both assets and liabilities; in the sense of
managing working capital, it is the excess of current assets over current
liabilities. Below we discuss the various aspects of working capital.
Concept of Working Capital:
The funds invested in current assets are termed as working capital. It is the fund
that is needed to run the day-to-day operations. It circulates in the business like
the blood circulates in a living body. Generally, working capital refers to the
current assets of a company that are changed from one form to another in the
ordinary course of business, i.e. from cash to inventory, inventory to work in
progress (WIP), WIP to finished goods, finished goods to receivables and from
receivables to cash.
There are two concepts in respect of working capital:
(i) Gross working capital and
(ii) Net working capital.
A small business’s working capital represents its current assets minus current
liabilities. Current assets are cash or items that can convert to cash in less than a
year, such as accounts receivable, negotiable securities and inventory. Current
liabilities include the short-term payables: accounts, payroll, taxes and interest,
as well as any debt coming due within a year. Aggressive and conservative
levels of working capital sit at the opposite ends of a spectrum -- the optimal
amount of working capital lies somewhere in between.
An aggressive working capital policy is one in which you try to squeeze by with a
minimal investment in current assets coupled with an extensive use of short-term
credit. Your goal is to put as much money to work as possible to decrease the
time needed to produce products, turn over inventory or deliver services.
Speeding up your business cycle grows your sales and revenues. You keep little
money on hand, cut slow-moving inventory and unnecessary supplies to the
bone and stretch out your bill payments for as long as possible. The one
payment you cannot delay is interest -- your creditors can sue you, force you into
bankruptcy and liquidate your assets. You would also want to avoid missing tax
payments.
Conservatively managed working capital will help lower your risks of short-term
cash shortages but might hurt your long-term profitability, because excess cash
doesn’t earn much of a return.
Risk
Your risk of default and bankruptcy increases as you adopt more aggressive
working capital policies. For example, a sudden emergency can leave you
unable to make a bond interest payment. Tight inventories can lead to shortages
and lost sales. Vendors might balk at extending you further credit if you stretch
out payments beyond 90 days. Investors might be less willing to buy your bonds
and may force you to offer higher interest rates on newly issued long-term debt.
The major risk of a conservative working capital policy is the opportunity costs of
“lazy” assets that you could put to work. A conservative policy lowers your sales
efficiency -- sales revenue divided by working capital -- that can dissuade
potential investors.
Return
REASONS FOR HOLDING CASH “Why would a firm hold cash when, being idle,
it is a non-earning asset?”
FLOAT
[Diagram: collection float (negative) and disbursement float (positive), each
comprising processing float and clearing float.]
Accelerating Collection
Cost (Benefit) of acceleration = [Average daily credit sales x (new credit period –
old credit period)]
Discount Policy
Cost (benefit) of change in discount policy = [Average daily credit sales x (new
discount period – old discount period)]
Cost VS Benefit
Benefit = Incremental CM
Cost of policy relaxation = (average daily credit sales new – average daily credit
sales old) x (collection period new – collection period old)
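The cost-versus-benefit test above can be sketched as follows, applying the relaxation-cost formula exactly as stated; all figures are hypothetical.

```python
def cost_of_relaxation(daily_sales_new, daily_sales_old,
                       collection_period_new, collection_period_old):
    # As defined above: change in average daily credit sales
    # times change in collection period.
    return ((daily_sales_new - daily_sales_old)
            * (collection_period_new - collection_period_old))

def accept_relaxation(incremental_cm, cost):
    # Relax the credit policy only when the incremental contribution
    # margin (the benefit) exceeds the cost of the relaxation.
    return incremental_cm > cost

cost = cost_of_relaxation(12_000, 10_000, 45, 30)    # 2,000 x 15 = 30,000
decision = accept_relaxation(incremental_cm=50_000, cost=cost)  # True: relax
```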
OBJECTIVES: To maintain inventory level that balances sales, demand, the cost
of carrying additional inventory, and the efficiency of inventory control.
CARRYING COSTS
Carrying (or holding) costs include warehousing costs such as utilities and
salaries, financial costs such as opportunity cost, and inventory costs related
to perishability, shrinkage (theft) and insurance. Holding cost also includes
the opportunity cost of reduced responsiveness to customers' changing
requirements, slowed introduction of improved items, and the inventory's value
and direct expenses, since that money could be used for other purposes.
When there are no transaction costs for shipment, carrying costs are minimized
when no excess inventory is held at all, as in a just-in-time production system.
Excess inventory can be held for one of three reasons. Cycle stock is held based
on the re-order point, and defines the inventory that must be held for production,
sale or consumption during the time between re-order and delivery. Safety
stock is held to account for variability, either upstream in supplier lead time, or
downstream in customer demand. Physical stock is held by consumer retailers to
provide consumers with a perception of plenty.
Definitions
The cost consists of four different factors:
1. The expense of putting the inventory in storage
2. Salaries and wages of workers
3. Long-term maintenance
4. All utilities used in maintaining the storage
Moreover, the carrying cost will mostly appear as a percentage. It provides an
idea of how long the inventory can be held before the company makes a loss,
which also tells the manager how much to order.
Why do companies hold inventory
Inventory is property of a company that is ready for sale. There are five basic
reasons why a company needs inventory.
1. Safety inventory
This acts as a buffer to ensure that the company has excess product to sell if
consumer demand exceeds expectations.
2. Cater to cyclical and seasonal demand
This kind of inventory is used for predictable events that cause a change in
demand. For example, candy companies can start producing extra long-lasting
sweets, building up seasonal inventory gradually to match sharply increasing
demand before Halloween.
3. Cycle inventory
First, we need the idea of the economic order quantity (EOQ). EOQ is an
attempt to balance inventory holding or carrying costs with the costs incurred
from ordering or setting up machinery. The total cost is minimized when the
ordering cost and the carrying cost equal each other. When customers order
significant quantities of product, cycle inventory saves cost and acts as a
buffer allowing the company to purchase more supplies.
4. In-transit inventory
This kind of inventory saves the company transportation cost and makes the
transition process less time-consuming. For example, if the company requests a
particular raw material from an overseas market, purchasing in bulk will save
considerable overseas shipment fees.
5. Dead inventory
Dead inventory, or dead stock, consists of products that are outdated or
requested by only a few consumers, so managers pull them from store shelves.
To reduce the cost of holding these products, the company can hold discount
events or apply price reductions to attract consumers' attention.
Ways to reduce carrying cost
Most firms see profit maximization as their primary objective. Here are some
methods of reducing carrying cost in order to reach higher profit.
1. Base stock levels on economic conditions: Stock levels should change with
consumer demand, the situation of the industry and the exchange rate of the
currency. When the economy is in recession or the currency depreciates,
residents' purchasing power decreases.
2. Improve the layout of the warehouse: Instead of renting a new place, the
manager might consider rearranging the layout of the warehouse the firm
already owns. An inefficient layout may increase the risk of shipping the wrong
products to consumers, which both increases transportation cost and wastes
time. To improve the layout, the company could either enlarge the reception
area or apply segmentation. This reduces cost as well as increasing labour
productivity.
3. Build long-term agreements with suppliers: Signing long-term contracts with
suppliers may increase the suppliers' financial security, and the company may
receive a lower price; this is a win-win situation. The supplier might also be
willing to shorten the delivery interval, for example from once a month to once
a week. Hence, the company can switch to a smaller warehouse, as it no longer
needs to stock as much product at a time. Furthermore, this also reduces the
risk of loss and depreciation of the products.
4. Create an effective database: The database should include the retailer, date,
quantity, quality, degree of advertising and the time taken until sold out. This
ensures that future employees can learn from past experience when making
decisions. For example, if the manager wants to hold a big discount event to
clear products that have been in stock for a long time, he can consult past data
to find whether there was a similar event and what its result was. The manager
can then forecast the budget and make improvements based on past records.
ORDERING COSTS
Ordering costs are the expenses incurred to create and process an order to a
supplier. These costs are included in the determination of the economic order
quantity for an inventory item.
There will be an ordering cost of some size, no matter how small an order may
be. The total amount of ordering costs that a business incurs will increase with
the number of orders placed. This aggregate order cost can be mitigated by
placing large blanket orders that cover long periods of time, and then issuing
order releases against the blanket orders.
An entity may be willing to tolerate a high aggregate ordering cost if the result is
a reduction in its total inventory carrying cost. This relationship occurs when a
business orders raw materials and merchandise only as needed, so that more
orders are placed but there is little inventory kept on hand. A firm must monitor
its ordering costs and inventory carrying costs in order to properly balance order
sizes and thereby minimize overall costs.
STOCK-OUT COSTS
Stock-out Costs is the cost associated with the lost opportunity caused by the
exhaustion of the inventory. The exhaustion of inventory could be a result of
various factors. The most notable amongst them is defective shelf replenishment
practices. Stock-outs can prove very costly for companies. A subtle response is
postponement of purchase; more disastrously, consumers may get frustrated and
switch stores or even purchase substitute items (brands). Various retailers
follow the concept of “safety stock” in order to
avoid the situation of stock-outs. Stock-outs could occur at any point of the
supply chain.
For example: Newsvendor problem that combines the concept of statistics and
operations is one of the advanced tools used by companies to avoid stock-out
costs.
EOQ Model
Safety Stock
Reorder Point
The reorder point is the level of inventory a firm holds in stock such that,
when stock falls to this amount, the item must be reordered. It is normally
calculated as the forecast usage during the replenishment lead time plus safety
stock. In the EOQ (Economic Order Quantity) model, it was assumed that there is
no time lag between ordering and procuring of materials. Therefore the reorder
point for replenishing stock occurs when the inventory level drops to zero and,
because of instant delivery by suppliers, the stock level bounces back.
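The standard EOQ formula (square root of 2 × annual demand × order cost ÷ per-unit holding cost) and the reorder-point rule just described can be sketched together; the demand and cost figures are hypothetical.

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Economic order quantity: sqrt(2DS / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def reorder_point(daily_usage, lead_time_days, safety_stock=0):
    """Forecast usage during the replenishment lead time plus safety stock."""
    return daily_usage * lead_time_days + safety_stock

q = eoq(annual_demand=10_000, order_cost=50, holding_cost_per_unit=4)   # 500 units
r = reorder_point(daily_usage=40, lead_time_days=5, safety_stock=100)   # 300 units
```

At the EOQ of 500 units, annual ordering cost (20 orders × $50) equals annual carrying cost (250 average units × $4), matching the balance condition described above.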
SOURCES OF SHORT-TERM FUNDS
Secured Loans
Credits (Current Asset Financing)
Factoring
TRADE CREDIT
BANK LOANS
COMMERCIAL PAPERS
RECEIVABLE FACTORING
FACTORS OF CONSIDERATION IN SELECTING SOURCES OF SHORT-TERM FUNDS
3. Capital Budgeting
a. Capital Investment Decision Factors (Net Investment For Decision
Making, Cost Of Capital, Cash And Accrual Net Returns)
Many factors affect capital investment decisions: future cash flows, the cost
of capital and opportunity costs, to name a few. All factors should be examined
before coming to a final decision on capital investment projects.
The future cash flow resulting from the capital investment is one of the major
factors affecting capital investment decisions. For each proposed capital
investment, the enterprise must put together a cash flow forecast that is as
accurate as possible. Often, the capital investment will involve a major cash
outflow at the start, followed at a later date by a series of cash inflows as the
benefits of the capital investment are realized. These benefits may result from
increased earnings into the future, or they may be in the form of cost savings.
The cash flow forecast must be as accurate as possible to enable the results of
the capital investment to be assessed correctly. The same criteria must be
applied to cash flow forecasting for each capital investment under consideration,
to ensure that the projects may be meaningfully compared. This enables
management to proceed with a realistic assessment of the capital investments
available and reach a correct decision on the investment to be made.
Methods Used to Assess Capital Investments
The payback period is a very rough way of assessing a project: it does not take
into account the time value of money and does not therefore apply any discount
rate to the future cash flows.
Another method for assessing projects is to look at the internal rate of return of
the project, which is the discount rate that when applied to the cash flows from
the capital investment arrives at a net present value of zero. This discount rate is
then compared to a benchmark rate such as the company's cost of capital to
arrive at a decision as to whether the capital investment would be worthwhile.
This method can give management some confidence that the capital investment
will benefit the business but is unreliable for comparing capital investments when
the period during which cash flows will continue varies, and where cash flows are
irregular.
The needs of the shareholders may require the pursuit of capital investments that
will pursue growth and increase the value of the enterprise, rather than
necessarily increasing cash flow in the short term. The shareholders are the
owners of the company and management is responsible to them. If management
makes capital investments that are against the wishes of the shareholders they
will be held accountable. The shareholders are one of the most important factors
affecting capital investment decisions. The final decision must therefore always
take shareholder demands into account.
Capital expenditures include the calculated worth of all assets (i.e. property,
software, equipment, etc.) and the amount of additional expenses being invested
into those assets (i.e. maintenance, repair, upkeep, installation, etc.).
At the end of the asset's useful life, the amount the asset is sold for represents
its salvage value. Non-cash depreciation of an asset is represented as its
salvage value minus any taxes the company paid on the asset throughout its
useful life.
Let's assume that Company XYZ buys a new widget machine for $500,000 and
pays someone $10,000 to install the machine in the factory. The company also
expects to receive $75,000 from the sale of its old widget machine. Company
XYZ is taxed at a rate of 30%.
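The Company XYZ example stops short of the computation. A minimal sketch follows, assuming (since the text does not say) that the old machine is sold at its book value, so the $75,000 proceeds carry no tax effect:

```python
def net_investment(cost, installation, old_asset_proceeds, tax_on_sale=0.0):
    # Net initial outlay: purchase cost plus installation, less the
    # after-tax proceeds from disposing of the old asset.
    return cost + installation - (old_asset_proceeds - tax_on_sale)

# Assumption: no taxable gain on the old widget machine's sale.
outlay = net_investment(cost=500_000, installation=10_000,
                        old_asset_proceeds=75_000)
# 500,000 + 10,000 - 75,000 = 435,000
```

If the sale did produce a taxable gain, the 30% tax on that gain would be passed in as `tax_on_sale`, reducing the proceeds and raising the net investment.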
COST OF CAPITAL
The return an investor receives on a company security is the cost of that security
to the company that issued it. A company's overall cost of capital is a mixture of
returns needed to compensate all creditors and stockholders. This is often called
the weighted average cost of capital and refers to the weighted average costs of
the company's debt and equity.
WHY IT MATTERS:
The cost of various capital sources varies from company to company, and
depends on factors such as its operating history, profitability, credit worthiness,
etc. In general, newer enterprises with limited operating histories will have higher
costs of capital than established companies with a solid track record, since
lenders and investors will demand a higher risk premium for the former.
Every company has to chart out its game plan for financing the business at an
early stage. The cost of capital thus becomes a critical factor in deciding which
financing track to follow – debt, equity or a combination of the two. Early-stage
companies seldom have sizable assets to pledge as collateral for debt financing,
so equity financing becomes the default mode of funding for most of them.
The cost of debt is merely the interest rate paid by the company on such debt.
However, since interest expense is tax-deductible, the after-tax cost of debt is
calculated as: Yield to maturity of debt x (1 - T) where T is the
company’s marginal tax rate.
Companies strive to attain the optimal financing mix, based on the cost of capital
for various funding sources. Debt financing has the advantage of being more tax-
efficient than equity financing, since interest expenses are tax-deductible
and dividends on common shares have to be paid with after-tax dollars.
However, too much debt can result in dangerously high leverage, resulting in
higher interest rates sought by lenders to offset the higher default risk.
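The after-tax cost of debt above, combined with the cost of equity, gives the weighted average cost of capital; a sketch with hypothetical figures:

```python
def after_tax_cost_of_debt(ytm, tax_rate):
    """Yield to maturity of debt x (1 - T), since interest is tax-deductible."""
    return ytm * (1 - tax_rate)

def wacc(equity_value, debt_value, cost_of_equity, ytm_debt, tax_rate):
    """Weighted average of the costs of equity and after-tax debt."""
    total = equity_value + debt_value
    return ((equity_value / total) * cost_of_equity
            + (debt_value / total) * after_tax_cost_of_debt(ytm_debt, tax_rate))

# Hypothetical: 60% equity at 12%, 40% debt yielding 8%, 30% tax rate.
rate = wacc(equity_value=60, debt_value=40,
            cost_of_equity=0.12, ytm_debt=0.08, tax_rate=0.30)
# 0.6 x 0.12 + 0.4 x 0.056 = 0.0944, i.e. 9.44%
```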
NET RETURNS
Another distinction is between net and gross return. The 'pure' net return to the
investor is the return net of all fees, expenses, and taxes, whereas the 'pure'
gross return is the return before all fees, expenses, and taxes. Various variations
between these two extremes exist. Which return one looks at depends on what
one is trying to measure. For example, if one wishes to measure the ability of an
investment manager to add value, then the return net of transaction expenses,
but gross of all other fees, expenses, and taxes is an appropriate measure to
look at since fees, expenses, and taxes other than transaction expenses are
often outside the control of the investment manager.
Net income from an investment after deducting all expenses from the gross
income generated by the investment. Depending on the analysis required, the
deductions may or may not include income tax and/or capital gains tax.
Investors use net returns to calculate the return on their investments after all
expenses and profits have been included. For example, stocks may have brokers
fees associated with their purchase and sale as well as extra income such as
dividends. The net return is measured as a percentage of the cost paid to obtain
the asset. To calculate the net return, you need to know how much the asset
cost, how much it was sold for and any other costs or income associated with the
asset.
Step 1
Calculate the total cost of your investment by adding what you paid for it to any
fees you paid to acquire it. For example, if you paid $1,500 for a stock and paid a
$10 broker's fee, your total cost would be $1,510.
Step 2
Calculate the total return on your investment by adding the amount the asset was
sold for and any payments, such as dividends, made to you while you owned it
and subtracting the costs associated with the sale. For example, if you sold the
stock for $1,700, received $50 in dividends while you owned it and paid a $10
broker's fee to sell it, you would add $1,700 to $50 and subtract $10 to get
$1,740.
Step 3
Divide the total return by the total cost. In this example, you would divide $1,740
by $1,510 to get about 1.152.
Step 4
Subtract 1 from the result from step 3 to find the net return expressed as a
decimal. In this example, you would subtract 1 from 1.152 to get 0.152.
Step 5
Multiply the result from step 4 by 100 to convert from a decimal to a percentage.
Finishing the example, you would multiply 0.152 by 100 to find your net return to
be about 15.2 percent.
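The five steps above collapse into one short function; the figures reuse the worked example ($1,500 stock, $10 fees each way, $50 of dividends).

```python
def net_return(purchase_price, buy_fees, sale_price, sell_fees, income=0.0):
    """Steps 1-5 above: (total return / total cost) - 1, as a decimal."""
    total_cost = purchase_price + buy_fees          # step 1
    total_return = sale_price + income - sell_fees  # step 2
    return total_return / total_cost - 1            # steps 3-4

r = net_return(purchase_price=1_500, buy_fees=10,
               sale_price=1_700, sell_fees=10, income=50)
pct = round(r * 100, 1)  # step 5: about 15.2 percent
```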
Cash
The cash-on-cash return is popular with investors looking for properties where
cash flow is paramount; however, some use it to determine if a property is
undervalued, indicating instant equity in a property.
Example
However, because the investor used debt to service a portion of the asset, they
are required to make debt service payments and principal repayments in this
scenario (i.e. mortgage payments). Because of this, the Cash-on-Cash return
would be a lower figure which would be determined by dividing the NOI after all
mortgage payment expenses were deducted from it, by the total cash invested.
For example: if the investor made total mortgage payments (principal + interest)
of $2,000 a month in this scenario, then annual debt service is $2,000 × 12 =
$24,000, and the cash flow is $60,000 − $24,000 = $36,000, which is then divided
by the total cash invested.
Limitations
It does not account for other risks associated with the underlying property.
Accrual
Accrual basis net income is compared against the investment cost to get the net
return.
PAYBACK PERIOD
The payback period is the length of time required to recover the cost of an
investment. The payback period of a given investment or project is an important
determinant of whether to undertake the position or project, as longer payback
periods are typically not desirable for investment positions. The payback period
ignores the time value of money, unlike other methods of capital budgeting, such
as net present value, internal rate of return or discounted cash flow.
Much of corporate finance is about capital budgeting. One of the most important
concepts that every corporate financial analyst must learn is how to value
different investments or operational projects. The analyst must figure out a
reliable way to determine the most profitable project or investment to undertake.
One way corporate financial analysts do this is with the payback period.
Most capital budgeting formulas take the time value of money into consideration.
The time value of money (TVM) is the idea that cash in hand today is worth more
than it is in the future because it can be invested and make money from that
investment. Therefore, if you pay an investor tomorrow, it must include an
opportunity cost. The time value of money is a concept that assigns a value to
this opportunity cost.
The payback period does not concern itself with the time value of money. In fact,
the time value of money is completely disregarded in the payback method, which
is calculated by counting the number of years it takes to recover the cash
invested. If it takes five years for the investment to earn back the costs, the
payback period is five years. Some analysts like the payback method for its
simplicity. Others like to use it as an additional point of reference in a capital
budgeting decision framework.
Assume company A invests $1 million in a project that will save the company
$250,000 every year. The payback period is calculated by dividing $1 million by
$250,000, which is four. In other words, it will take four years to pay back the
investment. Another project that costs $200,000 won't save the company money,
but it will make the company an incremental $100,000 every year for the next 20
years, which is $2 million. Clearly, the second project can make the company
twice as much money, but how long will it take to pay the investment back? The
answer is $200,000 divided by $100,000, or 2 years. Not only does the second
project take less time to pay back, but it makes the company more money.
Based solely on the payback method, the second project is better.
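Both examples above can be checked with a one-line calculation; the uneven-cash-flow variant below is our own extension of the same idea, counting years until cumulative inflows cover the cost.

```python
def payback_period(investment, annual_cash_flow):
    """Even annual cash flows: years to recover the investment."""
    return investment / annual_cash_flow

def payback_period_uneven(investment, cash_flows):
    """Uneven cash flows: first whole year whose cumulative inflows cover the cost."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        cumulative += cf
        if cumulative >= investment:
            return year
    return None  # investment never recovered within the horizon

payback_period(1_000_000, 250_000)   # 4.0 years, as in the first project
payback_period(200_000, 100_000)     # 2.0 years, as in the second project
```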
The key advantage of ARR is that it is easy to compute and understand. The main
disadvantage of ARR is that it disregards the time factor in terms of time value of
money or risks for long term investments. The ARR is built on evaluation of
profits and it can be easily manipulated with changes in depreciation methods.
The ARR can give misleading information when evaluating investments of
different size.
Basic Formulas
ARR = (average annual accounting profit ÷ investment) × 100%
Pitfalls
1. This technique is based on profits rather than cash flow. It ignores cash flow
from investment. Therefore, it can be affected by non-cash items such
as bad debts and depreciation when calculating profits. The change of
methods for depreciation can be manipulated and lead to higher profits.
2. This technique does not adjust for the risk to long term forecasts.
Accounting rate of return is also called the simple rate of return and is a metric
useful in the quick calculation of a company’s profitability. ARR is used mainly as
a general comparison between multiple projects as it is a very basic look at how
a project is doing.
The total profit from a project over the past five years is $50,000. During this
span, a total investment of $250,000 has been made. The average annual profit
is $10,000 ($50,000/5 years) and the average annual investment is $50,000
($250,000/5 years). Therefore, the accounting rate of return is 20%
($10,000/$50,000).
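The 20% figure above follows directly from the formula, using average annual profit over average annual investment:

```python
def accounting_rate_of_return(total_profit, total_investment, years):
    """ARR = average annual profit / average annual investment."""
    avg_profit = total_profit / years          # 50,000 / 5 = 10,000
    avg_investment = total_investment / years  # 250,000 / 5 = 50,000
    return avg_profit / avg_investment

arr = accounting_rate_of_return(total_profit=50_000,
                                total_investment=250_000, years=5)
# 10,000 / 50,000 = 0.20, i.e. 20%
```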
In addition to the lack of consideration given to the time value of money as well
as cash flow timing, accounting rate of return does not provide any insight as to
constraints, bottleneck ramifications or impacts on company throughput.
Accounting rate of return isolates individual projects and may not capture the
systematic impact a project may have on the entire entity – both positively and
negatively. Accounting rate of return is not ideal to use for comparative purposes
because financial measurements may not be consistent between projects and
other non-financial factors need consideration. Finally, accounting rate of return
does not consider the increased risk of long-term projects and the increased
variability associated with long periods of time.
FFM study guide reference E3b) requires candidates to not only be able to
calculate the accounting rate of return, but also to be able to discuss the
usefulness of the accounting rate of return as a method of investment appraisal.
Recent FFM exam sittings have shown that candidates are struggling with the
concept of the accounting rate of return and this article aims to help candidates
with this topic.
Candidates should note that accounting rate of return can not only be examined
within the FFM syllabus, but also the F9 syllabus.
DEFINITION
The accounting rate of return, also known as the return on investment, gives the
annual accounting profits arising from an investment as a percentage of the
investment made.
As we can see from this, the accounting rate of return, unlike investment
appraisal methods such as net present value, considers profits, not cash flows.
This is a vital point that many candidates forget in the exam.
Calculation
The formula for the accounting rate of return is (average annual accounting
profits/investment) x 100%
We need the average annual accounting profit. To find this, the profit for the
whole project needs to be calculated, which is then divided by the number of
years for which the project is running (in this case five years).
Considering the profit for the project, let us draw up a simple profit and loss
statement for the whole project:
[The profit and loss table is not reproduced here; the profit for the whole
project totals $40,000.]
Next we need to convert this profit for the whole project into an average figure,
so dividing by five years gives us $8,000 ($40,000/5).
Now we have the numerator, we need to consider the denominator, i.e. the
investment figure.
The investment figure can be either the initial investment or the average
investment over the life of the project.
So in this case:
This approach should be used for any accounting rate of return calculation, no
matter how easy or difficult:
1. Calculate the profit for the whole project. Include not only cash revenue and
cash costs, but also other costs such as depreciation, amortisation etc.
2. Calculate the average annual profit, by dividing the profit over the whole
project by the life of the project.
3. Divide the average annual profit by the investment figure and express the
result as a percentage.
Usefulness
Having calculated the percentage answer, how can this be used for project
appraisal?
The accounting rate of return percentage needs to be compared to a target set
by the organisation. If the accounting rate of return is greater than the target,
accept the project; if it is less, reject the project.
How is the target set? Should it be 25%, or 30%? The accounting rate of return
has several drawbacks:
· The target set could be arbitrary.
· Which calculation method should be used? If, in the above example, the
target was 25%, the project would be rejected under one calculation method
but accepted under the other, so changing the calculation method can
change the decision as to whether the project should be accepted or
rejected.
· The timing of the cash flows is not considered. In our example, the biggest
cash flow arises in year five, but by then, the organisation may have ceased
trading due to liquidity issues in years three and four, when only $5,000 cash
is being received in each year.
There are, however, some positive aspects to the accounting rate of return:
CONCLUSION
BAILOUT PAYBACK
In accounting, the bailout payback period shows the length of time required to
repay the total initial investment through investment cash flows combined with
salvage value. The shorter the payback period, the more attractive the
investment.
Example: a company invested $20,000 for a project and expected $5,000 cash
flow annually.
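A minimal sketch of the calculation: the simple payback for the example above is $20,000/$5,000 = 4 years, while a bailout payback also credits each year's salvage value. The salvage figures below are hypothetical, since the example does not give any.

```python
def payback_period(investment, annual_cash_flow):
    """Simple payback for level annual cash flows."""
    return investment / annual_cash_flow

def bailout_payback(investment, annual_cash_flow, salvage_by_year):
    """First year in which cumulative cash flows plus that year's
    salvage value cover the initial investment."""
    cumulative = 0.0
    for year, salvage in enumerate(salvage_by_year, start=1):
        cumulative += annual_cash_flow
        if cumulative + salvage >= investment:
            return year
    return None

print(payback_period(20_000, 5_000))  # 4.0 years (the example above)
# Hypothetical declining salvage values; with them, bailout occurs in
# year 3 (15,000 cumulative + 6,000 salvage >= 20,000).
print(bailout_payback(20_000, 5_000, [13_000, 9_000, 6_000, 3_000]))  # 3
```

Because salvage value is counted, the bailout payback is never longer than the simple payback for the same project.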
PAYBACK RECIPROCAL
The payback reciprocal is a crude estimate of the rate of return for a project or
investment. The payback reciprocal is computed by dividing the digit "1" by a
project's payback period expressed in years. For example, if a project's payback
period is 4 years, the payback reciprocal is 1 divided by 4 = 0.25 = 25%.
The payback reciprocal overstates the true rate of return because it assumes
that the annual cash flows will continue forever. It also assumes that the annual
cash flows are identical in amount. Since these two conditions are unrealistic you
should avoid the use of the payback reciprocal. Instead, you should compute
the internal rate of return or the net present value because they will discount
each of the actual cash amounts to reflect the time value of money.
The payback reciprocal approximates a project's rate of return only under two
assumptions:
· Annual cash flows are uniformly even over the lifetime of the investment
· The cash flows from the project will continue forever
For example, a financial analyst is reviewing a possible investment of $50,000,
which will generate positive cash flows of $10,000 per year. The payback period
is 5 years, since cash flows of $50,000 will accumulate over the next five years.
The payback reciprocal is 1 / 5 years, or 20%. The actual internal rate of
return, however, is only about 15% if the cash flows last 10 years, and it
approaches 20% only when the cash flows cover a period of 30 years.
Since it is quite unlikely that cash flows will continue uninterrupted far into
the future, it is more realistic to evaluate a project using the net present
value method or the internal rate of return.
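The convergence described above can be checked numerically. The sketch below finds the IRR of a level annuity by bisection and confirms that, for the $50,000 investment with $10,000 annual cash flows, the IRR is roughly 15% over 10 years and only approaches the 20% payback reciprocal over 30 years.

```python
def annuity_irr(investment, cash_flow, years):
    """IRR of a level annuity, found by bisection on the NPV."""
    def npv(r):
        return sum(cash_flow / (1 + r) ** t
                   for t in range(1, years + 1)) - investment
    lo, hi = 1e-9, 1.0   # NPV is positive at lo and negative at hi here
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# $50,000 investment, $10,000 a year (payback reciprocal = 20%):
print(round(annuity_irr(50_000, 10_000, 10) * 100, 1))  # about 15.1
print(round(annuity_irr(50_000, 10_000, 30) * 100, 1))  # about 19.9
```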
NET PRESENT VALUE (NPV)
Net present value is the sum of the present values of a project's cash inflows
less the initial investment:
NPV = Σ [Ct / (1 + r)^t] - Co, summed over t = 1 to n
where
Ct = net cash inflow during the period t
Co = total initial investment costs
r = discount rate, and
t = number of time periods
Apart from the formula itself, net present value can often be calculated using
tables, spreadsheets such as Microsoft Excel or Investopedia’s own NPV
calculator.
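As one more alternative to tables and spreadsheets, the NPV summation (the sum of Ct/(1 + r)^t, less Co) is a one-liner in Python. The cash flows below are hypothetical, chosen only to illustrate the sign of the answer.

```python
def npv(rate, cash_flows, initial_investment):
    """NPV = sum over t of C_t / (1 + r)^t, minus C_o."""
    return sum(c / (1 + rate) ** t
               for t, c in enumerate(cash_flows, start=1)) - initial_investment

# Hypothetical project: $100,000 outlay, $40,000 a year for 3 years, 10% rate.
print(round(npv(0.10, [40_000, 40_000, 40_000], 100_000), 2))  # -525.92: reject
```

At a lower discount rate the same cash flows produce a positive NPV, which is the sensitivity the later NPV-profile discussion illustrates.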
Let's look at how this example fits into the formula above. The lump-sum present
value of $500,000 represents the part of the formula between the equal sign and
the minus sign. The amount the retail clothing business pays for the store
represents Co. Subtract Co from $500,000 to get the NPV: if Co is less than
$500,000, the resulting NPV is positive; if Co is more than $500,000, the NPV is
negative and the investment is not profitable.
One primary issue with gauging an investment’s profitability with NPV is that
NPV relies heavily upon multiple assumptions and estimates, so there can be
substantial room for error. Estimated factors include investment costs, discount
rate and projected returns. A project may often require unforeseen expenditures
to get off the ground or may require additional expenditure at the project’s end.
Additionally, discount rates and cash inflow estimates may not inherently account
for risk associated with the project and may assume the maximum possible cash
inflows over an investment period. This may occur as a means of artificially
increasing investor confidence. As such, these factors may need to be adjusted
to account for unexpected costs or losses or for overly optimistic cash inflow
projections.
Internal rate of return (IRR) is the interest rate at which the net present value of
all the cash flows (both positive and negative) from a project or investment equal
zero.
In general, the IRR value cannot be derived analytically.
Instead, IRR must be found by using mathematical trial-and-error to derive the
appropriate rate. However, most business calculators and spreadsheet programs
will automatically perform this function.
WHY IT MATTERS:
Also, IRR does not measure the absolute size of the investment or the return.
This means that IRR can favor investments with high rates of return even if the
dollar amount of the return is very small. For example, a $1 investment returning
$3 will have a higher IRR than a $1 million investment returning $2 million.
Another shortcoming is that IRR can be misleading when the investment
generates interim cash flows, because it implicitly assumes they are reinvested
at the IRR itself. Finally, IRR does not consider the cost of capital and cannot
compare projects with different durations.
Setting NPV to zero gives the defining equation:
0 = Σ [Ct / (1 + r)^t] - Co
where:
Ct = net cash inflow during the period t
Co= total initial investment costs
r = discount rate, and
t = number of time periods
To calculate IRR using the formula, one would set NPV equal to zero and solve
for the discount rate r, which is here the IRR. Because of the nature of the
formula, however, IRR cannot be calculated analytically, and must instead be
calculated either through trial-and-error or using software programmed to
calculate IRR.
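A minimal trial-and-error solver along these lines uses bisection. The cash flow series below is hypothetical, and the sketch assumes a single sign change in the flows, so exactly one IRR exists.

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection. cash_flows[0] is the time-0 outlay (negative).
    Assumes one sign change, so exactly one root lies in (lo, hi)."""
    def npv(r):
        return sum(c / (1 + r) ** t for t, c in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid   # NPV still positive: the root is at a higher rate
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: -1,000 now, then 500, 400, 300, 200.
print(round(irr([-1000, 500, 400, 300, 200]) * 100, 1))  # about 17.8
```

Spreadsheet IRR functions do essentially this, usually with a faster Newton-type iteration.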
Generally speaking, the higher a project's internal rate of return, the more
desirable it is to undertake the project. IRR is uniform for investments of varying
types and, as such, IRR can be used to rank multiple prospective projects
a firm is considering on a relatively even basis. Assuming the costs of investment
are equal among the various projects, the project with the highest IRR would
probably be considered the best and undertaken first.
IRR is sometimes referred to as the "economic rate of return" (ERR).
For example, a company may use IRR to decide whether to open a new plant
or to renovate and expand a previously existing one. While both projects are
likely to add value to the company, it is likely that one will be the more logical
decision as prescribed by IRR.
In theory, any project with an IRR greater than its cost of capital is a profitable
one, and thus it is in a company’s interest to undertake such projects. In planning
investment projects, firms will often establish a required rate of return (RRR) to
determine the minimum acceptable return percentage that the investment in
question must earn in order to be worthwhile. Any project with an IRR that
exceeds the RRR will likely be deemed a profitable one, although companies will
not necessarily pursue a project on this basis alone. Rather, they will likely
pursue projects with the highest difference between IRR and RRR, as chances
are these will be the most profitable.
A similar issue arises when using IRR to compare projects of different lengths.
For example, a project of a short duration may have a high IRR, making it appear
to be an excellent investment, but may also have a low NPV. Conversely, a
longer project may have a low IRR, earning returns slowly and steadily, but may
add a large amount of value to the company over time.
Another issue with IRR is not one strictly inherent to the metric itself, but rather to
a common misuse of IRR. People may assume that, when positive cash
flows are generated during the course of a project (not at the end), the money
will be reinvested at the project’s rate of return. This can rarely be the case.
Rather, when positive cash flows are reinvested, it will be at a rate that more
resembles the cost of capital. Miscalculating using IRR in this way may lead to
the belief that a project is more profitable than it actually is in reality. This, along
with the fact that long projects with fluctuating cash flows may have multiple
distinct IRR values, has prompted the use of another metric called modified
internal rate of return (MIRR). MIRR adjusts the IRR to correct these issues,
incorporating cost of capital as the rate at which cash flows are reinvested, and
existing as a single value. Because of MIRR’s correction of the former issue of
IRR, a project’s MIRR will often be significantly lower than the same project’s
IRR. (For more, see: Internal Rate Of Return: An Inside Look.)
PROFITABILITY INDEX
The profitability index (PI) is the ratio of the present value of a project's future
cash flows to the initial investment:
PI = PV of future cash flows / Initial investment
Assuming that the cash flow calculated does not include the investment made in
the project, a profitability index of 1 indicates breakeven. Any value lower than
one would indicate that the project's present value (PV) is less than the initial
investment. As the value of the profitability index increases, so does the financial
attractiveness of the proposed project.
For example:
Investment = $40,000
Life of the Machine = 5 Years
The equivalent annual annuity approach (EAA) is one of two methods used
in capital budgeting to compare mutually exclusive projects with unequal lives.
The equivalent annual annuity (EAA) approach calculates the constant
annual cash flow generated by a project over its lifespan if it were an annuity.
When used to compare projects with unequal lives, the one with the higher EAA
should be selected.
Often, an analyst uses a financial calculator, using the typical present value (PV)
and future value (FV) functions to find the EAA. An analyst can use the following
formula in a spreadsheet or with a normal non-financial calculator with exactly
the same results.
C = (r x NPV) / (1 - (1 + r)^-n)
Where:
C = equivalent annual annuity (cash flow per period)
NPV = net present value
r = discount rate per period
n = number of periods
For example, consider two projects. One has a seven-year term and an NPV of
$100,000. The other has a nine-year term and an NPV of $120,000. Both
projects are discounted at a 6% rate. The EAA of each project is:
EAA (seven-year project) = (0.06 x $100,000) / (1 - 1.06^-7) = $17,914
EAA (nine-year project) = (0.06 x $120,000) / (1 - 1.06^-9) = $17,643
The seven-year project has the higher EAA, so it should be selected even
though its NPV is lower.
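The EAA formula can be checked with a direct implementation; the figures below are computed from the two projects just described, and small differences from table-based answers are rounding only.

```python
def eaa(npv, rate, years):
    """Equivalent annual annuity: C = (r x NPV) / (1 - (1 + r)^-n)."""
    return (rate * npv) / (1 - (1 + rate) ** -years)

print(round(eaa(100_000, 0.06, 7), 2))   # seven-year project, about $17,914
print(round(eaa(120_000, 0.06, 9), 2))   # nine-year project, about $17,643
# The seven-year project has the higher EAA and is preferred,
# despite its lower NPV.
```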
FISHER RATE
With and Without Adjustment for Tax Rates and Risk Premiums
Irving Fisher's theory of interest rates relates the nominal interest rate i to the
rate of inflation π and the "real" interest rate r. The real interest rate r is the
interest rate after adjustment for inflation. It is the rate that lenders must
receive to be willing to loan out their funds. The relation Fisher postulated
between these three rates is:
i = r + π(1 + r)
This means that if r and π are known then i can be determined. On the other
hand, if i and π are known then r can be determined from the rearranged
relationship:
r = (i - π) / (1 + π)
The next step in the analysis is to take into account the effect of taxes on the real
rate of return. Let iC be the nominal risk-free interest rate in the country with
currency C and rC and πC be the corresponding real interest rate and expected
rate of inflation, respectively. Let tC be the corresponding tax rate on interest
income and r*C be the after-tax real rate of return. The rate of return after-taxes is
iC(1-tC). Then
r*C = [iC(1-tC)- πC] /(1+πC).
If we know r*C,tC and πC and want to determine iC the formula is:
iC = [r*C(1+πC) + πC] / (1-tC)
   = r*C/(1-tC) + (1 + r*C)πC/(1-tC).
This means that when the rate of inflation increases the nominal interest rate
increase by some multiple of the increase in the rate of inflation; i.e.,
∂iC/∂πC = (1+r*C)/(1-tC).
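The algebra above can be sanity-checked numerically: computing r*C from iC and then inverting the formula should recover the original nominal rate. The 8% nominal rate, 25% tax on interest and 3% inflation figures below are hypothetical.

```python
def after_tax_real_rate(i, t, pi):
    """r* = [i(1 - t) - pi] / (1 + pi)."""
    return (i * (1 - t) - pi) / (1 + pi)

def nominal_rate(r_star, t, pi):
    """Inverse relation: i = [r*(1 + pi) + pi] / (1 - t)."""
    return (r_star * (1 + pi) + pi) / (1 - t)

# Hypothetical inputs: 8% nominal rate, 25% tax, 3% inflation.
r_star = after_tax_real_rate(0.08, 0.25, 0.03)
print(round(r_star, 4))                             # 0.0291
print(round(nominal_rate(r_star, 0.25, 0.03), 4))   # 0.08 -- round trip
# Sensitivity of i to inflation, (1 + r*)/(1 - t):
print(round((1 + r_star) / (1 - 0.25), 4))
```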
William Crowder and Dennis Hoffman examine this relationship empirically in
their article, "The Long-Run Relationship between Nominal Interest Rates and
Inflation: The Fisher Equation Revisited".
The preceding analysis presumes that the level of risk is the same in all
countries. If countries differ in risk, lenders and investors will need a risk
premium, an increment in the interest rate, to compensate them for accepting
higher levels of risk. Let sC be the risk premium required for country C. If the
international capital market is in equilibrium the real, after-tax rates of return in
the different countries must be equal. Then rC - sC = r* for all countries and
hence, taking illustrative values of r* = 5% and tC = 40%,
∂iC/∂πC = (1 + 0.05)/(1 - 0.40) = 1.75,
so that each 1 percent increase in the expected rate of inflation gets translated
into a 1.75 percent increase in the nominal interest rate.
Thus when inflation increases by 1 percent the nominal rate will increase by
(1+r)(1+ρ) percent, which could be significantly greater than 1.0.
To take into account the tax rate on interest, the term on the left should be 1
plus the after-tax nominal interest rate, i.e. 1 + i(1-t).
In order to use the PPP principle for forecasting future exchange rates we need
the expected rate of inflation. For a country this would be determined as:
(1+π) = (1+i(1-t))/(1+r*)(1+ρ)
For two countries in financial equilibrium the values of r* would be the same.
Thus the factor required for forecasting exchange rates by the PPP principle is
given by:
(1+πF)/(1+π$) = [(1+iF(1-tF))/(1+i$(1-t$))] / [(1+ρF)/(1+ρ$)]
CROSSOVER RATE
In the same problem, you are given the cash flows of the two projects. Take the
difference of every two cash flows and input the differences as if they were a
new project. Calculate its IRR; that is your crossover rate.
Solve for IRR. It will return 13.16%, which is the rate provided by the key
answer.
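A sketch of the procedure with hypothetical cash flows (the problem's own figures, which produce the 13.16% answer, are not reproduced here):

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection (assumes one sign change in the flows)."""
    def npv(r):
        return sum(c / (1 + r) ** t for t, c in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical projects A and B with equal outlays:
a = [-500, 200, 250, 300]
b = [-500, 350, 200, 150]
diff = [x - y for x, y in zip(a, b)]   # "A minus B" incremental project
print(diff)                            # [0, -150, 50, 150]
print(round(irr(diff) * 100, 2))       # crossover rate, about 18.05%
```

At the crossover rate the two projects' NPVs are equal, which is exactly why the IRR of the difference identifies it.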
The NPV profile is a graph that illustrates a project's NPV against various
discount rates, with the NPV on the y-axis and the cost of capital on the x-axis.
To begin, simply calculate a project's NPV using different cost-of-capital
assumptions. Once these are calculated, plot the values on the graph.
Figure 11.5
Since the IRR is the discount rate where the NPV of a project equals zero, the
point where the NPV crosses the x-axis is also the project’s IRR.
Chapter objectives
This chapter is intended to provide:
· An understanding of the importance of capital budgeting in marketing decision
making
· An explanation of the different types of investment project
· An introduction to the economic evaluation of investment proposals
· The importance of the concept and calculation of net present value and internal
rate of return in decision making
· The advantages and disadvantages of the payback method as a technique for
initial screening of two or more competing projects.
Structure of the chapter
Capital budgeting is very obviously a vital activity in business. Vast sums of
money can be easily wasted if the investment turns out to be wrong or
uneconomic. The subject matter is difficult to grasp by nature of the topic
covered and also because of the mathematical content involved. However, it
seeks to build on the concept of the future value of money which may be spent
now. It does this by examining the techniques of net present value, internal rate
of return and annuities. The timing of cash flows is important in new investment
decisions and so the chapter looks at this "payback" concept. One problem
which plagues developing countries is "inflation rates" which can, in some cases,
exceed 100% per annum. The chapter ends by showing how marketers can take
this in to account.
Capital budgeting versus current expenditures
A capital investment project can be distinguished from current expenditures by
two features:
a) such projects are relatively large
b) a significant period of time (more than one year) elapses between the
investment outlay and the receipt of the benefits.
As a result, most medium-sized and large organisations have developed special
procedures and methods for dealing with these decisions. A systematic approach
to capital budgeting implies:
a) the formulation of long-term goals
b) the creative search for and identification of new investment opportunities
c) classification of projects and recognition of economically and/or statistically
dependent proposals
d) the estimation and forecasting of current and future cash flows
e) a suitable administrative framework capable of transferring the required
information to the decision level
f) the controlling of expenditures and careful monitoring of crucial aspects of
project execution
g) a set of decision rules which can differentiate acceptable from unacceptable
alternatives is required.
The last point (g) is crucial and this is the subject of later sections of the chapter.
The classification of investment projects
a) By project size
Small projects may be approved by departmental managers. More careful
analysis and Board of Directors' approval is needed for large projects of, say,
half a million dollars or more.
b) By type of benefit to the firm
· an increase in cash flow
· a decrease in risk
· an indirect benefit (showers for workers, etc).
c) By degree of dependence
· mutually exclusive projects (can execute project A or B, but not both)
· complementary projects: taking project A increases the cash flow of project B.
· substitute projects: taking project A decreases the cash flow of project B.
d) By degree of statistical dependence
· Positive dependence
· Negative dependence
· Statistical independence.
e) By type of cash flow
· Conventional cash flow: only one change in the cash flow sign
e.g. -/++++ or +/----, etc
· Non-conventional cash flows: more than one change in the cash flow sign,
e.g. +/-/+++ or -/+/-/++++, etc.
The economic evaluation of investment proposals
The analysis stipulates a decision rule for:
I) accepting or
II) rejecting
investment projects
The time value of money
Recall that the interaction of lenders with borrowers sets an equilibrium rate of
interest. Borrowing is only worthwhile if the return on the loan exceeds the cost of
the borrowed funds. Lending is only worthwhile if the return is at least equal to
that which can be obtained from alternative opportunities in the same risk class.
The interest rate received by the lender is made up of:
i) The time value of money: the receipt of money is preferred sooner rather than
later. Money can be used to earn more money. The earlier the money is
received, the greater the potential for increasing wealth. Thus, to forego the use
of money, you must get some compensation.
ii) The risk of the capital sum not being repaid. This uncertainty requires a
premium as a hedge against the risk, hence the return must be commensurate
with the risk being undertaken.
iii) Inflation: money may lose its purchasing power over time. The lender must be
compensated for the declining spending/purchasing power of money. If the
lender receives no compensation, he/she will be worse off when the loan is
repaid than at the time of lending the money.
a) Future values/compound interest
Future value (FV) is the value in dollars at some point in the future of one or
more investments.
FV consists of:
i) the original sum of money invested, and
ii) the return in the form of interest.
The general formula for computing Future Value is as follows:
FVn = Vo(1 + r)^n
where
Vo is the initial sum invested
r is the interest rate
n is the number of periods for which the investment is to receive interest.
Thus we can compute the future value of what Vo will accumulate to in n years
when it is compounded annually at rate r by using the above formula.
Now attempt exercise 6.1.
Exercise 6.1 Future values/compound interest
i) What is the future value of $10 invested at 10% at the end of 1 year?
ii) What is the future value of $10 invested at 10% at the end of 5 years?
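A one-line implementation of the formula can be used to check the exercise answers:

```python
def future_value(v0, r, n):
    """FV_n = V_0 (1 + r)^n."""
    return v0 * (1 + r) ** n

print(round(future_value(10, 0.10, 1), 2))  # i)  11.0
print(round(future_value(10, 0.10, 5), 2))  # ii) 16.11
```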
We can derive the Present Value (PV) by using the formula:
FVn = Vo(1 + r)^n
By denoting Vo by PV we obtain:
FVn = PV(1 + r)^n
By dividing both sides of the formula by (1 + r)^n we derive:
PV = FVn / (1 + r)^n
b) Net present value (NPV)
The net present value of a project is the sum of the present values of the net
cash receipts less the initial outlay:
NPV = Σ [Ct / (1 + r)^t] - Io
where:
Ct = the net cash receipt at the end of year t
Io = the initial investment outlay
r = the discount rate, and
n = the project's duration in years
Examples:
N.B. At this point the tutor should introduce the net present value tables from any
recognised published source. Do that now.
Decision rule:
If NPV is positive (+): accept the project
If NPV is negative(-): reject the project
Now attempt exercise 6.3.
Exercise 6.3 Net present value
A firm intends to invest $1,000 in a project that generates net receipts of $800,
$900 and $600 in the first, second and third years respectively. Should the firm
go ahead with the project?
Attempt the calculation without reference to net present value tables first.
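A sketch of the calculation in Python; note that the exercise does not state a discount rate, so 10% is assumed here.

```python
def npv(rate, outlay, receipts):
    """NPV = sum of Ct/(1 + r)^t minus the initial outlay Io."""
    return sum(c / (1 + rate) ** t
               for t, c in enumerate(receipts, start=1)) - outlay

# The exercise gives no discount rate; 10% is an assumption.
print(round(npv(0.10, 1_000, [800, 900, 600]), 2))  # 921.86 -> positive, accept
```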
c) Annuities
N.B. Introduce students to annuity tables from any recognised published source.
A set of cash flows that are equal in each and every period is called an annuity.
Example:
Year Cash Flow ($)
0 -800
1 400
2 400
3 400
PV = $400(0.9091) + $400(0.8264) + $400(0.7513)
= $363.64 + $330.56 + $300.52
= $994.72
NPV = $994.72 - $800.00
= $194.72
Alternatively,
PV of an annuity = $400 x (PVFA at 10% for 3 years)
= $400 (0.9091 + 0.8264 + 0.7513)
= $400 x 2.4868
= $994.72
NPV = $994.72 - $800.00
= $194.72
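The annuity calculation above can be reproduced with the same four-decimal table factors:

```python
# Three-year $400 annuity at 10%, using 4-decimal table factors as in the text.
factors = [round(1 / 1.10 ** t, 4) for t in (1, 2, 3)]
print(factors)                 # [0.9091, 0.8264, 0.7513]
pv = 400 * sum(factors)
print(round(pv, 2))            # 994.72
print(round(pv - 800, 2))      # NPV = 194.72
```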
d) Perpetuities
A perpetuity is an annuity with an infinite life. It is an equal sum of money to be
paid in each period forever.
PV of a perpetuity = C / r
where:
C is the sum to be received per period
r is the discount rate or interest rate
Example:
You are promised a perpetuity of $700 per year at a rate of interest of 15% per
annum. What price (PV) should you be willing to pay for this income?
PV = $700 / 0.15
= $4,666.67
A perpetuity with growth:
Suppose that the $700 annual income most recently received is expected to
grow by a rate G of 5% per year (compounded) forever. How much would this
income be worth when discounted at 15%?
Solution:
Subtract the growth rate from the discount rate and treat the next period's cash
flow as a perpetuity:
PV = $700(1.05) / (0.15 - 0.05)
= $735/0.10
= $7,350
e) The internal rate of return (IRR)
Refer students to the tables in any recognised published source.
· The IRR is the discount rate at which the NPV for a project equals zero. This
rate means that the present value of the cash inflows for the project would equal
the present value of its outflows.
· The IRR is the break-even discount rate.
· The IRR is found by trial and error.
The IRR satisfies:
0 = Σ [Ct / (1 + r)^t] - Io, where r = IRR
IRR of an annuity:
Divide the initial outlay by the annual cash flow to obtain the annuity (PVFA)
factor, then read off the rate from the annuity tables.
Example: an outlay of $120 returns $20 per annum for 7 years:
PVFA = $120/$20 = 6
From the tables, IRR = 4%
Economic rationale for IRR:
If IRR exceeds cost of capital, project is worthwhile, i.e. it is profitable to
undertake. Now attempt exercise 6.4
Exercise 6.4 Internal rate of return
Find the IRR of this project for a firm with a 20% cost of capital:
YEAR CASH FLOW
$
0 -10,000
1 8,000
2 6,000
a) Try 20%
b) Try 27%
c) Try 29%
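The three trials can be tabulated with a short script. NPV turns out positive at 20% and 27% but negative at 29%, so the IRR (about 27.2% by interpolation) exceeds the 20% cost of capital.

```python
def npv(rate, flows):
    """flows[0] is the time-0 outlay (negative)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(flows))

flows = [-10_000, 8_000, 6_000]
for rate in (0.20, 0.27, 0.29):   # the trial rates suggested above
    print(f"{rate:.0%}: NPV = {npv(rate, flows):,.2f}")
# Positive at 20% and 27%, negative at 29%: IRR is just above 27%,
# so the project is worthwhile at a 20% cost of capital.
```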
Net present value vs internal rate of return
Independent vs dependent projects
NPV and IRR methods are closely related because:
i) both are time-adjusted measures of profitability, and
ii) their mathematical formulas are almost identical.
So, which method leads to an optimal decision: IRR or NPV?
a) NPV vs IRR: Independent projects
Independent project: Selecting one project does not preclude the choosing of the
other.
With conventional cash flows (-|+|+) no conflict in decision arises; in this case
both NPV and IRR lead to the same accept/reject decisions.
Figure 6.1 NPV vs IRR Independent projects
If cash flows are discounted at k1, NPV is positive and IRR > k1: accept the
project.
If cash flows are discounted at k2, NPV is negative and IRR < k2: reject the
project.
Mathematical proof: for a project to be acceptable, the NPV must be positive,
i.e. the present value of its receipts must exceed its outlay.
b) NPV vs IRR: Dependent projects
Example: Agritex is considering two mutually exclusive projects:
Project A: outlay of $9,500 now, returning $11,500 in one year
Project B: outlay of $15,000 now, returning $18,000 in one year
Assume k = 10%, which project should Agritex undertake?
NPVA = $11,500/1.10 - $9,500 = $954.55
NPVB = $18,000/1.10 - $15,000 = $1,363.64
Both projects are of one-year duration:
IRRA:
$11,500 = $9,500(1 + RA)
(1 + RA) = $11,500/$9,500 = 1.21
RA = 1.21 - 1 = 0.21, therefore IRRA = 21%
IRRB:
$18,000 = $15,000(1 + RB)
(1 + RB) = $18,000/$15,000 = 1.20
RB = 1.20 - 1 = 0.20, therefore IRRB = 20%
Decision:
Assuming that k = 10%, both projects are acceptable because:
NPVA and NPVB are both positive
IRRA > k AND IRRB > k
Which project is a "better option" for Agritex?
If we use the NPV method:
NPVB ($1,363.64) > NPVA ($954.55): Agritex should choose Project B.
If we use the IRR method:
IRRA (21%) > IRRB (20%): Agritex should choose Project A. See figure 6.2.
Figure 6.2 NPV vs IRR: Dependent projects
IRRA:
PVFA factor = 1.67, therefore IRRA = 36% (from the tables)
IRRB:
PVFA factor = 2.0, therefore IRRB = 21% (from the tables)
Decision:
Conflicting, as:
· NPV prefers B to A
· IRR prefers A to B
NPV IRR
Project A $ 3,730.50 36%
Project B $17,400.00 21%
See figure 6.3.
Figure 6.3 Scale of investments
To show why:
i) the NPV prefers B, the larger project, for a discount rate below 20%
ii) the NPV is superior to the IRR
a) Use the incremental cash flow approach, "B minus A" approach
b) Choosing project B is tantamount to choosing a hypothetical project "B minus
A".
Pattern of cash flows:
YEAR             0        1        2
Project A     -100       20      120
Project B     -100      100    31.25
"A minus B"      0      -80    88.75
Assume k = 10%
NPV IRR
Project A 17.3 20.0%
Project B 16.7 25.0%
"A minus B" 0.6 10.9%
IRR prefers B to A even though both projects have identical initial outlays. So,
the decision is to accept A, that is B + (A - B) = A. See figure 6.4.
Figure 6.4 Timing of the cash flow
Decision rule:
PI > 1; accept the project
PI < 1; reject the project
If NPV = 0, we have:
NPV = PV - Io = 0
PV = Io
Dividing both sides by Io we get:
PV/Io = PI = 1
             PV of CF        Io        PI
Project A         100        50       2.0
Project B       1,500     1,000       1.5
Decision:
Choose option B because it maximises the firm's profitability by $1,500.
Disadvantage of PI:
Like IRR it is a percentage and therefore ignores the scale of investment.
The payback period (PP)
The CIMA defines payback as 'the time it takes the cash inflows from a capital
investment project to equal the cash outflows, usually expressed in years'. When
deciding between two or more competing projects, the usual decision is to accept
the one with the shortest payback.
Payback is often used as a "first screening method". By this, we mean that when
a capital investment project is being considered, the first question to ask is: 'How
long will it take to pay back its cost?' The company might have a target payback,
and so it would reject a capital project unless its payback period were less than a
certain number of years.
Example 1:
Years            0          1        2        3        4        5
Project A  (1,000,000)  250,000  250,000  250,000  250,000  250,000
For a project with equal annual receipts:
Payback period = initial investment / annual cash inflow
= $1,000,000 / $250,000
= 4 years
Example 2:
Years 0 1 2 3 4
Project B - 10,000 5,000 2,500 4,000 1,000
Payback period lies between year 2 and year 3. Sum of money recovered by the
end of the second year
= $7,500, i.e. ($5,000 + $2,500)
Sum of money to be recovered during the 3rd year
= $10,000 - $7,500
= $2,500
Payback period = 2 years + ($2,500/$4,000)
= 2.625 years
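Both examples can be reproduced with a small helper that interpolates within the recovery year:

```python
def payback(investment, inflows):
    """Payback period in years, interpolating within the recovery year."""
    cumulative = 0.0
    for year, cash in enumerate(inflows, start=1):
        if cumulative + cash >= investment:
            return (year - 1) + (investment - cumulative) / cash
        cumulative += cash
    return None   # investment is never recovered

print(payback(1_000_000, [250_000] * 5))              # 4.0   (Example 1)
print(payback(10_000, [5_000, 2_500, 4_000, 1_000]))  # 2.625 (Example 2)
```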
Disadvantages of the payback method:
· It ignores the timing of cash flows within the payback period, the cash flows
after the end of payback period and therefore the total project return.
· It ignores the time value of money. This means that it does not take into
account the fact that $1 today is worth more than $1 in one year's time. An
investor who has $1 today can either consume it immediately or alternatively can
invest it at the prevailing interest rate, say 30%, to get a return of $1.30 in a
year's time.
· It is unable to distinguish between projects with the same payback period.
· It may lead to excessive investment in short-term projects.
Advantages of the payback method:
· Payback can be important: long payback means capital tied up and high
investment risk. The method also has the advantage that it involves a quick,
simple calculation and an easily understood concept.
The accounting rate of return - (ARR)
The ARR method (also called the return on capital employed (ROCE) or the
return on investment (ROI) method) of appraising a capital project is to estimate
the accounting rate of return that the project should yield. If it exceeds a target
rate of return, the project will be undertaken.
Disadvantages:
· It does not take account of the timing of the profits from an investment.
· It implicitly assumes stable cash receipts over time.
· It is based on accounting profits and not cash flows. Accounting profits are
subject to a number of different accounting treatments.
· It is a relative measure rather than an absolute measure and hence takes no
account of the size of the investment.
· It takes no account of the length of the project.
· It ignores the time value of money.
The payback and ARR methods in practice
Despite the limitations of the payback method, it is the method most widely used
in practice. There are a number of reasons for this:
· It is a particularly useful approach for ranking projects where a firm faces
liquidity constraints and requires fast repayment of investments.
· It is appropriate in situations where risky investments are made in uncertain
markets that are subject to fast design and product changes or where future cash
flows are particularly difficult to predict.
· The method is often used in conjunction with NPV or IRR method and acts as a
first screening device to identify projects which are worthy of further investigation.
· It is easily understood by all levels of management.
· It provides an important summary method: how quickly will the initial investment
be recouped?
Now attempt exercise 6.5.
Exercise 6.5 Payback and ARR
Delta Corporation is considering two capital expenditure proposals. Both
proposals are for similar products and both are expected to operate for four
years. Only one proposal can be accepted.
The following information is available:
                               Proposal A    Proposal B
                                    $             $
Initial investment               46,000        46,000
Profit/(loss):
  Year 1                          6,500         4,500
  Year 2                          3,500         2,500
  Year 3                         13,500         4,500
  Year 4                   loss   1,500  profit 14,500
Estimated scrap value
at the end of Year 4              4,000         4,000
Depreciation is charged on the straight line basis.
Problem:
a) Calculate the following for both proposals:
Example: Keymer Farm requires a money rate of return of 40% when inflation is
running at 30% a year. It can invest $10,000 now to receive $14,000 in one
year's time. Deflating the receipt to 1 January price levels:
$14,000/1.30 = $10,769
In terms of the value of the dollar at 1 January, Keymer Farm would make a profit
of $769 which represents a rate of return of 7.69% in "today's money" terms. This
is known as the real rate of return. The required rate of 40% is a money rate of
return (sometimes known as a nominal rate of return). The money rate measures
the return in terms of the dollar, which is falling in value. The real rate measures
the return in constant price level terms.
The two rates of return and the inflation rate are linked by the equation:
(1 + money rate) = (1 + real rate) x (1 + inflation rate)
where all the rates are expressed as proportions.
In the example,
(1 + 0.40) = (1 + 0.0769) x (1 + 0.3)
= 1.40
TIME   CASH FLOW   DISCOUNT FACTOR   PV
           $            40%           $
0      (100,000)      1.000      (100,000)
1        90,000       0.714        64,260
2        80,000       0.510        40,800
3        70,000       0.364        25,480
NPV                                 30,540
The project has a positive net present value of $30,540, so Keymer Farm should
go ahead with the project.
The future cash flows can be re-expressed in terms of the value of the dollar at
time 0 as follows, given inflation at 30% a year:
TIME   ACTUAL CASH FLOW   CASH FLOW AT TIME 0 PRICE LEVEL
              $                        $
0         (100,000)                (100,000)
1           90,000                   69,231
2           80,000                   47,337
3           70,000                   31,862
The cash flows expressed in terms of the value of the dollar at time 0 can now
be discounted at the real rate of 7.69%.
TIME   CASH FLOW   DISCOUNT FACTOR   PV
           $           7.69%          $
0      (100,000)      1.000      (100,000)
1        69,231       0.928        64,246
2        47,337       0.862        40,804
3        31,862       0.800        25,490
NPV                                 30,540
The NPV is the same as before.
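The equivalence of the two approaches is easy to verify numerically. Note that with exact discount factors (rather than three-decimal table factors) both routes give about $30,612; a figure of $30,540 would simply reflect table rounding.

```python
money_rate, inflation = 0.40, 0.30
real_rate = (1 + money_rate) / (1 + inflation) - 1   # 7.69% (0.0769...)
flows = [-100_000, 90_000, 80_000, 70_000]           # actual money cash flows

# Route 1: discount money cash flows at the money rate.
npv_money = sum(c / (1 + money_rate) ** t for t, c in enumerate(flows))
# Route 2: deflate to time-0 price levels, then discount at the real rate.
deflated = [c / (1 + inflation) ** t for t, c in enumerate(flows)]
npv_real = sum(c / (1 + real_rate) ** t for t, c in enumerate(deflated))

print(round(real_rate, 4))                # 0.0769
print(round(npv_money), round(npv_real))  # both about 30,612
```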
Expectations of inflation and the effects of inflation
When a manager evaluates a project, or when a shareholder evaluates his/her
investments, he/she can only guess what the rate of inflation will be. These
guesses will probably be wrong, at least to some extent, as it is extremely difficult
to forecast the rate of inflation accurately. The only way in which uncertainty
about inflation can be allowed for in project evaluation is by risk and uncertainty
analysis.
Inflation may be general, that is, affecting prices of all kinds, or specific to
particular prices. Generalised inflation has the following effects:
a) Inflation will mean higher costs and higher selling prices. It is difficult to predict
the effect of higher selling prices on demand. A company that raises its prices by
30%, because the general rate of inflation is 30%, might suffer a serious fall in
demand.
b) Inflation, as it affects financing needs, is also going to affect gearing, and so
the cost of capital.
c) Since fixed assets and stocks will increase in money value, the same
quantities of assets must be financed by increasing amounts of capital. If the
future rate of inflation can be predicted with some degree of accuracy,
management can work out how much extra finance the company will need and
take steps to obtain it, e.g. by increasing retention of earnings, or borrowing.
However, if the future rate of inflation cannot be predicted with a certain amount
of accuracy, then management should estimate what it will be and make plans to
obtain the extra finance accordingly. Provisions should also be made to have
access to 'contingency funds' should the rate of inflation exceed expectations,
e.g. a higher bank overdraft facility might be arranged should the need arise.
Many different proposals have been made for accounting for inflation. Two
systems known as "Current purchasing power" (CPP) and "Current cost
accounting" (CCA) have been suggested.
CPP is a system of accounting which makes adjustments to income and capital
values to allow for the general rate of price inflation.
CCA is a system which takes account of specific price inflation (i.e. changes in
the prices of specific assets or groups of assets), but not of general price
inflation. It involves adjusting accounts to reflect the current values of assets
owned and used.
At present, there is very little measure of agreement as to the best approach to
the problem of 'accounting for inflation'. Both these approaches are still being
debated by the accountancy bodies.
Now attempt exercise 6.6.
Exercise 6.6 Inflation
TA Holdings is considering whether to invest in a new product with a product life
of four years. The cost of the fixed asset investment would be $3,000,000 in
total, with $1,500,000 payable at once and the rest after one year. A further
investment of $600,000 in working capital would be required.
The management of TA Holdings expect all their investments to justify
themselves financially within four years, after which the fixed asset is expected to
be sold for $600,000.
The new venture will incur fixed costs of $1,040,000 in the first year, including
depreciation of $400,000. These costs, excluding depreciation, are expected to
rise by 10% each year because of inflation. The unit selling price and unit
variable cost are $24 and $12 respectively in the first year and expected yearly
increases because of inflation are 8% and 14% respectively. Annual sales are
estimated to be 175,000 units.
TA Holdings' money cost of capital is 28%.
Is the product worth investing in?
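One way to lay out the computation is sketched below. Two assumptions are mine and should be checked against the model answer: the $600,000 working capital is recovered in full at the end of year four, and depreciation is excluded from fixed costs because it is not a cash flow.

```python
# Exercise 6.6 set-up sketch -- assumptions: working capital of $600,000 is
# recovered at the end of year 4, and depreciation ($400,000) is excluded
# from fixed costs because it is not a cash cost.

units = 175_000
price, var_cost = 24.0, 12.0            # year-1 unit figures
price_inf, vc_inf = 0.08, 0.14          # yearly inflation on price / variable cost
fixed_cash = 1_040_000 - 400_000        # year-1 fixed costs excluding depreciation
fixed_inf = 0.10
money_rate = 0.28

flows = {0: -1_500_000 - 600_000}       # first asset instalment + working capital
for year in (1, 2, 3, 4):
    contribution = units * (price * (1 + price_inf) ** (year - 1)
                            - var_cost * (1 + vc_inf) ** (year - 1))
    flows[year] = contribution - fixed_cash * (1 + fixed_inf) ** (year - 1)
flows[1] -= 1_500_000                   # second asset instalment
flows[4] += 600_000 + 600_000           # scrap value + working capital recovered

npv = sum(cf / (1 + money_rate) ** t for t, cf in flows.items())
print(round(npv))                       # positive, so the product looks worth it
```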
INDEPENDENT PROJECTS
Project Screening
Project Ranking
Capital Rationing
Capital Rationing is the process of selecting that mix of acceptable projects that
provides the highest overall net present value (NPV). The profitability index is
widely used in ranking projects competing for limited funds.
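A minimal sketch of ranking by profitability index (PV of inflows divided by initial investment) under a budget constraint; the project figures are invented for illustration:

```python
# Capital rationing sketch: rank projects by profitability index and fund
# them in order until the budget runs out. Project data invented.

projects = [
    # (name, initial investment, PV of future inflows)
    ("A", 100_000, 130_000),
    ("B", 150_000, 180_000),
    ("C", 50_000, 70_000),
]
budget = 200_000

ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
chosen, spent = [], 0
for name, cost, pv in ranked:
    if spent + cost <= budget:
        chosen.append(name)
        spent += cost

print(chosen)  # ['C', 'A'] -- the highest-PI projects that fit the budget
```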
CAPITAL BUDGETING      INVESTMENT PROJECT
PROCEDURES             INDEPENDENT                      MUTUALLY EXCLUSIVE
Project Screening      Payback Period, ARR, NPV, IRR    ?
Project Ranking        Profitability Index, NPV         Profitability Index, NPV
Capital Rationing      ?                                ?
Sensitivity analysis can be incorporated into DCF analysis by examining how the
DCF of each project changes with changes in the inputs used. These could
include changes in revenue assumptions, cost assumptions, tax rate
assumptions, and discount rates.
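The idea can be sketched by recomputing NPV while varying one input at a time; the cash flows and rates below are invented for illustration:

```python
# Sensitivity analysis sketch: flex one input at a time and recompute NPV.

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

base_flows = [-100_000, 40_000, 40_000, 40_000, 40_000]

for label, rate, rev_factor in [
    ("base case",          0.10, 1.00),
    ("discount rate +2%",  0.12, 1.00),
    ("revenues -10%",      0.10, 0.90),
]:
    flows = [base_flows[0]] + [cf * rev_factor for cf in base_flows[1:]]
    print(f"{label}: {npv(rate, flows):,.0f}")
```

Comparing the three NPVs shows which input the decision is most sensitive to.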
The risk-free rate of return is the theoretical rate of return of an investment with zero
risk. The risk-free rate represents the interest an investor would expect from an
absolutely risk-free investment over a specified period of time.
In theory, the risk-free rate is the minimum return an investor expects for any
investment because he will not accept additional risk unless the potential rate of
return is greater than the risk-free rate.
In practice, however, the risk-free rate does not exist because even the safest
investments carry a very small amount of risk. Thus, the interest rate on a three-
month U.S. Treasury bill is often used as the risk-free rate for U.S.-based investors.
Determination of a proxy for the risk-free rate of return for a given situation must
consider the investor's home market, while negative interest rates can complicate the
issue.
Currency Risk
The three-month U.S. Treasury bill is a useful proxy because the market considers
there to be virtually no chance of the government defaulting on its obligations. The
large size and deep liquidity of the market contribute to the perception of safety.
However, a foreign investor whose assets are not denominated in dollars
incurs currency risk when investing in U.S. Treasury bills. The risk can be hedged
via currency forwards and/or options but impacts the rate of return.
The short-term government bills of other highly rated countries, such as Germany and
Switzerland, offer a risk-free rate proxy for investors with assets in euros or Swiss
francs. Investors based in less highly rated countries that are within the eurozone,
such as Portugal and Greece, are able to invest in German bonds without incurring
currency risk. By contrast, an investor with assets in Russian rubles cannot invest in a
highly rated government bond without incurring currency risk.
Negative Interest Rates
Businesses face all kinds of risks, some of which can cause serious loss of
profits or even bankruptcy. But while all large companies have extensive "risk
management" departments, smaller businesses tend not to look at the issue in
such a systematic way.
So in this four-part series of tutorials, you’ll learn the basics of risk management
and how you can apply them in your business.
In this first tutorial, we’ll look at the main types of risk your business may face.
You’ll get a rundown of strategic risk, compliance risk, operational risk, financial
risk, and reputational risk, so that you understand what they mean, and how they
could affect your business. Then we’ll get into the specifics of identifying and
dealing with these risks in later tutorials in the series.
1. Strategic Risk
Strategic risk is the risk that your company's strategy becomes less effective
and your company struggles to reach its goals as a result. It could be
due to technological changes, a powerful new competitor entering the market,
shifts in customer demand, spikes in the costs of raw materials, or any number of
other large-scale changes.
History is littered with examples of companies that faced strategic risk. Some
managed to adapt successfully; others didn’t.
A classic example is Kodak, which had such a dominant position in the film
photography market that when one of its own engineers invented a digital
camera in 1975, it saw the innovation as a threat to its core business model, and
failed to develop it.
It’s easy to say with hindsight, of course, but if Kodak had analyzed the strategic
risk more carefully, it would have concluded that someone else would start
producing digital cameras eventually, so it was better for Kodak to cannibalize its
own business than for another company to do it.
Failure to adapt to a strategic risk led to bankruptcy for Kodak. It’s now emerged
from bankruptcy as a much smaller company focusing on corporate imaging
solutions, but if it had made that shift sooner, it could have preserved its
dominance.
2. Compliance Risk
Are you complying with all the necessary laws and regulations that apply to your
business?
Of course you are (I hope!). But laws change all the time, and there’s always a
risk that you’ll face additional regulations in the future. And as your own business
expands, you might find yourself needing to comply with new rules that didn’t
apply to you before.
For example, let’s say you run an organic farm in California, and sell your
products in grocery stores across the U.S. Things are going so well that you
decide to expand to Europe and begin selling there.
That’s great, but you’re also incurring significant compliance risk. European
countries have their own food safety rules, labeling rules, and a whole lot more.
And if you set up a European subsidiary to handle it all, you’ll need to comply
with local accounting and tax rules. Meeting all those extra regulatory
requirements could end up being a significant cost for your business.
Even if your business doesn’t expand geographically, you can still incur new
compliance risk just by expanding your product line. Let’s say your California
farm starts producing wine in addition to food. Selling alcohol opens you up to a
whole raft of new, potentially costly regulations.
And finally, even if your business remains unchanged, you could get hit with new
rules at any time. Perhaps a new data protection rule requires you to beef up
your website’s security, for example. Or employee safety regulations mean you
need to invest in new, safer equipment in your factory. Or perhaps you’ve
unwittingly been breaking a rule, and have to pay a fine. All of these things
involve costs, and present a compliance risk to your business.
In extreme cases, a compliance risk can also affect your business’s future,
becoming a strategic risk too. Think of tobacco companies facing new advertising
restrictions, for example, or the late-1990s online music-sharing services that
were sued for copyright infringement and were unable to stay in business. We’re
breaking these risks into different categories, but they often overlap.
3. Operational Risk
So far, we’ve been looking at risks stemming from external events. But your own
company is also a source of risk.
In some cases, operational risk has more than one cause. For example, consider
the risk that one of your employees writes the wrong amount on a check, paying
out $100,000 instead of $10,000 from your account.
That’s a “people” failure, but also a “process” failure. It could have been
prevented by having a more secure payment process, for example having a
second member of staff authorize every major payment, or using an electronic
system that would flag unusual amounts for review.
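The flagging control described above can be sketched as follows (the threshold and role names are invented for illustration):

```python
# Process-control sketch: payments above a threshold need a second
# authoriser, so a mistyped $100,000 cannot go out on one signature.
# The threshold and names are invented for illustration.

REVIEW_THRESHOLD = 50_000

def payment_status(amount, second_authoriser=None):
    if amount > REVIEW_THRESHOLD and second_authoriser is None:
        return "held for review"
    return "released"

print(payment_status(10_000))                  # released
print(payment_status(100_000))                 # held for review
print(payment_status(100_000, "ops manager"))  # released
```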
In some cases, operational risk can also stem from events outside your control,
such as a natural disaster, or a power cut, or a problem with your website host.
Anything that interrupts your company’s core operations comes under the
category of operational risk.
While the events themselves can seem quite small compared with the large
strategic risks we talked about earlier, operational risks can still have a big
impact on your company. Not only is there the cost of fixing the problem, but
operational issues can also prevent customer orders from being delivered or
make it impossible to contact you, resulting in a loss of revenue and damage to
your reputation.
4. Financial Risk
Most categories of risk have a financial impact, in terms of extra costs or lost
revenue. But the category of financial risk refers specifically to the money flowing
in and out of your business, and the possibility of a sudden financial loss.
For example, let’s say that a large proportion of your revenue comes from a
single large client, and you extend 60 days credit to that client (for more on
extending credit and dealing with cash flow, see our earlier cash flow tutorial).
In that case, you have a significant financial risk. If that customer is unable to
pay, or delays payment for whatever reason, then your business is in big trouble.
Having a lot of debt also increases your financial risk, particularly if a lot of it is
short-term debt that’s due in the near future. And what if interest rates suddenly
go up, and instead of paying 8% on the loan, you’re now paying 15%? That’s a
big extra cost for your business, and so it’s counted as a financial risk.
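The extra cost is easy to put in dollar terms; the loan amount below is invented for illustration:

```python
# Interest-rate risk in dollars, on an illustrative $500,000 loan.
loan = 500_000  # invented figure

cost_before = loan * 0.08  # 40,000.0 a year at 8%
cost_after = loan * 0.15   # 75,000.0 a year at 15%
print(cost_after - cost_before)  # 35000.0 extra interest per year
```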
5. Reputational Risk
There are many different kinds of business, but they all have one thing in
common: no matter which industry you’re in, your reputation is everything.
Reputational risk can take the form of a major lawsuit, an embarrassing product
recall, negative publicity about you or your staff, or high-profile criticism of your
products or services. And these days, it doesn’t even take a major event to
cause reputational damage; it could be a slow death by a thousand negative
tweets and online product reviews.
Next Steps
So now you know about the main risks your business could face. We’ve covered
five types of business risk, and given examples of how they can affect your
business.
This is the foundation of a risk management strategy for your business, but of
course there’s much more work to be done. The next step is to look more deeply
at each type of risk, and identify specific things that could go wrong, and the
impact they could have.
It’s not much use, for example, to say, “Our business is subject to operational
risk.” You need to get very granular, and go through every aspect of your
operations to come up with specific things that could go wrong. Then you can
come up with a strategy for dealing with those risks.
BUSINESS / OPERATING
One area that may involve operational risk is the maintenance of necessary
systems and equipment. If two maintenance activities are required, but it is
determined only one can be afforded at the time, making the choice to perform
one over the other alters the operational risk depending on which system is left in
disrepair. If a system fails, the negative impact is associated directly with the
operational risk.
Other areas that qualify as operational risk tend to involve the human element
within the organization. If a sales-oriented business chooses to maintain a
subpar sales staff, due to its lower salary costs or any other factor, this is
considered an operational risk. The same can be said for failing to properly staff
to avoid certain risks. In manufacturing, choosing not to have a qualified
mechanic on staff, and having to rely on third parties for that work, can be
classified as an operational risk. Not only does this impact a system's operation,
it also involves additional time delays as it relates to the third party.
When you own or manage a business, there's always a risk of loss or failure.
Your decisions can affect how much risk your company faces, whether it's a
financial risk, the risk of adopting a bad business strategy or the risk of your
employees making mistakes. Business analysts have divided the risks
companies face into subcategories, two of which are operational risk and
business risk.
Business Risk
Business risk is the risk that results from your decisions about the products and
services you offer. When you decide to develop and market a particular product,
there's a risk that the product won't work as well as you hoped or that your
marketing campaign will fail. Other business risks include changes in the cost of
raw materials or shipping and managing technological developments that affect
sales or manufacturing.
Operational Risk
Operational risks exist in the way your company tries to carry out your decisions.
Even if you decide on the right product to manufacture, weaknesses in your
supply chain, outdated manufacturing equipment or a poor sales force can make
it impossible to generate the profits you anticipate. A risk-management strategy
that focuses on management decisions and ignores how the staff operates can
leave you with a dangerously high risk level. If your IT department doesn't
maintain Internet security, for example, one hacking incident could cost you vital
corporate information or customers' credit card numbers.
There's rarely a 100 percent safe path in the business world. Developing a new
product or moving into a new market carries a risk of losing money, but not
expanding or growing can be just as risky, allowing more daring competitors to
gain market share. When weighing alternatives, look at the probability of
business risk from each choice and the consequences if the worst happens.
Then you have to balance the chance of success against the loss to your
company if you fail.
Strategic business decisions may seem full of risk, but lower-level operational
risks can be a bigger challenge, as there are so many points where your
operations can go off the rails. What you can do is make sure there are control
systems in place to keep your staff following the right procedures. Other
protective steps include insurance and having a contingency plan in place. If your
equipment breaks down, for instance, having a plan to keep operating until
insurance covers the losses could be vital.
Operational risk has also been defined as: ‘The risk of loss resulting from
inadequate or failed internal processes, people and systems, or from external
events.’ (Basel Committee on Banking Supervision, 2004)
Risk management is: ‘A process of understanding and managing the risks that
the entity is inevitably subject to in attempting to achieve its corporate objectives.
For management purposes, risks are usually divided into categories such as
operational, financial, legal compliance, information and personnel. One example
of an integrated solution to risk management is enterprise risk management.’
(CIMA Official Terminology, 2005)
A risk must first be identified before it can be managed. There are a number of
different techniques that can be used to identify risk. A common method used in
risk identification is the use of workshops to 'brainstorm'. This can be used
at different levels of the organisation and can identify a large number of
risks in a short time. To keep ideas flowing, it is important to keep
identification sessions focused on identifying risks and not to move on to
evaluating them.
Operational risks are largely based on procedures and processes, so this lends
itself to the use of audit for risk identification purposes. Risk-based audit
can be used as a tool to identify risks, as well as a method of reporting to
the board on the effectiveness of the organisation's risk management
framework. Risk-based audit can use the following methods to assess risks:
• intuitive or judgemental assessment
• risk assessment matrix
• risk ranking.
Another approach to identifying operational risk is to look for critical
dependencies in people, processes, systems and external structures. Once
identified, the dependencies can be managed or engineered by adding fail-safes
and system redundancies. Other approaches include physical inspection and
incident investigation. Once risks have been identified and suitably
categorised, it becomes possible to think of tools that may be used to measure
and manage them.
Risk assessment and measurement
Various methods may be used to assess the severity of each risk once it has
been identified. One of the reasons for measuring risk is that it allows the
most significant risks to be prioritised. The result or impact of a risk
occurring may be financial loss, damage to reputation, process change or a
combination of these. One of the simplest ways to measure risks is to apply an
impact and likelihood matrix which provides an overall risk rating (adapted
from Emergency Preparedness, Guidance on part 1 of the Civil Contingencies Act
2004).
One of the issues with measuring risk is that risks may be objective or
subjective. Many risks are subjective and qualitative, rather than objectively
identifiable and measurable. For example, the risks of litigation, economic
downturn, loss of key employees, natural disasters and loss of reputation are
all subjective judgements. There is an important distinction between
objective, measurable risks and subjective, perceived risks. Some of the
factors that influence this distinction are:
• how recently the risk has occurred
• how visible the risk is
• how management perceives the risk
• how the organisation establishes formal or informal ways of dealing with the
risk.
The analysis can be either quantitative or qualitative, but it should allow
for comparison and trend analysis. One of the issues with risk assessment is
that traditional techniques often focus on those elements that can be
quantified easily. Such techniques fail to address all the critical drivers of
successful risk management.
Impact
When considering the impact of operational risk, there are three primary areas
that affect the business activity:
Property exposures – these relate to the physical assets belonging to or
entrusted to the business.
Personnel exposures – these relate to the risks faced by all those who work
for and with the business, including customers, suppliers and contractors.
Financial exposures – these relate to all aspects of the company's ability to
trade, whether profitable or not, and cover internal and external exposures of
all types. Financial exposures also include intellectual property, goodwill
and patents.
Managing operational risks
Risk evaluation is used to make decisions about the significance of the risks
to the organisation and whether each specific risk should be accepted or
treated.
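The impact and likelihood matrix described in the CIMA material above can be sketched as a simple lookup; the 1-5 scales and rating bands are my own illustration:

```python
# Impact x likelihood matrix sketch: each risk is scored 1 (low) to 5
# (high) on both axes; the product maps to an overall rating band.
# Scales and band cut-offs are illustrative assumptions.

def risk_rating(impact, likelihood):
    score = impact * likelihood
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_rating(impact=5, likelihood=4))  # high
print(risk_rating(impact=3, likelihood=3))  # medium
print(risk_rating(impact=2, likelihood=2))  # low
```

Ranking risks by this score is what lets the most significant ones be prioritised for treatment.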
Risk.net staff
@riskdotnet
23 Jan 2017
In a series of interviews that took place in November and December
2016, Risk.net spoke to chief risk officers, heads of operational risk and other op
risk practitioners at financial services firms, including banks, insurers and asset
managers. Based on the op risk concerns most frequently selected by those
practitioners, we present our ranking of the top 10 operational risks for 2017.
#1 Cyber risk and data security | #2 Regulation | #3 Outsourcing | #4 Geopolitical
risk | #5 Conduct risk | #6 Organisational change | #7 IT failure | #8 AML, CTF
and sanctions compliance | #9 Fraud | #10 Physical attack
#1: Cyber risk and data security
Firms would do better to spend more time defining their risk appetite instead
of trying to ensure their systems are impenetrable, practitioners counsel.
Industry view
Rajat Baijal, head of enterprise risk at BGC and Cantor Fitzgerald:
"Cyber risk will stay pertinent for a while. What I find quite fascinating about
cyber risk is the sheer pace of change: recent events suggest that the hackers
are one step ahead of the banks in this rapidly evolving space. Given the
uncertainties, firms may choose to strike a balance between actively managing
the risk by investing in suitable resource and infrastructure, and accepting or
transferring the risk by buying a suitable insurance policy for example. This
balance between managing and accepting and transferring the risk will vary
across firms, and should be a key part of defining the firm's risk appetite."
Stephanie Snyder, senior vice president, Aon professional risk solutions:
"We talk about the evolving nature of cyber risk, which is only going to increase
with the Internet of Things and additional automation. I believe that, as we move
into 2017, we're going to start seeing more cyber-related business interruption
losses; you're not going to read about them in the press, but every organisation
that runs off of a technology infrastructure – which is, really, every organisation –
is going to be impacted."
Jonathan Wyatt, global lead of IT governance and risk management, Protiviti:
"What a cyber strategy should really be doing is not trying to prevent the attack –
because that is very difficult – but trying to manage the outcome. The problem
we have with cyber is most people in financial services are not doing it this way.
They're not stepping back and thinking about outcomes, risk appetite and what
they do; they're throwing money at it, trying to make the door more secure – but
there are still plenty of people who know how to open the door. When you get
techies talking to board executives about threats, vulnerabilities, weaknesses,
the dialogue breaks down."
#2: Regulation
To many op risk practitioners, the landmark regulations of the post-crisis era –
the overhaul of the capital adequacy framework, widespread market structure
reforms, far-reaching changes to accounting practices – represent a laundry list
of potential operational risks for their institution.
Fines and penalties for noncompliance, the restructuring of desks and operations
and the shuttering of businesses all present complex and hard-to-model threats.
In the US, the Dodd-Frank Act alone – irrespective of President Trump's promise
to expunge it – has produced thousands of pages of rulemakings from prudential
and markets regulators, covering everything from stress testing to clearing, trade
execution to hedge fund reporting.
#3: Outsourcing
Outsourcing makes it into our top three operational risks this year, spurred by a
clear message from regulators that firms must improve oversight of third-party
risk management, or else face punitive sanctions.
Aviva provided one of the highest-profile examples of last year. In October 2016,
the firm was hit with an £8.2 million fine from the UK Financial Conduct Authority
for failure to ensure adequate controls and oversight of outsourced client money
handling arrangements.
The size of the penalty, combined with the undesirable publicity the case
attracted, caused alarm for many op risk practitioners, and emphasised that
regulators are actively hunting for breaches.
Under the EU's forthcoming GDPR legislation (see Cyber segment), financial
organisations must review their existing outsourcing arrangements to ensure
they don't face eye-watering fines – even if the failures are those of third-party
service providers.
GDPR compliance will represent a significant burden, managers say. Banks will
need to know exactly where their customer data is held at all times, and be able
to present this data on demand in a portable format. That will require a thorough
understanding of a complex web of relationships with various outsourcers,
practitioners say.
Industry view
Steve Holt, financial services partner, EY:
"Many companies are only worried about the top 10% of outsourced
arrangements – the ones that they spend most money on. That's not necessarily
reflective of their risk profile; you may be spending millions with a global
outsourcer, but it may be a small outsourcer with not-very-mature controls that's
holding some key customer personal data where you suffer a loss... In many
cases, outsourcing providers actually outsource to other organisations, so it
becomes a massively complex ecosystem. [But] financial services firms still have
overall responsibility for ensuring that the data is controlled and secure. This is a
key requirement of the GDPR."
#4: Geopolitical risk
There is no guarantee that an arrangement of operations across countries that
makes sense at the moment will not backfire in a couple of years. To ignore
this reality and not think about possible scenarios might prove very costly for
international banks in the upcoming years.
Ariane Chapelle, director at Chapelle Consulting:
"Brexit will likely be an important cause of uncertainty, loss of business, third-
party risk, relocation risk and project management risk, caused by uncertainty
and unfamiliarity with new processes."
#7: IT failure
Compared with cyber crime, IT failure involves fewer unknown variables. For
that reason, it is perhaps perceived as more manageable by op risk
practitioners; but its
impact can be just as debilitating.
Cloud computing was flagged by many respondents to this year's survey as one
of the most important technological trends in 2017. But as well as its advantages
in terms of flexibility and cost-effectiveness, it is prone to outages, with
undesirable consequences potentially including financial losses and damaged
relationships with clients.
Amazon Web Services – now used by many banks for additional processing
capacity, as well as for data storage – experienced a disruption in services in
Sydney in June 2016, causing multiple websites and online services reliant on
the platform to shut down, affecting everything from banking services to pizza
deliveries.
At the beginning of 2016, HSBC suffered a two-day service outage during which
millions of retail customers were unable to access their accounts. That wasn't the
only IT failure to hit the bank in the last couple of years: in 2015 its electronic
payment system experienced disruptions affecting thousands of clients just
before a UK bank holiday weekend.
Industry view
Head of operational risk at a European bank:
"[The impact of IT failure] can be big, not just in terms of direct losses but also
indirect losses, like losing a lot of customers. Many banks, not in Europe but in
Asia, are already talking about cloud solution storing. I can't assess right now
how [disruptions] might affect the business, but I think in terms of mobility of
clients, this could be severe."
#9: Fraud
The threat from internal fraud can be as pernicious as that from external actors,
as Wells Fargo found out the hard way last year. Though the $187.5 million in
penalties and restitution the bank incurred for fabricating customer approval to
open checking and credit card accounts in order to meet sales targets might
barely dent its bottom line, the blow to its reputation was far more serious.
The US Office of the Comptroller of the Currency (OCC) has identified internal
control weaknesses, such as the lack of an effective audit programme, as
common deficiencies in many banks. Even though reliance on strong internal
controls has never been more critical, its supervisory examinations indicate
weakness in audit coverage and other internal controls in some banks.
"Internal and external fraud, which the OCC views as increasing, generally
results in operational losses," says Beth Dugan, deputy comptroller for
operational risk at the OCC in Washington, DC. "A strong internal control system
can help a bank avoid fraud and unintentional errors. Industry trends show that
internal control weakness can lead to increased levels of fraud related losses
and longer times for fraud identification."
Pressure to achieve sales targets or investor expectations can cause otherwise
conscientious employees to act in a way that is ethically or morally wrong, say
practitioners. The chief executive of peer-to-peer lending company Lending Club,
for example, was forced out in May amid allegations the company had altered
the dates on some of its loans to satisfy criteria that allowed it to securitise them.
The threat from external actors – some sophisticated, some dull but malignant –
is a growing threat too, say risk managers.
"We continue to see bad actors developing new schemes and fraudulent
techniques," says the head of operational risk at a US bank. "We've seen
widespread fraud targeting credit card accounts; now we're seeing the same
thing happen in payments. It's a matter of trying to remain a step ahead of bad
actors. When the fraud event happens at another entity, like a store or a hotel
chain, it's a fraud event at our bank, because now the criminals have access to
credit card data and account numbers."
Industry view
Rajat Baijal, head of enterprise risk, BGC and Cantor Fitzgerald:
"Banks are having to make strategic changes as a result of falling volumes,
which puts additional pressure on the front office. This could further aggravate
the risk of market manipulation, fraud and collusion with external third parties, as
traders strive to meet aggressive targets."
Zahra Al Halwachi, operational risk manager, Mashreq Bank:
"Frauds internally and externally are critical risks to any organisation. Controls
and measures need to be put in place to overcome these types of risk."
"We are assessing physical security of our people and our buildings in response
to domestic and international terrorist attacks. The risk of increasing terrorist
attacks impacts our physical security preparedness as well as our business
continuity preparedness," says Jodi Richard, head of op risk at US Bank in
Minneapolis.
A recent study from the Institute for Economics and Peace put the cost of
terrorism to the global economy at $89.6 billion in 2015 – the second-highest
level since 2000. Over the last 15 years, the economic and
opportunity costs arising from terrorism have increased roughly eleven-fold, it
estimates.
Industry view
Industry consultant and former op risk manager :
"A physical terrorist attack is feasible as many capital cities remain on high alert.
Should such an attack include the use of biological or chemical components,
whole areas or cities could become 'no-go' areas, leaving companies at the
mercy of their distributed business continuity plans, which in turn might be
rendered obsolete if the city's infrastructure is affected also."
FINANCING
Financial risk is the possibility that shareholders will lose money when they invest
in a company that has debt, if the company's cash flow proves inadequate to
meet its financial obligations. When a company uses debt financing,
its creditors are repaid before its shareholders if the company becomes
insolvent. Financial risk also refers to the possibility of a corporation or
government defaulting on its bonds, which would cause those bondholders to
lose money.
Financial risk is the general term for many different types of risks related to the
finance industry. These include risks involving financial transactions, such as
company loans, and its exposure to loan default. The term is typically used to
reflect an investor's uncertainty of collecting returns and the potential for
monetary loss.
Credit risk is also referred to as default risk. This type of risk arises when
borrowers are unable to repay the money they owe and go into default. Investors
affected by credit risk suffer from decreased income and lost principal and
interest, or they face a rise in collection costs.
Liquidity risk involves securities and assets that cannot be purchased or sold fast
enough to cut losses in a volatile market. Asset-backed risk is the risk that asset-
backed securities may become volatile if the underlying securities also change in
value. The risks under asset-backed risk include prepayment risk and interest
rate risk.
Risk financing is the determination of how an organization will pay for loss
events in the most effective and least costly way possible. It involves
identifying risks, determining how to finance them, and monitoring the
effectiveness of the financing technique that is chosen.
Risk financing is designed to help a business align its desire to take on new risks
in order to grow with its ability to pay for those risks. A business must weigh
the potential costs of its actions against whether those actions will help it
reach its objectives. The business will examine its priorities in order to determine
whether it is taking on the appropriate amount of risk in order to reach its
objectives, whether it is taking the right types of risks, and whether the costs of
these risks are being accounted for financially.
Companies typically forecast the losses that they expect to experience over a
period of time, and then determine the net present value of the costs associated
with the different risk financing alternatives available to them. Each option is
likely to have different costs depending on the risks that need coverage, the loss
development index that is most applicable to the company, the cost of
maintaining a staff to monitor the program, and any consulting, legal, or external
experts that are needed.
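As a rough sketch of this comparison, the following compares the net present value of the costs of two risk financing alternatives (retaining losses versus transferring them through insurance). All figures here, the forecast losses, premiums, administration cost, and discount rate, are illustrative assumptions, not figures from the text:

```python
# Hypothetical sketch: compare the NPV of the costs of two risk financing
# alternatives -- retaining losses vs. buying insurance. All figures are
# illustrative assumptions.

def npv(rate, cash_flows):
    """Discount a list of end-of-year cash outflows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

expected_losses = [120_000, 130_000, 140_000]     # forecast retained losses, years 1-3
insurance_premiums = [100_000, 105_000, 110_000]  # premiums for full risk transfer
admin_cost = 15_000      # assumed yearly cost of staff monitoring a retention program
rate = 0.08              # assumed discount rate

npv_retention = npv(rate, [loss + admin_cost for loss in expected_losses])
npv_insurance = npv(rate, insurance_premiums)

print(f"NPV of retention cost: {npv_retention:,.0f}")
print(f"NPV of insurance cost: {npv_insurance:,.0f}")
# The alternative with the lower NPV of costs would be preferred, other things equal.
```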
The return of any investment has an average, which is also the expected return,
but most returns will be different from the average: some will be more, others will
be less. The more individual returns deviate from the expected return, the greater
the risk and the greater the potential reward. The degree to which all returns for a
particular investment or asset deviate from the expected return of the investment
is a measure of its risk.
The deviations, both positive and negative, form a roughly normal
distribution about the mean. The normal distribution describes the variation of
many natural quantities, such as height and weight. It also describes the
distribution of investment returns. The normal distribution has the property that
small deviations from the mean are more probable than larger deviations. When
graphed, it forms a bell-shaped curve.
The mean is subtracted from each return, and each difference is squared to
ensure that all deviations are positive numbers; the squared deviations are then
summed and divided by the number of returns minus 1, which is the degrees of
freedom for a small sample. This is called the variance. The square root of the
variance is the standard deviation, which is roughly the average deviation from
the expected return. Standard deviations can measure the probability that a
value will fall within a certain range. For normal distributions, 68% of all
values will fall within 1 standard deviation of the mean, 95% of all values
will fall within 2 standard deviations, and 99.7% of all values
will fall within 3 standard deviations.
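These percentages can be checked numerically: for a normal distribution, the fraction of values within k standard deviations of the mean is erf(k/√2):

```python
# Check the 68-95-99.7 rule: for a normal distribution, the fraction of
# values within k standard deviations of the mean is erf(k / sqrt(2)).
import math

for k in (1, 2, 3):
    within = math.erf(k / math.sqrt(2))
    print(f"within {k} standard deviation(s): {within:.4%}")
# within 1: 68.2689%, within 2: 95.4500%, within 3: 99.7300%
```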
s = √[ Σ (rk − rexpected)² / (n − 1) ]
where:
s = standard deviation
rk = specific return
rexpected = expected return
n = number of returns (sample size)
                     Sample 1      Sample 2
Return 1                    6             9
Return 2                    4            11
Return 3                    6             9
Return 4                    4            11
Expected Return             5            10
Standard Deviation   1.154700538   1.154700538
On the left hand side, you have an investment with an expected return of $5
where each specific return deviates by $1 from the expected return. On the right
hand side, the specific returns also deviate by $1, but the expected return is $10.
Because the difference between the expected returns and the specific returns for
each sample is 1, the standard deviation is the same, but, nonetheless, the risk
is not the same, because $1 is only 10% of $10, but 20% of $5.
In the above case, both samples have the same standard deviation, but have a
significant difference in the coefficient of variation. It is obvious that the
investment with the smaller return has the greater risk in this case.
Standard Deviation = [((6 − 5)² + (4 − 5)² + (6 − 5)² + (4 − 5)²) / (4 − 1)]^(1/2)
                   = (4/3)^(1/2) = 1.154700538
Using Microsoft Excel: =STDEV(6,4,6,4) = 1.154700538
Coefficient of Variation = 1.154700538 / 5 = 0.230940108
Cell references from the table can also be used as the input to the STDEV
function. There is no Excel function for the coefficient of variation, but it
is simple enough to calculate, knowing the standard deviation.
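The same example can be reproduced with Python's statistics module, confirming that the two samples share a standard deviation but differ in coefficient of variation:

```python
# Reproduce the worked example: both samples have the same sample standard
# deviation, but different coefficients of variation.
import statistics

sample1 = [6, 4, 6, 4]    # expected (mean) return 5
sample2 = [9, 11, 9, 11]  # expected (mean) return 10

for sample in (sample1, sample2):
    sd = statistics.stdev(sample)      # sample standard deviation (n - 1 divisor)
    cv = sd / statistics.mean(sample)  # coefficient of variation
    print(f"sd = {sd:.9f}, cv = {cv:.9f}")
# sd = 1.154700538 for both; cv = 0.230940108 vs 0.115470054
```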
COEFFICIENT OF VARIATION
The CV is particularly useful when you want to compare results from two different
surveys or tests that have different measures or values. For example, if you are
comparing the results from two tests that have different scoring mechanisms. If
sample A has a CV of 12% and sample B has a CV of 25%, you would say that
sample B has more variation, relative to its mean.
Formula: CV = (σ / μ) × 100

        Regular test   Randomized answers
SD             10.2               12.7
CV            17.03              28.35

Looking at the standard deviations of 10.2 and 12.7, you might think that the
tests have similar results. However, when you adjust for the difference in the
means, the results have more significance:
Regular test: CV = 17.03
Randomized answers: CV = 28.35
σ is the standard deviation for a population, which is the same as “s” for the
sample.
μ is the mean for the population, which is the same as XBar in the sample.
In other words, to find the coefficient of variation, divide the standard deviation by
the mean and multiply by 100.
You can calculate the coefficient of variation in Excel using the formulas for
standard deviation and mean. For a given column of data (i.e. A1:A10), you
could enter =STDEV(A1:A10)/AVERAGE(A1:A10), then multiply by 100.
        Sample 1   Sample 2
Mean        50.1       45.8
SD          11.2       12.9

Step 1: Divide the standard deviation by the mean for the first sample:
11.2 / 50.1 = 0.22355
Step 2: Divide the standard deviation by the mean for the second sample:
12.9 / 45.8 = 0.28166
Multiplying each by 100 gives coefficients of variation of 22.36% and 28.17%.
STANDARD DEVIATION
A plot of normal distribution (or bell-shaped curve) where each band has a width
of 1 standard deviation – See also: 68–95–99.7 rule
The graph shows the metabolic rate for males and females. By visual inspection,
it appears that the variability of the metabolic rate is greater for males than for
females.
The sample standard deviation of the metabolic rate for the female fulmars is
calculated as follows. The formula for the sample standard deviation is
s = √[ Σ (xi − x̄)² / (N − 1) ]
where the xi are the observed values of the sample items, x̄ is the mean value
of these observations, and N is the number of observations in the sample.
In the sample standard deviation formula, for this example, the numerator is the
sum of the squared deviation of each individual animal's metabolic rate from the
mean metabolic rate. The table below shows the calculation of this sum of
squared deviations for the female fulmars. For females, the sum of squared
deviations is 886047.09, as shown in the table.
For the male fulmars, a similar calculation gives a sample standard deviation of
894.37, approximately twice as large as the standard deviation for the females.
The graph shows the metabolic rate data, the means (red dots), and the
standard deviations (red lines) for females and males.
Use of the sample standard deviation implies that these 14 fulmars are a sample
from a larger population of fulmars. If these 14 fulmars comprised the entire
population (perhaps the last 14 surviving fulmars), then instead of the sample
standard deviation, the calculation would use the population standard deviation.
In the population standard deviation formula, the denominator is N instead of N-
1. It is rare that measurements can be taken for an entire population, so, by
default, statistical software packages calculate the sample standard deviation.
Similarly, journal articles report the sample standard deviation unless otherwise
specified.
Population standard deviation of grades of eight students
Suppose that the entire population of interest was eight students in a particular
class. For a finite set of numbers, the population standard deviation is found by
taking the square rootof the average of the squared deviations of the values from
their average value. The marks of a class of eight students (that is, a statistical
population) are the following eight values:
2, 4, 4, 4, 5, 5, 7, 9
These eight data points have the mean (average) of 5:
μ = (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5
First, calculate the deviations of each data point from the mean, and square the
result of each:
(2 − 5)² = 9, (4 − 5)² = 1, (4 − 5)² = 1, (4 − 5)² = 1,
(5 − 5)² = 0, (5 − 5)² = 0, (7 − 5)² = 4, (9 − 5)² = 16
The variance is the mean of these values:
σ² = (9 + 1 + 1 + 1 + 0 + 0 + 4 + 16) / 8 = 32 / 8 = 4
and the population standard deviation is equal to the square root of the variance:
σ = √4 = 2
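This example can be checked with Python's statistics module; the eight marks used here (2, 4, 4, 4, 5, 5, 7, 9, with mean 5) are the standard values for this example:

```python
# Check the eight-student example: population vs. sample standard deviation.
import statistics

marks = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(marks))    # mean: 5
print(statistics.pstdev(marks))  # population standard deviation (divide by N): 2.0
print(statistics.stdev(marks))   # sample standard deviation (divide by N - 1): ~2.138
```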
This formula is valid only if the eight values with which we began form the
complete population. If the values instead were a random sample drawn from
some large parent population (for example, they were 8 marks randomly and
independently chosen from a class of 2 million), then one often divides by
N − 1 instead of N (Bessel's correction) when estimating the variance, because
this gives an unbiased estimate of the population variance.
For a continuous random variable X with probability density function p(x), the
standard deviation is
σ = √[ ∫ (x − μ)² p(x) dx ],  with μ = ∫ x p(x) dx,
and where the integrals are definite integrals taken for x ranging over the set
of possible values of the random variable X.
In the case of a parametric family of distributions, the standard deviation can be
expressed in terms of the parameters. For example, in the case of the log-normal
distribution with parameters μ and σ², the standard deviation is
[(exp(σ²) − 1) exp(2μ + σ²)]^(1/2).
Estimation
See also: Sample variance
Main article: Unbiased estimation of standard deviation
One can find the standard deviation of an entire population in cases (such
as standardized testing) where every member of a population is sampled. In
cases where that cannot be done, the standard deviation σ is estimated by
examining a random sample taken from the population and computing
a statistic of the sample, which is used as an estimate of the population standard
deviation. Such a statistic is called an estimator, and the estimator (or the value
of the estimator, namely the estimate) is called a sample standard deviation, and
is denoted by s (possibly with modifiers). However, unlike in the case of
estimating the population mean, for which the sample mean is a simple estimator
with many desirable properties (unbiased, efficient, maximum likelihood), there is
no single estimator for the standard deviation with all these properties,
and unbiased estimation of standard deviation is a very technically involved
problem. Most often, the standard deviation is estimated using the corrected
sample standard deviation (using N − 1), defined below, and this is often referred
to as the "sample standard deviation", without qualifiers. However, other
estimators are better in other respects: the uncorrected estimator (using N) yields
lower mean squared error, while using N − 1.5 (for the normal distribution)
almost completely eliminates bias.
Uncorrected sample standard deviation
The formula for the population standard deviation (of a finite population) can be
applied to the sample, using the size of the sample as the size of the population
(though the actual population size from which the sample is drawn may be much
larger). This estimator, denoted by sN, is known as the uncorrected sample
standard deviation, or sometimes the standard deviation of the
sample (considered as the entire population), and is defined as follows:
sN = √[ (1/N) Σ (xi − x̄)² ]
where the xi are the observed values of the sample items and x̄ is the mean
value of these observations, while the denominator N stands for the size of the
sample: this is the square root of the sample variance, which is the average of
the squared deviations about the sample mean.
This is a consistent estimator (it converges in probability to the population value
as the number of samples goes to infinity), and is the maximum-likelihood
estimate when the population is normally distributed. However, this is
a biased estimator, as the estimates are generally too low. The bias decreases
as sample size grows, dropping off as 1/N, and thus is most significant for small
or moderate sample sizes; for N > 75 the bias is below 1%. Thus, for very large
sample sizes, the uncorrected sample standard deviation is generally acceptable.
Example of samples from two populations with the same mean but different
standard deviations. Red population has mean 100 and SD 10; blue population
has mean 100 and SD 50.
A large standard deviation indicates that the data points can spread far from the
mean and a small standard deviation indicates that they are clustered closely
around the mean.
For example, each of the three populations {0, 0, 14, 14}, {0, 6, 8, 14} and {6, 6,
8, 8} has a mean of 7. Their standard deviations are 7, 5, and 1, respectively.
The third population has a much smaller standard deviation than the other two
because its values are all close to 7. It will have the same units as the data
points themselves. If, for instance, the data set {0, 6, 8, 14} represents the ages
of a population of four siblings in years, the standard deviation is 5 years. As
another example, the population {1000, 1006, 1008, 1014} may represent the
distances traveled by four athletes, measured in meters. It has a mean of 1007
meters, and a standard deviation of 5 meters.
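These figures can be verified with Python's statistics module:

```python
# Verify the spread examples: all three populations have mean 7 but very
# different population standard deviations.
import statistics

for pop in ([0, 0, 14, 14], [0, 6, 8, 14], [6, 6, 8, 8]):
    print(statistics.mean(pop), statistics.pstdev(pop))
# means are all 7; standard deviations are 7.0, 5.0 and 1.0
```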
Standard deviation may serve as a measure of uncertainty. In physical science,
for example, the reported standard deviation of a group of
repeated measurements gives the precision of those measurements. When
deciding whether measurements agree with a theoretical prediction, the standard
deviation of those measurements is of crucial importance: if the mean of the
measurements is too far away from the prediction (with the distance measured in
standard deviations), then the theory being tested probably needs to be revised.
This makes sense since they fall outside the range of values that could
reasonably be expected to occur, if the prediction were correct and the standard
deviation appropriately quantified. See prediction interval.
While the standard deviation does measure how far typical values tend to be
from the mean, other measures are available. An example is the mean absolute
deviation, which might be considered a more direct measure of average distance
from the mean.
Dark blue is one standard deviation on either side of the mean. For the normal
distribution, this accounts for 68.27 percent of the set; while two standard
deviations from the mean (medium and dark blue) account for 95.45 percent;
three standard deviations (light, medium, and dark blue) account for 99.73
percent; and four standard deviations account for 99.994 percent. The two points
of the curve that are one standard deviation from the mean are also the inflection
points.
The central limit theorem says that the distribution of an average of many
independent, identically distributed random variables tends toward the famous
bell-shaped normal distribution with a probability density function of
f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)).

Standard deviation of the mean
The standard deviation of the sample mean is
σ_mean = σ / √N
where N is the number of observations in the sample used to estimate the mean.
This can easily be proven with the basic properties of the variance (statistical
independence is assumed):
Var(x̄) = Var((1/N) Σ xi) = (1/N²) Σ Var(xi) = (1/N²) · N σ² = σ² / N
hence
σ_mean = √(σ² / N) = σ / √N.
It should be emphasized that in order to estimate the standard deviation of the
mean it is necessary to know the standard deviation of the entire
population beforehand. However, in most applications this parameter is
unknown. For example, if a series of 10 measurements of a previously unknown
quantity is performed in a laboratory, it is possible to calculate the resulting
sample mean and sample standard deviation, but it is impossible to calculate the
standard deviation of the mean.
Rapid calculation methods
See also: Algorithms for calculating variance
The following two formulas can represent a running (repeatedly updated)
standard deviation. A set of two power sums s1 and s2 are computed over a set
of N values of x, denoted as x1, ..., xN:
s1 = Σ xk,  s2 = Σ xk²
Given the results of these running summations, the values N, s1, s2 can be used
at any time to compute the current value of the running standard deviation:
σ = √(N·s2 − s1²) / N
Where N, as mentioned above, is the size of the set of values (or can also be
regarded as s0).
Similarly for sample standard deviation,
s = √[ (N·s2 − s1²) / (N(N − 1)) ]
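A minimal sketch of the power-sums method (in the naive form, which, as noted below, can suffer rounding error when the sums grow large):

```python
# Running standard deviation from power sums: maintain N, s1 = sum of x,
# and s2 = sum of x squared, then recover the standard deviation at any time.
# This is the naive form; it can lose precision when s1 and s2 grow large.
import math

def running_sd(values):
    n, s1, s2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        s1 += x
        s2 += x * x
    pop_sd = math.sqrt(n * s2 - s1 * s1) / n                 # population form
    samp_sd = math.sqrt((n * s2 - s1 * s1) / (n * (n - 1)))  # sample form
    return pop_sd, samp_sd

print(running_sd([2, 4, 4, 4, 5, 5, 7, 9]))  # (2.0, ~2.138)
```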
In a computer implementation, as the three sj sums become large, we need to
consider round-off error, arithmetic overflow, and arithmetic underflow. The
method below calculates the running sums with reduced rounding errors.
This is a "one pass" algorithm for calculating variance of n samples without
the need to store prior data during the calculation. Applying this method to a time
series will result in successive values of standard deviation corresponding
to n data points as n grows larger with each new sample, rather than a constant-
width sliding window calculation.
For k = 1, ..., n:
Ak = Ak−1 + (xk − Ak−1) / k
where A is the mean value, and
Qk = Qk−1 + ((k − 1)/k)(xk − Ak−1)² = Qk−1 + (xk − Ak−1)(xk − Ak)
Note: Q1 = 0, since either k − 1 = 0 or A1 = x1.
Sample variance: s²n = Qn / (n − 1)
Population variance: σ²n = Qn / n
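The one-pass update above can be sketched as follows:

```python
# One-pass (Welford-style) variance: A tracks the running mean and Q the
# running sum of squared deviations, updated once per new sample.
def one_pass_variance(values):
    a = 0.0  # running mean A_k
    q = 0.0  # running sum of squared deviations Q_k
    for k, x in enumerate(values, start=1):
        a_prev = a
        a = a_prev + (x - a_prev) / k        # A_k = A_{k-1} + (x_k - A_{k-1}) / k
        q = q + (x - a_prev) * (x - a)       # Q_k = Q_{k-1} + (x_k - A_{k-1})(x_k - A_k)
    n = len(values)
    return q / n, q / (n - 1)                # population variance, sample variance

pop_var, samp_var = one_pass_variance([2, 4, 4, 4, 5, 5, 7, 9])
print(pop_var, samp_var)  # 4.0 and ~4.571
```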
Weighted calculation
When the values xi are weighted with unequal weights wi, the power
sums s0, s1, s2 are each computed as:
s0 = Σ wi,  s1 = Σ wi·xi,  s2 = Σ wi·xi²
And the standard deviation equations remain unchanged. Note that s0 is now the
sum of the weights and not the number of samples N.
The incremental method with reduced rounding errors can also be applied, with
some additional complexity.
A running sum of weights must be computed for each k from 1 to n:
Wk = Wk−1 + wk
and places where 1/n is used above must be replaced by wi/Wn.
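A sketch of the weighted power-sums form, with s0 (the sum of weights) taking the place of N in the population formula:

```python
# Weighted standard deviation from power sums: s0 replaces N.
import math

def weighted_sd(values, weights):
    s0 = sum(weights)                                    # sum of weights (replaces N)
    s1 = sum(w * x for w, x in zip(weights, values))     # weighted sum of x
    s2 = sum(w * x * x for w, x in zip(weights, values)) # weighted sum of x squared
    return math.sqrt(s0 * s2 - s1 * s1) / s0             # population form

# With equal weights this reduces to the ordinary population standard deviation:
print(weighted_sd([2, 4, 4, 4, 5, 5, 7, 9], [1] * 8))  # 2.0
```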
A leverage ratio measures the effect on the numerator of every unit change in
the denominator.
This ratio summarizes the effects of combining financial and operating leverage,
and what effect this combination, or variations of this combination, has on the
corporation's earnings. Not all corporations use both operating and financial
leverage, but this formula can be used if they do. A firm with a relatively high
level of combined leverage is seen as riskier than a firm with less combined
leverage, as the high leverage means more fixed costs to the firm.
The degree of operating leverage measures the effects that operating leverage
has on a company's earnings potential and indicates how earnings are affected
by changes in sales:
DOL = %Δ EBIT / %Δ Sales = CM / EBIT = (Sales − VC) / (Sales − VC − FC)
where CM is the contribution margin, VC is variable costs, and FC is fixed costs.
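As an illustration of the contribution-margin form, the following uses assumed sales, variable-cost, and fixed-cost figures (not figures from the text):

```python
# Illustrative DOL calculation using the contribution-margin form; all
# figures below are assumed for the example.
sales = 1_000_000
variable_costs = 600_000
fixed_costs = 300_000

contribution_margin = sales - variable_costs  # 400,000
ebit = contribution_margin - fixed_costs      # 100,000
dol = contribution_margin / ebit

print(dol)  # 4.0 -> a 1% change in sales moves EBIT by about 4%
```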
DFL = EBIT / EBT
The DFL calculation focuses on EBIT with and without interest. This formula is:
DFL = EBIT / (EBIT − Interest)
ABC Company earned $500,000 in Year 1. It had no debt, so its EBIT and EBIT
– Interest are the same. The DFL ratio is 1. Now assume ABC is considering
expanding its manufacturing facility, at a cost of $1 million. If ABC borrows the
money, it will incur $60,000 in interest expenses. The decision to borrow is based
on the amount ABC’s managers think revenue will increase because of the
expansion.
Assume it is estimated that ABC’s revenue for Year 2 will increase to $600,000
as a result of the expanded business. Now ABC’s DFL is:
DFL = $600,000 / ($600,000 − $60,000) = 1.11
This means that for every 1% change in EBIT, earnings per share change by about
1.11%. If this, in fact, does happen, then management’s decision to borrow the
money paid off, because the increase in revenue more than covered the debt
incurred to fund the expansion.
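The ABC Company example can be reproduced directly:

```python
# Reproduce the ABC Company example: DFL = EBIT / (EBIT - Interest).
ebit_year1 = 500_000
interest_year1 = 0
dfl_year1 = ebit_year1 / (ebit_year1 - interest_year1)  # 1.0: no debt

ebit_year2 = 600_000
interest_year2 = 60_000  # interest on the $1 million borrowed for the expansion
dfl_year2 = ebit_year2 / (ebit_year2 - interest_year2)

print(round(dfl_year2, 2))  # 1.11
```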
The degree of total leverage equation shows the total leverage of a company.
You can find the DTL either by multiplying the degree of operating leverage and
degree of financial leverage or by dividing the percentage change in earnings per
share by the percentage change in sales -- both produce the same result. When
the result is greater than 1, the company has total leverage.
DOL x DFL
The first way to figure the DTL is by multiplying the DOL by the DFL. The DOL
equals the company's percentage change in earnings before interest and taxes
divided by the company's percentage change in sales, while the DFL equals the
percentage change in earnings per share divided by the percentage change in
EBIT. For example, if the company has a 40 percent increase in EBIT, a 30
percent change in sales and a 50 percent increase in earnings per share, divide
40 by 30 to get 1.333 and 50 by 40 to get 1.25. Then, multiply 1.333 by 1.25 to
get a DTL of 1.67.
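The example's arithmetic can be reproduced directly, confirming that DOL × DFL equals the percentage change in EPS divided by the percentage change in sales:

```python
# Reproduce the DTL example: DOL x DFL from the stated percentage changes.
pct_change_ebit = 40.0
pct_change_sales = 30.0
pct_change_eps = 50.0

dol = pct_change_ebit / pct_change_sales  # ~1.333
dfl = pct_change_eps / pct_change_ebit    # 1.25
dtl = dol * dfl

print(round(dtl, 2))  # 1.67 -- same as %change in EPS / %change in sales
```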
CAPITAL STRUCTURE
A firm's capital structure can be a mixture of long-term debt, short-term debt, common
equity and preferred equity. A company's proportion of short- and long-term debt is
considered when analyzing capital structure. When analysts refer to capital structure,
they are most likely referring to a firm's debt-to-equity (D/E) ratio, which provides
insight into how risky a company is. Usually, a company that is heavily financed by
debt has a more aggressive capital structure and therefore poses greater risk to
investors. This risk, however, may be the primary source of the firm's growth.
Debt is one of the two main ways companies can raise capital in the capital markets.
Companies like to issue debt because of the tax advantages. Interest payments are
tax-deductible. Debt also allows a company or business to retain ownership, unlike
equity. Additionally, in times of low interest rates, debt is abundant and easy to
access.
Equity is more expensive than debt, especially when interest rates are low. However,
unlike debt, equity does not need to be paid back if earnings decline. On the other
hand, equity represents a claim on the future earnings of the company as a part
owner.
Both debt and equity can be found on the balance sheet. The assets listed on the
balance sheet are purchased with this debt and equity. Companies that use more
debt than equity to finance assets have a high leverage ratio and an aggressive
capital structure. A company that pays for assets with more equity than debt has a
low leverage ratio and a conservative capital structure. That said, a high leverage
ratio and/or an aggressive capital structure can also lead to higher growth rates,
whereas a conservative capital structure can lead to lower growth rates. It is the goal
of company management to find the optimal mix of debt and equity, also referred to
as the optimal capital structure.
Analysts use the D/E ratio to compare capital structure. It is calculated by dividing
debt by equity. Savvy companies have learned to incorporate both debt and equity
into their corporate strategies. At times, however, companies may rely too heavily on
external funding, and debt in particular. Investors can monitor a firm's capital
structure by tracking the D/E ratio and comparing it against the company's peers.
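A minimal sketch of this D/E comparison, using assumed balance-sheet figures:

```python
# Compare capital structures via the D/E ratio; the balance-sheet figures
# for the two hypothetical firms are assumed.
companies = {
    "Firm A": {"total_debt": 400_000, "total_equity": 800_000},
    "Firm B": {"total_debt": 900_000, "total_equity": 600_000},
}

for name, b in companies.items():
    de = b["total_debt"] / b["total_equity"]
    print(f"{name}: D/E = {de:.2f}")
# Firm A: D/E = 0.50 (more conservative); Firm B: D/E = 1.50 (more aggressive)
```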
• The goal of the capital structure decision is to determine the financial leverage that maximizes the
value of the company (or minimizes the weighted average cost of capital).
• In the Modigliani and Miller theory developed without taxes, capital structure is irrelevant and has
no effect on company value.
• The deductibility of interest lowers the cost of debt and the cost of capital for the company as a
whole. Adding the tax shield provided by debt to the Modigliani and Miller framework suggests that
the optimal capital structure is all debt.
• In the Modigliani and Miller propositions with and without taxes, increasing a company’s relative
use of debt in the capital structure increases the risk for equity providers and, hence, the cost of
equity capital.
• When there are bankruptcy costs, a high debt ratio increases the risk of bankruptcy.
• Using more debt in a company’s capital structure reduces the net agency costs of equity.
• The costs of asymmetric information increase as more equity is used versus debt, suggesting the
pecking order theory of leverage, in which new equity issuance is the least preferred method of
raising capital.
• According to the static trade-off theory of capital structure, in choosing a capital structure, a
company balances the value of the tax benefit from deductibility of interest with the present value
of the costs of financial distress. At the optimal target capital structure, the incremental tax shield
benefit is exactly offset by the incremental costs of financial distress.
• A company may identify its target capital structure, but its capital structure at any point in time may
not be equal to its target for many reasons.
• Many companies have goals for maintaining a certain credit rating, and these goals are influenced
by the relative costs of debt financing among the different rating classes.
• In evaluating a company’s capital structure, the financial analyst must look at the capital structure
of the company over time, the capital structure of competitors that have similar business risk, and
company-specific factors that may affect agency costs.
• Good corporate governance and accounting transparency should lower the net agency costs of
equity.
• When comparing capital structures of companies in different countries, an analyst must consider a
variety of characteristics that might differ and affect both the typical capital structure and the debt
maturity structure.
1. 1. 1 Chapter 12 Part 2 Determining the Financing Mix Lecture Notes© 1996, Prentice Hall, Inc.
2. 2. 2Learning Objectives Understand the concept of an optimal capital structure. Explain the
main underpinnings of capital structure theory. Distinguish between the independence
hypothesis and dependence hypothesis as these concepts relate to capital structure theory theory,
and identify the Nobel prize winners in economics who are leading proponents of the
independence hypothesis. Understand and be able to graph the moderate position on capital
Reviewer 474
Management Advisory Services
structure importance. Incorporate the concepts of agency costs and free cash flow into a
discussion on capital structure management. Use the basic tools of capital structure
management. Familiarize others with corporate financing policies in practice.
3. 3. 3Planning the Firm’s Financial MixFinancial Structure and Capital Structure Financial structure
is the mix of all sources of financing used by the firm Balance Sheet Assets Liabilities Current
Liabilities Long Term Liabilities Financial Structure Equity Total Assets
4. 4. 4Planning the Firm’s Financial MixFinancial Structure and Capital Structure Financial structure
is the mix of all sources of financing used by the firm Capital structure is the mix of the long term
sources of funds Balance Sheet Assets Liabilities Current Liabilities Long Term Liabilities Capital
Structure Equity Total Assets
5. 5. 5Planning the Firm’s Financial MixFinancial Structure and Capital Structure Financial structure
is the mix of all sources of financing used by the firm Capital structure is the mix of the long term
sources of funds Capital structure is the focus of this chapter, so current liabilities will not be
included. Balance Sheet Assets Liabilities Current Liabilities Long Term Liabilities Capital Structure
Equity Total Assets
6. 6. 6Capital Structure Theories Choose capital structure that minimizes cost of capital which in
turn maximizes stock price
7. 7. 7Capital Structure Theories Choose capital structure that minimizes cost of capital which in
turn maximizes stock price There are three theories on choosing the optimal capital structure
Independence Theory Dependence Theory Moderate Theory
8. 8. 8Capital Structure Theories Choose capital structure that minimizes cost of capital which in
turn maximizes stock price There are three theories on choosing the optimal capital structure
Independence Theory Dependence Theory Moderate Theory For all theories, will use a
simple valuation model: D where: P0 = price of stock P0 = kc D = constant dividend Kc = cost of
equity capital
9. 9. 9Capital Structure Theories Choose capital structure that minimizes cost of capital which in
turn maximizes stock price There are three theories on choosing the optimal capital structure
Independence Theory Dependence Theory Moderate Theory For all theories, will use a
simple valuation model: D where: P0 = price of stock P0 = kc D = constant dividend Kc = cost of
equity capital If all earnings paid as dividends, so there is no growth: D EPS where: EPS =
Earnings per share P0 = kc = kc
10. 10. 10Capital Structure TheoriesModerate Position Interest is tax deductible The use of
financial leverage increases the likelihood of bankruptcy. The costs of equity and debt rise
causing a “saucer- shaped” cost of capital function. Firms should choose financial leverage with
lowest cost of capital Capital kc Costs kO kd Financial Leverage
Agency Costs and Capital Structure. Agency problems arise when management does not work in the best interests of the creditors. Firms incur agency costs, such as paying for outside monitors to reassure creditors, and the higher the leverage, the higher the agency costs. Plotting firm value against financial leverage (with the Independence Theory value of the levered firm as the benchmark), the actual value of the firm equals the value of the unlevered firm, plus the present value of tax shields, minus the present value of agency and bankruptcy costs.
Basic Tools of Capital Structure Management. The use of financial leverage increases the variability of EPS (as seen by the degree of financial leverage, DFL, in Chapter 13). The use of financial leverage also changes EPS at any given EBIT. EBIT-EPS analysis graphically demonstrates the impact of leverage on EPS at different levels of EBIT: plotting EPS against EBIT for, say, a 50%-leverage plan and a 40%-leverage plan produces two lines that cross at the indifference point.
EBIT-EPS Analysis. Compute the EBIT at which EPS will be the same regardless of financing plan by setting EPS for each plan equal to the other. At the EBIT indifference level, EPS(50% debt) = EPS(40% debt):

(EBIT - I50%)(1 - t) / S50% = (EBIT - I40%)(1 - t) / S40%

where: I = interest cost of the plan; S = number of shares under the plan; t = tax rate.

Example: $1 million of financing is currently needed. The firm can raise the money with debt costing 8%, or with stock at $10 per share. The tax rate is 40%.

Under the 50%-debt plan: I = $500,000 x 8% = $40,000 and S = $500,000/$10 = 50,000 shares.
Under the 40%-debt plan: I = $400,000 x 8% = $32,000 and S = $600,000/$10 = 60,000 shares.

(EBIT - $40,000)(1 - .40) / 50,000 = (EBIT - $32,000)(1 - .40) / 60,000

Solving for EBIT gives EBIT = $80,000.
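The indifference EBIT can be solved in closed form, since the (1 - t) factor cancels. A short Python sketch of the example (function name is illustrative):

```python
# EBIT-EPS indifference point: solve
# (EBIT - I1)(1 - t)/S1 = (EBIT - I2)(1 - t)/S2 for EBIT.
def ebit_indifference(i1: float, s1: float, i2: float, s2: float) -> float:
    # (1 - t) cancels, leaving S2*(EBIT - I1) = S1*(EBIT - I2).
    return (s2 * i1 - s1 * i2) / (s2 - s1)

# 50%-debt plan: I = 500,000 * 8% = 40,000; S = 500,000/10 = 50,000 shares
# 40%-debt plan: I = 400,000 * 8% = 32,000; S = 600,000/10 = 60,000 shares
ebit_star = ebit_indifference(40_000, 50_000, 32_000, 60_000)  # 80,000

# At that EBIT, both plans give the same EPS:
eps = (ebit_star - 40_000) * (1 - 0.40) / 50_000  # $0.48 per share
```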
Capital Structure in Practice. The majority of financial officers believe there is an optimal capital structure for their company. Managers adapt financial leverage to the business cycle, taking advantage of debt when it is less expensive. The most important factor in determining leverage is a firm's business risk. Managers' preferred choice for financing new projects is to use retained earnings. Only after internal funds are exhausted is managers' choice of leverage consistent with the Moderate Theory of financial leverage.
A healthy capital structure that reflects a low level of debt and a corresponding
high level of equity is a very positive sign of financial fitness.
The debt ratio compares total liabilities to total assets. Obviously, more of the
former means less equity and, therefore, indicates a more leveraged position.
The problem with this measurement is that it is too broad in scope, which, as a
consequence, gives equal weight to operational and debt liabilities. The same
criticism can be applied to the debt/equity ratio, which compares total liabilities to
total shareholders' equity. Current and non-current operational liabilities,
particularly the latter, represent obligations that will be with the company forever.
Also, unlike debt, there are no fixed payments of principal or interest attached to
operational liabilities.
In general, a low percentage of debt is more desirable than a high percentage of debt.
1. Business Risk
Excluding debt, business risk is the basic risk of the company's operations. The
greater the business risk, the lower the optimal debt ratio.
3. Financial Flexibility
Financial flexibility is essentially the firm's ability to raise capital in bad times. It
should come as no surprise that companies typically have no problem raising
capital when sales are growing and earnings are strong; strong cash flow in the good times makes raising capital easy.
Companies should make an effort to be prudent when raising capital in the good
times and avoid stretching their capabilities too far. The lower a company's debt
level, the more financial flexibility a company has.
Let's take the airline industry as an example. In good times, the industry
generates significant amounts of sales and thus cash flow. However, in bad
times, that situation is reversed and the industry is in a position where it needs to
borrow funds. If an airline becomes too debt ridden, it may have a decreased
ability to raise debt capital during these bad times because investors may doubt
the airline's ability to service its existing debt when it has new debt loaded on top.
4. Management Style
Management styles range from aggressive to conservative. The more
conservative a management's approach is, the less inclined it is to use debt to
increase profits. An aggressive management may try to grow the firm quickly,
using significant amounts of debt to ramp up the growth of the
company's earnings per share (EPS).
5. Growth Rate
Firms that are in the growth stage of their cycle typically finance that growth
through debt by borrowing money to grow faster. The conflict that arises with this
method is that the revenues of growth firms are typically unstable and unproven.
As such, a high debt load is usually not appropriate.
More stable and mature firms typically need less debt to finance growth as their
revenues are stable and proven. These firms also generate cash flow, which can
be used to finance projects when they arise.
6. Market Conditions
Market conditions can have a significant impact on a company's capital-structure
condition. Suppose a firm needs to borrow funds for a new plant. If the market is
struggling, meaning that investors are limiting companies' access to capital
because of market concerns, the interest rate to borrow may be higher than a
company would want to pay. In that situation, it may be prudent for a company to
wait until market conditions return to a more normal state before the company
tries to access funds for the plant.
2. The borrower is required to furnish the creditor with audited annual financial statements and, as directed, quarterly or monthly statements. 3. The borrower is prohibited from disposing of his business property, except inventories. 4. The borrower is prohibited from incurring additional long-term debts or additional lease obligations. 5. The borrower is not allowed to repurchase the company's own stock. Private Financial Institution (Private Commercial Bank)
Repayment of Term Loans: 1. Equal Principal Payments; 2. Equal Amortization; 3. Balloon Payment; 4. Deferred Payment of Principal with Grace Period.
Equal Principal Payments. Original principal: PhP 100,000; loan term: 10 years; annual interest rate: 8%. Under this arrangement, the loan is repaid in equal amounts of principal each year, with interest computed on the outstanding balance at the beginning of the year:

Year  Principal at start (P)  Interest due (P x 8%)  Principal repaid (RP)  Total payment (I + RP)
 1    100,000                 8,000                  10,000                 18,000
 2     90,000                 7,200                  10,000                 17,200
 3     80,000                 6,400                  10,000                 16,400
 4     70,000                 5,600                  10,000                 15,600
 5     60,000                 4,800                  10,000                 14,800
 6     50,000                 4,000                  10,000                 14,000
 7     40,000                 3,200                  10,000                 13,200
 8     30,000                 2,400                  10,000                 12,400
 9     20,000                 1,600                  10,000                 11,600
10     10,000                   800                  10,000                 10,800

(All amounts in PhP.)
Equal Amortization. Original principal: PhP 100,000; loan term: 10 years; annual interest rate: 8%. Under this arrangement, the loan is repaid in equal installments of about PhP 14,903 per year:

Year  Principal at start (P)  Interest due (P x 8%)  Principal repaid (RP)  Total payment (TP)
 1    100,000.00              8,000.00                6,903.00              14,903
 2     93,097.00              7,447.76                7,455.24              14,903
 3     85,641.76              6,851.34                8,051.66              14,903
 4     77,590.10              6,207.20                8,695.80              14,903
 5     68,894.30              5,511.54                9,391.46              14,903
 6     59,502.84              4,760.22               10,142.78              14,903
 7     49,360.06              3,948.80               10,954.20              14,903
 8     38,405.86              3,072.46               11,830.54              14,903
 9     26,575.32              2,126.02               12,776.98              14,903
10     13,798.34              1,103.86               13,798.34              14,903

(All amounts in PhP; figures rounded.)
Balloon Payment. Original principal: PhP 100,000; loan term: 10 years; annual interest rate: 8%. The loan is repaid in equal installments (here PhP 14,000 per year) for a number of years; then a large, final payment is made at the maturity date:

Year  Principal at start (P)  Interest due (P x 8%)  Principal repaid (RP)  Total payment (TP)
 1    100,000.00              8,000.00                6,000.00              14,000.00
 2     94,000.00              7,520.00                6,480.00              14,000.00
 3     87,520.00              7,001.60                6,998.40              14,000.00
 4     80,521.60              6,441.73                7,558.27              14,000.00
 5     72,963.33              5,837.07                8,162.93              14,000.00
 6     64,800.40              5,184.03                8,815.97              14,000.00
 7     55,984.43              4,478.75                9,521.25              14,000.00
 8     46,463.18              3,717.05               10,282.95              14,000.00
 9     36,180.23              2,894.42               11,105.58              14,000.00
10     25,074.65              2,005.97               25,074.65              27,080.62

(All amounts in PhP. The source shows only the principal balances and the first three interest figures; the remaining figures are completed arithmetically from the PhP 14,000 installment, so small rounding differences from the source are possible.)
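The equal-principal and equal-amortization schedules above can be generated programmatically. A small Python sketch (function names are illustrative):

```python
# Term-loan repayment schedules for a PhP 100,000, 10-year, 8% loan.
# Each schedule row: (opening balance, interest, principal repaid, total payment).

def equal_principal(principal: float, rate: float, years: int) -> list:
    """Equal principal repayment each year; interest on the declining balance."""
    rp = principal / years
    schedule, bal = [], principal
    for _ in range(years):
        interest = bal * rate
        schedule.append((bal, interest, rp, interest + rp))
        bal -= rp
    return schedule

def equal_amortization(principal: float, rate: float, years: int) -> list:
    """Level total payment each year (ordinary annuity formula)."""
    payment = principal * rate / (1 - (1 + rate) ** -years)
    schedule, bal = [], principal
    for _ in range(years):
        interest = bal * rate
        rp = payment - interest
        schedule.append((bal, interest, rp, payment))
        bal -= rp
    return schedule

ep = equal_principal(100_000, 0.08, 10)
ea = equal_amortization(100_000, 0.08, 10)
# Year-1 equal-principal payment: 8,000 interest + 10,000 principal = 18,000.
# Equal-amortization annual payment works out to about PhP 14,903.
```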
COST OF CAPITAL
WHAT IT IS:
The return an investor receives on a company security is the cost of that security
to the company that issued it. A company's overall cost of capital is a mixture of
returns needed to compensate all creditors and stockholders. This is often called
the weighted average cost of capital and refers to the weighted average costs of
the company's debt and equity.
WHY IT MATTERS:
The cost of various capital sources varies from company to company, and
depends on factors such as its operating history, profitability, credit worthiness,
etc. In general, newer enterprises with limited operating histories will have higher
costs of capital than established companies with a solid track record, since
lenders and investors will demand a higher risk premium for the former.
Every company has to chart out its game plan for financing the business at an
early stage. The cost of capital thus becomes a critical factor in deciding which
financing track to follow – debt, equity or a combination of the two. Early-stage
companies seldom have sizable assets to pledge as collateral for debt financing,
so equity financing becomes the default mode of funding for most of them.
The cost of debt is merely the interest rate paid by the company on such debt.
However, since interest expense is tax-deductible, the after-tax cost of debt is
calculated as: Yield to maturity of debt x (1 - T) where T is the
company’s marginal tax rate.
The firm’s overall cost of capital is based on the weighted average of these
costs. For example, consider an enterprise with a capital structure consisting of
70% equity and 30% debt; its cost of equity is 10% and after-tax cost of debt is
7%. Therefore, its WACC would be (0.7 x 10%) + (0.3 x 7%) = 9.1%. This is the
cost of capital that would be used to discount future cash flows from potential
projects and other opportunities to estimate their Net Present Value (NPV) and
ability to generate value.
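The weighted-average calculation in the example is a one-liner. A minimal Python sketch of the 70/30 mix above:

```python
# WACC as a weighted average of component costs.
# Weights must sum to 1; each pair is (weight, after-tax cost).
def wacc(components: list) -> float:
    assert abs(sum(w for w, _ in components) - 1.0) < 1e-9
    return sum(w * c for w, c in components)

# 70% equity at 10%, 30% debt at an after-tax 7%:
k = wacc([(0.7, 0.10), (0.3, 0.07)])  # 0.091, i.e. 9.1%
```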
Companies strive to attain the optimal financing mix, based on the cost of capital
for various funding sources. Debt financing has the advantage of being more tax-
efficient than equity financing, since interest expenses are tax-deductible
and dividends on common shares have to be paid with after-tax dollars.
However, too much debt can result in dangerously high leverage, resulting in
higher interest rates sought by lenders to offset the higher default risk.
Long-term debt consists of loans and financial obligations lasting over one year.
Long-term debt for a company would include any financing or leasing obligations
that are to come due in a greater than 12-month period. Long-term debt also
applies to governments: nations can also have long-term debt.
In the U.K., long-term debts are known as "long-term loans."
Bonds are one of the most common types of long-term debt. Companies may issue bonds to raise funds for a variety of reasons. Bond sales bring in
immediate income, but the company ends up paying for the use of investors'
capital due to interest payments.
Aside from need, there are many factors that go into a company's decision to
take on more or less long-term debt. During the Great Recession, many
companies learned the dangers of relying too heavily on long-term debt. In
addition, stricter regulations have been imposed to prevent businesses from
falling victim to economic volatility. This trend affected not only businesses, but
also individuals, such as homeowners.
Since debt sums tend to be large, these loans take many years to pay off.
Companies with too much long-term debt will find it hard to pay off these debts
and continue to thrive, as much of their capital is devoted to interest payments
and it can be difficult to allocate money to other areas. A company can determine
whether it has accrued too much long-term debt by examining its debt to equity
ratio.
A high debt to equity ratio means the company is funding most of its ventures
with debt. If this ratio is too high, the company is at risk of bankruptcy if it
becomes unable to finance its debt due to
decreased income or cash flow problems. A high debt to equity ratio also tends
to put a company at a disadvantage against its competitors who may have more
cash. Many industries discourage companies from taking on too much long-term
debt in order to reduce the risks and costs closely associated with unstable forms
of income, and they even pass regulations that restrict the amount of long-term
debt a company can acquire.
A low debt to equity ratio is a sign that the company is growing or thriving, as it is
no longer relying on its debt and is making payments to lower it. It consequently
has more leverage with other companies and a better position in the current
financial environment. However, the company must also compare its ratio to
those of its competitors, as this context helps determine economic leverage.
For example, Adobe Systems Inc. (ADBE) reported a higher amount of long-term
debt in Q2 of 2015 than it had in the previous seven years. This debt is still low
compared with many of its competitors, such as Microsoft Corp. (MSFT) and
Apple Inc. (AAPL), so Adobe retains relatively the same place in the market.
However, comparisons fluctuate with competitors such as Symantec
Corp. (SYMC) and Quintiles Transnational (Q), who carry a similar amount of
long-term debt as Adobe.
A company's long-term debt may also put bond investors at risk in an illiquid
bond market. The question of the liquidity of the bond market has become an
issue since the Great Recession, as banks that used to make markets for bond
traders have been constrained by greater regulatory oversight.
Long-term debt is not all bad, though, and in moderation, it is necessary for any
company. Think of it as a credit card for a business: in the short-term, it allows
the company to invest in the tools it needs to advance and thrive while it is still
young, with the goal of paying off the debt when the company is established and
in the financial position to do so. Without incurring long-term debt, most
companies would never get off the ground. Long-term debt is a given variable for
any company, but how much debt is acquired plays a large role in the company's
image and its future.
Bank loans and financing agreements, in addition to bonds and notes that have
maturities greater than one year, would be considered long-term debt. Other
securities such as repos and commercial papers would not be long-term debt,
because their maturities are typically shorter than one year.
Cost of debt refers to the effective rate a company pays on its current debt. In
most cases, this phrase refers to after-tax cost of debt, but it also refers to a
company's cost of debt before taking taxes into account. The difference in cost of
debt before and after taxes lies in the fact that interest expenses are deductible.
A company typically carries several bonds, loans and other forms of debt, so this measure is useful for giving an idea as to the overall rate being paid
by the company to use debt financing. The measure can also give investors an
idea of the riskiness of the company compared to others, because riskier
companies generally have a higher cost of debt.
To calculate its cost of debt, a company needs to figure out the total amount of
interest it is paying on each of its debts for the year. Then, it divides this number
by the total of all of its debt. The quotient is its cost of debt.
For example, say a company has a $1 million loan with a 5% interest rate and a
$200,000 loan with a 6% rate. It has also issued bonds worth $2 million at a 7%
rate. The interest on the first two loans is $50,000 and $12,000, respectively, and
the interest on the bonds equates to $140,000. The total interest for the year is
$202,000. As the total debt is $3.2 million, the company's cost of debt is 6.31%.
To calculate after-tax cost of debt, subtract a company's effective tax rate from 1,
and multiply the difference by its cost of debt. Do not use the
company's marginal tax rate; rather, add together the company's state and
federal tax rate to ascertain its effective tax rate.
For example, if a company's only debt is a bond it has issued with a 5% rate, its
pre-tax cost of debt is 5%. If its tax rate is 40%, the difference between 100%
and 40% is 60%, and 60% of 5% is 3%. The after-tax cost of debt is 3%.
The rationale behind this calculation is based on the tax savings the company
receives from claiming its interest as a business expense. To continue with the
above example, imagine the company has issued $100,000 in bonds at a 5%
rate. Its annual interest payments are $5,000. It claims this amount as an
expense, and this lowers the company's income on paper by $5,000. As the
company pays a 40% tax rate, it saves $2,000 in taxes by writing off its interest.
As a result, the company only pays $3,000 on its debt. This equates to a 3%
interest rate on its debt.
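The tax-shield arithmetic above can be checked both ways, as a rate and in dollars, with a short sketch:

```python
# After-tax cost of debt: pre-tax rate times (1 - tax rate).
def after_tax_cost_of_debt(pretax_rate: float, tax_rate: float) -> float:
    return pretax_rate * (1 - tax_rate)

k_d = after_tax_cost_of_debt(0.05, 0.40)  # 0.03, i.e. 3%

# Equivalently, in dollars on the $100,000 bond issue from the example:
interest = 100_000 * 0.05          # $5,000 of interest paid
tax_saving = interest * 0.40       # $2,000 saved by deducting the interest
net_cost = interest - tax_saving   # $3,000, i.e. 3% of $100,000
```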
It is important to note that kd represents the cost to issue new debt, not the firm's existing debt.
Preferred stocks straddle the line between stocks and bonds. Technically, they
are equity securities, but they share many characteristics with debt instruments.
Preferred stocks are issued with a fixed par value and pay dividends based on a
percentage of that par at a fixed rate.
Rps = Dps / Pnet

where: Dps = preferred dividends; Pnet = net issuing price (market price less flotation costs).
Assume Newco's preferred stock pays a dividend of $2 per share and sells for
$100 per share. If the cost to Newco to issue new shares is 4%, what is Newco's
cost of preferred stock?
Answer:
Rps = Dps/Pnet = $2 / [$100 x (1 - 0.04)] = $2/$96 = 2.1%
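The Newco figures translate directly into a small helper (the function name is illustrative):

```python
# Cost of preferred stock: annual dividend over net proceeds per share,
# where net proceeds = price less flotation costs.
def cost_of_preferred(dividend: float, price: float, flotation_pct: float = 0.0) -> float:
    net_proceeds = price * (1 - flotation_pct)
    return dividend / net_proceeds

# Newco: $2 dividend, $100 price, 4% flotation cost -> 2/96, about 2.1%.
r_ps = cost_of_preferred(2.0, 100.0, 0.04)
```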
Next, we'll take a look at the weighted average cost of capital, a calculation that
will put our formulas for both the cost of equity and the cost of debt to work.
For example, an 8% preferred stock with a $50 par value pays an annual dividend of $4 per share (0.08 x $50 par = $4). Before the cost of preferred stock is calculated, any dividends stated as percentages should be converted to annual dollar dividends.

Calculating the Cost of Preferred Stock. The cost of preferred stock, kp, is the ratio of the preferred stock dividend to the firm's net proceeds from the sale of the preferred stock. The net proceeds represent the amount of money to be received minus any flotation costs. In terms of the annual dollar dividend, Dp, and the net proceeds from the sale of the stock, Np:

kp = Dp / Np     (Equation 10.3)

Because preferred stock dividends are paid out of the firm's after-tax cash flows, a tax adjustment is not required.

EXAMPLE: Duchess Corporation is contemplating issuance of a 10% preferred stock that is expected to sell for its $87-per-share par value. The cost of issuing and selling the stock is expected to be $5 per share. The first step in finding the cost of the stock is to calculate the dollar amount of the annual preferred dividend, which is $8.70 (0.10 x $87). The net proceeds per share from the proposed sale of stock equal the sale price minus the flotation costs ($87 - $5 = $82). Substituting the annual dividend, Dp, of $8.70 and the net proceeds, Np, of $82 into Equation 10.3 gives the cost of preferred stock: $8.70 / $82 = 10.6%. The cost of Duchess's preferred stock (10.6%) is much greater than the cost of its long-term debt (5.6%). This difference exists primarily because the cost of long-term debt (the interest) is tax deductible.
Cost of Equity
Firms obtain capital from two kinds of sources: lenders and equity investors.
From the perspective of capital providers, lenders seek to be rewarded
with interest and equity investors seek dividends and/or appreciation in the value
of their investment (capital gain). From its own perspective, a firm must pay for the capital it obtains from others, which is called its cost of capital. Such costs are
separated into a firm's cost of debt and cost of equity and attributed to these two
kinds of capital sources.
There are various models for estimating a particular firm's cost of equity, such as the capital asset
pricing model, or CAPM. Another method is derived from the Gordon Model,
which is a discounted cash flow model based on dividend returns and eventual
capital return from the sale of the investment. Another simple method is the Bond
Yield Plus Risk Premium (BYPRP), where a subjective risk premium is added to
the firm's long-term debt interest rate. Moreover, a firm's overall cost of capital,
which consists of the two types of capital costs, can be estimated using
the weighted average cost of capital model.
The Cost of Common Stock. The cost of common stock is the return required on the stock by investors in the marketplace. There are two forms of common stock financing: (1) retained earnings and (2) new issues of common stock. As a first step in finding each of these costs, we must estimate the cost of common stock equity.

Finding the Cost of Common Stock Equity. The cost of common stock equity, ks, is the rate at which investors discount the expected dividends of the firm to determine its share value. Two techniques are used to measure the cost of common stock equity: one relies on the constant-growth valuation model, the other on the capital asset pricing model (CAPM).

Using the Constant-Growth Valuation (Gordon) Model. In Chapter 7 we found the value of a share of stock to be equal to the present value of all future dividends, which in one model were assumed to grow at a constant annual rate over an infinite time horizon. This is the constant-growth valuation model, also known as the Gordon model. The key expression derived for this model was presented as Equation 7.4 and is restated here:

P0 = D1 / (ks - g)     (Equation 10.4)

where: P0 = value of common stock; D1 = per-share dividend expected at the end of year 1; ks = required return on common stock; g = constant rate of growth in dividends. Solving Equation 10.4 for ks results in the following expression for the cost of common stock equity:

ks = D1/P0 + g     (Equation 10.5)

Equation 10.5 indicates that the cost of common stock equity can be found by dividing the dividend expected at the end of year 1 by the current price of the stock and adding the expected growth rate. Because common stock dividends are paid from after-tax income, no tax adjustment is required.

EXAMPLE:
Duchess Corporation wishes to determine its cost of common stock equity, ks. The market price, P0, of its common stock is $50 per share. The firm expects to pay a dividend, D1, of $4 at the end of the coming year, 2004. The dividends paid on the outstanding stock over the past 6 years (1998-2003) were as follows:

Year   Dividend
2003   $3.80
2002   $3.62
2001   $3.47
2000   $3.33
1999   $3.12
1998   $2.97

Using the table for the present value interest factors, PVIF (Table A-2), or a financial calculator in conjunction with the technique described for finding growth rates in Chapter 4, we can calculate the annual growth rate of dividends, g. It turns out to be approximately 5% (more precisely, 5.05%). Substituting D1 = $4, P0 = $50, and g = 5% into Equation 10.5 yields the cost of common stock equity: ks = $4/$50 + 0.05 = 0.080 + 0.050 = 0.130, or 13.0%. The 13.0% cost of common stock equity represents the return required by existing shareholders on their investment. If the actual return is less than that, shareholders are likely to begin selling their stock.

Using the Capital Asset Pricing Model (CAPM). Recall from Chapter 5 that the capital asset pricing model (CAPM) describes the relationship between the required return, ks, and the nondiversifiable risk of the firm as measured by the beta coefficient, b. The basic CAPM is:

ks = RF + [b x (km - RF)]     (Equation 10.6)

where: RF = risk-free rate of return; km = market return (the return on the market portfolio of assets). Using CAPM indicates that the cost of common stock equity is the return required by investors as compensation for the firm's nondiversifiable risk, measured by beta.

EXAMPLE: Duchess Corporation now wishes to calculate its cost of common stock equity, ks, by using the capital asset pricing model. The firm's investment advisers and its own analyses indicate that the risk-free rate, RF, equals 7%; the firm's beta, b, equals 1.5; and the market return, km, equals 11%. Substituting these values into Equation 10.6, the company estimates the cost of common stock equity to be ks = 7.0% + [1.5 x (11.0% - 7.0%)] = 7.0% + 6.0% = 13.0%. The 13.0% cost of common stock equity represents the required return of investors in Duchess Corporation common stock. It is the same as that found by using the constant-growth valuation model.

The Cost of Retained Earnings. As you know, dividends are paid out of a firm's earnings. Their payment, made in cash to common stockholders, reduces the firm's retained earnings. Say a firm needs common stock equity financing of a certain amount; it has two choices relative to retained earnings: it can issue additional common stock in that amount and still pay dividends to stockholders out of retained earnings, or it can increase common stock equity by retaining the earnings (not paying the cash dividends) in the needed amount. In a strict accounting sense, the retention of earnings increases common stock equity in the same way that the sale of additional shares of common stock does. Thus the cost of retained earnings, kr, to the firm is the same as the cost of an equivalent fully subscribed issue of additional common stock. Stockholders find the firm's retention of earnings acceptable only if they expect that it will earn at least their required return on the reinvested funds. Viewing retained earnings as a fully subscribed issue of additional common stock, we can set the firm's cost of retained earnings, kr, equal to the cost of common stock equity as given by Equations 10.5 and 10.6:

kr = ks     (Equation 10.7)

It is not necessary to adjust the cost of retained earnings for flotation costs, because by retaining earnings, the firm "raises" equity capital without incurring these costs. (Technically, if a stockholder received dividends and wished to invest them in additional shares of the firm's stock, he or she would first have to pay personal taxes on the dividends and then pay brokerage fees.)
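The Duchess example computes the cost of common equity two ways and gets the same 13% answer. A short Python sketch of both models (function names are illustrative):

```python
# Cost of common stock equity via the constant-growth (Gordon) model
# and via CAPM, using the Duchess Corporation figures.

def gordon_cost_of_equity(d1: float, p0: float, g: float) -> float:
    """ks = D1/P0 + g (Equation 10.5)."""
    return d1 / p0 + g

def capm_cost_of_equity(rf: float, beta: float, km: float) -> float:
    """ks = RF + b*(km - RF) (Equation 10.6)."""
    return rf + beta * (km - rf)

# Gordon: D1 = $4, P0 = $50, g = 5% -> 0.08 + 0.05 = 0.13
ks_gordon = gordon_cost_of_equity(4.0, 50.0, 0.05)
# CAPM: RF = 7%, b = 1.5, km = 11% -> 0.07 + 1.5 * 0.04 = 0.13
ks_capm = capm_cost_of_equity(0.07, 1.5, 0.11)
```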
Subtracting the $5.50 per share underpricing and flotation cost from the current
$50 share price results in expected net proceeds of $44.50 per share ($50.00 − $5.50). Substituting D1 = $4, Nn = $44.50, and g = 5% into Equation 10.8 results in a cost of new common stock, kn, as follows:

kn = ($4.00 ÷ $44.50) + 0.05 = 0.09 + 0.05 = 0.140, or 14.0%
Duchess Corporation’s cost of new common stock is therefore 14.0%. This is the
value to be used in subsequent calculations of the firm’s overall cost of capital.
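The two cost-of-equity computations above can be sketched in Python, using the Duchess Corporation figures from the example (the function names are illustrative, not from the text):

```python
# Sketch of the CAPM cost of equity (Equation 10.6) and the cost of
# new common stock (Equation 10.8), with the Duchess example figures.

def capm_cost_of_equity(rf, beta, km):
    """ks = RF + b * (km - RF); all rates in percent."""
    return rf + beta * (km - rf)

def cost_of_new_common_stock(d1, net_proceeds, g):
    """kn = D1 / Nn + g; returns a decimal rate."""
    return d1 / net_proceeds + g

ks = capm_cost_of_equity(rf=7.0, beta=1.5, km=11.0)
kn = cost_of_new_common_stock(d1=4.00, net_proceeds=44.50, g=0.05)

print(f"ks = {ks:.1f}%")   # 13.0%
print(f"kn = {kn:.1%}")    # 14.0%
```

Note that kn exceeds kr (13.0%) only because of the underpricing and flotation costs reflected in the $44.50 net proceeds.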
The Weighted Average Cost of Capital Now that we have calculated the cost of
specific sources of financing, we can determine the overall cost of capital. As
noted earlier, the weighted average cost of capital (WACC), ka, reflects the
expected average future cost of funds over the long run. It is found by weighting
the cost of each specific type of capital by its proportion in the firm’s capital
structure. Calculating the Weighted Average Cost of Capital (WACC) Calculating
the weighted average cost of capital (WACC) is straightforward: Multiply the
specific cost of each form of financing by its proportion in the firm’s capital
structure and sum the weighted values. As an equation, the weighted average
cost of capital, ka, can be specified as follows:

ka = (wi × ki) + (wp × kp) + (ws × kr or n) (10.9)

where
wi = proportion of long-term debt in capital structure
wp = proportion of preferred stock in capital structure
ws = proportion of common stock equity in capital structure
wi + wp + ws = 1.0

Three important points should be noted in
Equation 10.9: 1. For computational convenience, it is best to convert the
weights into decimal form and leave the specific costs in percentage terms.
2. The sum of the
weights must equal 1.0. Simply stated, all capital structure components must be
accounted for. 3. The firm’s common stock equity weight, ws, is multiplied by
either the cost of retained earnings, kr, or the cost of new common stock, kn.
Which cost is used depends on whether the firm’s common stock equity will be
financed using retained earnings, kr, or new common stock, kn. EXAMPLE In
earlier examples, we found the costs of the various types of capital for Duchess
Corporation to be as follows:

Cost of debt, ki = 5.6%
Cost of preferred stock, kp = 10.6%
Cost of retained earnings, kr = 13.0%
Cost of new common stock, kn = 14.0%

The company uses the following weights in calculating its weighted
average cost of capital: Because the firm expects to have a sizable amount of
retained earnings available ($300,000), it plans to use its cost of retained
earnings, kr, as the cost of common stock equity. Duchess Corporation’s
weighted average cost of capital is calculated in Table 10.1. The resulting
weighted average cost of capital for Duchess is 9.8%. Assuming an unchanged
risk level, the firm should accept all projects that will earn a return greater than
9.8%.

The weights are: long-term debt, 40%; preferred stock, 10%; common stock equity, 50%; total, 100%.

TABLE 10.1 Calculation of the Weighted Average Cost of Capital for Duchess Corporation

Source of capital      Weight (1)   Cost (2)   Weighted cost [(1) × (2)] (3)
Long-term debt         0.40         5.6%       2.2%
Preferred stock        0.10         10.6%      1.1%
Common stock equity    0.50         13.0%      6.5%
Totals                 1.00                    9.8%

Weighted average cost of capital = 9.8%
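The Table 10.1 computation is a direct application of Equation 10.9. A minimal sketch, using the example's weights and component costs (the variable names are illustrative):

```python
# WACC (Equation 10.9): weight each component cost by its proportion
# in the capital structure and sum the weighted values.

def wacc(components):
    """components: list of (weight as decimal, cost in percent) pairs."""
    total_weight = sum(w for w, _ in components)
    assert abs(total_weight - 1.0) < 1e-9  # weights must sum to 1.0
    return sum(w * k for w, k in components)

duchess = [
    (0.40, 5.6),   # long-term debt, ki
    (0.10, 10.6),  # preferred stock, kp
    (0.50, 13.0),  # common equity via retained earnings, kr
]

print(f"WACC = {wacc(duchess):.1f}%")  # 9.8%
```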
Weighting Schemes Weights can be calculated on the basis of
either book value or market value and using either historical or target
proportions. Book Value Versus Market Value Book value weights use
accounting values to measure the proportion of each type of capital in the firm’s
financial structure. Market value weights measure the proportion of each type of
capital at its market value. Market value weights are appealing, because the
market values of securities closely approximate the actual dollars to be received
from their sale. Moreover, because the costs of the various types of capital are
calculated by using prevailing market prices, it seems reasonable to use market
value weights. In addition, the long-term investment cash flows to which the cost
of capital is applied are estimated in terms of current as well as future market
values. Market value weights are clearly preferred over book value weights.
Historical Versus Target Historical weights can be either book or market value
weights based on actual capital structure proportions. For example, past or
current book value proportions would constitute a form of historical weighting, as
would past or current market value proportions. Such a weighting scheme would
therefore be based on real—rather than desired—proportions. Target weights,
which can also be based on either book or market values, reflect the firm’s
desired capital structure proportions. Firms using target weights establish such
proportions on the basis of the “optimal” capital structure they wish to achieve.
(The development of these proportions and the optimal structure are discussed
in detail in Chapter 11.) When one considers the somewhat approximate nature
of the calculation of weighted average cost of capital, the choice of weights may
not be critical. However, from a strictly theoretical point of view, the preferred
weighting scheme is target market value proportions, and these are assumed
throughout this chapter.
The Marginal Cost and Investment Decisions The firm’s weighted average cost
of capital is a key input to the investment decision-making process. As
demonstrated earlier in the chapter, the firm should make only those investments
for which the expected return is greater than the weighted average cost of capital. Of course, at any
given time, the firm’s financing costs and investment returns will be affected by
the volume of financing and investment undertaken. The weighted marginal cost
of capital and the investment opportunities schedule are mechanisms whereby
financing and investment decisions can be made simultaneously. The Weighted
Marginal Cost of Capital (WMCC) The weighted average cost of capital may vary
over time, depending on the volume of financing that the firm plans to raise. As
the volume of financing increases, the costs of the various types of financing will
increase, raising the firm’s weighted average cost of capital. Therefore, it is
useful to calculate the weighted marginal cost of capital (WMCC), which is simply
the firm’s weighted average cost of capital (WACC) associated with its next dollar
of total new financing. This marginal cost is relevant to current decisions. The
costs of the financing components (debt, preferred stock, and common stock)
rise as larger amounts are raised. Suppliers of funds require greater returns in
the form of interest, dividends, or growth as compensation for the increased risk
introduced by larger volumes of new financing. The WMCC is therefore an
increasing function of the level of total new financing. Another factor that causes
the weighted average cost of capital to increase is the use of common stock
equity financing. New financing provided by common stock equity will be taken
from available retained earnings until this supply is exhausted and then will be
obtained through new common stock financing. Because retained earnings are a
less expensive form of common stock equity financing than the sale of new
common stock, the weighted average cost of capital will rise with the addition of
new common stock. Finding Break Points To calculate the WMCC, we must
calculate break points, which reflect the level of total new financing at which the
cost of one of the financing components rises. The following general equation
can be used to find break points:

BPj = AFj ÷ wj (10.10)

where
BPj = break point for financing source j
AFj = amount of funds available from financing source j at a given cost
wj = capital structure weight (stated in decimal form) for financing source j

EXAMPLE When Duchess Corporation exhausts its $300,000 of available
retained earnings (at kr = 13.0%), it must use the more expensive new common stock financing (at kn = 14.0%) to meet its common stock equity needs. In addition,
the firm expects that it can borrow only $400,000 of debt at the 5.6% cost;
additional debt will have an after-tax cost (ki) of 8.4%. Two break points
therefore exist: (1) when the $300,000 of retained earnings costing 13.0% is
exhausted, and (2) when the $400,000 of long-term debt costing 5.6% is
exhausted. The break points can
be found by substituting these values and the corresponding capital structure
weights given earlier into Equation 10.10. We get the dollar amounts of total new
financing at which the costs of the given financing sources rise: BPcommon
equity $600,000 BPlong-term debt $1,000,000 Calculating the WMCC Once the
break points have been determined, the next step is to calculate the weighted
average cost of capital over the range of total new financing between break
points. First, we find the WACC for a level of total new financing between zero
and the first break point. Next, we find the WACC for a level of total new
financing between the first and second break points, and so on. By definition, for
each of the ranges of total new financing between break points, certain
component capital costs (such as debt or common equity) will increase. This will
cause the weighted average cost of capital to increase to a higher level than that
over the preceding range. Together, these data can be used to prepare a
weighted marginal cost of capital (WMCC) schedule. This is a graph that relates
the firm’s weighted average cost of capital to the level of total new financing.
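The break-point arithmetic (Equation 10.10) and the range-by-range WACC can be sketched as follows, using the Duchess figures; the per-component rounding to one decimal mirrors the way the text's table presents each weighted cost:

```python
# Break points (Equation 10.10: BPj = AFj / wj) and the WACC that
# applies over each range of total new financing for Duchess.

def break_point(available_funds, weight):
    """BPj = AFj / wj."""
    return available_funds / weight

bp_common = break_point(300_000, 0.50)  # retained earnings exhausted
bp_debt = break_point(400_000, 0.40)    # cheap long-term debt exhausted
print(f"BP common equity = ${bp_common:,.0f}")   # $600,000
print(f"BP long-term debt = ${bp_debt:,.0f}")    # $1,000,000

WEIGHTS = {"debt": 0.40, "preferred": 0.10, "common": 0.50}

def range_wacc(costs):
    # Round each weighted component to one decimal, as the text does.
    return sum(round(WEIGHTS[s] * costs[s], 1) for s in WEIGHTS)

ranges = [
    ("$0 to $600,000", {"debt": 5.6, "preferred": 10.6, "common": 13.0}),
    ("$600,000 to $1,000,000", {"debt": 5.6, "preferred": 10.6, "common": 14.0}),
    ("$1,000,000 and above", {"debt": 8.4, "preferred": 10.6, "common": 14.0}),
]
for label, costs in ranges:
    print(f"{label}: WACC = {range_wacc(costs):.1f}%")  # 9.8, 10.3, 11.5
```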
EXAMPLE Table 10.2 summarizes the calculation of the WACC for Duchess Corporation over the three ranges of total new financing created by the two break points—

TABLE 10.2 Weighted Average Cost of Capital for Ranges of Total New Financing for Duchess Corporation

Range of total new financing (1)   Source of capital   Weight (2)   Cost (3)   Weighted cost [(2) × (3)] (4)
$0 to $600,000:
  Debt 0.40 × 5.6% = 2.2%; Preferred 0.10 × 10.6% = 1.1%; Common 0.50 × 13.0% = 6.5%
  Weighted average cost of capital = 9.8%
$600,000 to $1,000,000:
  Debt 0.40 × 5.6% = 2.2%; Preferred 0.10 × 10.6% = 1.1%; Common 0.50 × 14.0% = 7.0%
  Weighted average cost of capital = 10.3%
$1,000,000 and above:
  Debt 0.40 × 8.4% = 3.4%; Preferred 0.10 × 10.6% = 1.1%; Common 0.50 × 14.0% = 7.0%
  Weighted average cost of capital = 11.5%
5. Because the calculated weighted average cost of capital does not apply to risk-changing investments, we assume that all opportunities have equal risk similar to the firm’s risk.

[FIGURE 10.1 WMCC Schedule: weighted marginal cost of capital (WMCC) schedule for Duchess Corporation, plotting the weighted average cost of capital (%) against total new financing ($000): WACC is 9.8% from $0 to $600,000, 10.3% from $600,000 to $1,000,000, and 11.5% at $1,000,000 and above.]

$600,000 and
$1,000,000. Comparing the costs in column 3 of the table for each of the three
ranges, we can see that the costs in the first range ($0 to $600,000) are those
calculated in earlier examples and used in Table 10.1. The second range
($600,000 to $1,000,000) reflects the increase in the common stock equity cost
to 14.0%. In the final range, the increase in the long-term debt cost to 8.4% is
introduced. The weighted average costs of capital (WACC) for the three ranges
are summarized in the table shown at the bottom of Figure 10.1. These data
describe the weighted marginal cost of capital (WMCC), which increases as
levels of total new financing increase. Figure 10.1 presents the WMCC schedule.
Again, it is clear that the WMCC is an increasing function of the amount of total
new financing raised. The Investment Opportunities Schedule (IOS) At any given
time, a firm has certain investment opportunities available to it. These
opportunities differ with respect to the size of investment, risk, and return.5 The
firm’s investment opportunities schedule (IOS) is a ranking of investment
possibilities from best (highest return) to worst (lowest return). Generally, the first
project selected will have the highest return, the next project the second highest,
and so on. The return on investments will decrease as the firm accepts additional
projects. EXAMPLE Column 1 of
Table 10.3 shows Duchess Corporation’s current investment opportunities
schedule (IOS) listing the investment possibilities from best (highest return) to
worst (lowest return). Column 2 of the table shows the initial investment required
by each project. Column 3 shows the cumulative total invested funds necessary
to finance all projects better than and including the corresponding investment
opportunity. Plotting the project returns against the cumulative investment
(column 1 against column 3) results in the firm’s investment opportunities
schedule (IOS). A graph of the IOS for Duchess Corporation is given in Figure
10.2. Using the WMCC and IOS to Make Financing/Investment Decisions As
long as a project’s internal rate of return is greater than the weighted marginal
cost of new financing, the firm should accept the project.6 The return will
decrease with the acceptance of more projects, and the weighted marginal cost
of capital will increase because greater amounts of financing will be required.
The decision rule therefore would be: Accept projects up to the point at which the
marginal return on an investment equals its weighted marginal cost of capital.
Beyond that point, its investment return will be less than its capital cost. This
approach is consistent with the maximization of net present value (NPV) for
conventional projects for two reasons: (1) The NPV is positive as long as the IRR
exceeds the weighted average cost of capital, ka. (2) The larger the difference
between the IRR and ka, the larger the resulting NPV. Therefore, the acceptance
of projects beginning with those that have the greatest positive difference
between IRR and ka, down to the point at which IRR just equals ka, should result
in the maximum total NPV for all independent projects accepted. Such an
outcome is completely consistent with the firm’s goal of maximizing owner
wealth. EXAMPLE Figure 10.2 shows Duchess Corporation’s WMCC schedule
and IOS on the same set of axes. By raising $1,100,000 of new financing and
investing these funds in

TABLE 10.3 Investment Opportunities Schedule (IOS) for Duchess Corporation

Investment opportunity   Internal rate of return (IRR) (1)   Initial investment (2)   Cumulative investmenta (3)
A                        15.0%                               $100,000                 $100,000
B                        14.5%                               200,000                  300,000
C                        14.0%                               400,000                  700,000
D                        13.0%                               100,000                  800,000
E                        12.0%                               300,000                  1,100,000
F                        11.0%                               200,000                  1,300,000
G                        10.0%                               100,000                  1,400,000

a. The cumulative investment represents the total amount invested in projects with higher returns plus the investment required for the corresponding investment opportunity.
6. Although net present value could be used to make these decisions, the internal rate of return is used here because of the ease of comparison it offers.

projects A, B, C, D,
and E, the firm should maximize the wealth of its owners, because these projects
result in the maximum total net present value. Note that the 12.0% return on the
last dollar invested (in project E) exceeds its 11.5% weighted average cost.
Investment in project F is not feasible, because its 11.0% return is less than the
11.5% cost of funds available for investment. The firm’s optimal capital budget of
$1,100,000 is marked with an X in Figure 10.2. At that point, the IRR equals the
weighted average cost of capital, and the firm’s size as well as its shareholder
value will be optimized. In a sense, the size of the firm is determined by the
market—the availability of and returns on investment opportunities, and the
availability and cost of financing. In practice, most firms operate under capital
rationing. That is, management imposes constraints that keep the capital
expenditure budget below optimal (where IRR = ka). Because of this, a gap
frequently exists between the theoretically optimal capital budget and the firm’s
actual level of financing/investment.
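The WMCC/IOS decision rule above can be sketched as a simple loop over the ranked IOS, accepting each project while its IRR exceeds the WACC that applies to the cumulative level of new financing (the schedule and project data are from the Duchess example; the function names are illustrative):

```python
# Accept projects, best first, while IRR exceeds the weighted marginal
# cost of capital at the resulting level of total new financing.

def wmcc(total_financing):
    """Duchess Corporation's WMCC schedule (WACC % by financing range)."""
    if total_financing <= 600_000:
        return 9.8
    elif total_financing <= 1_000_000:
        return 10.3
    return 11.5

ios = [  # (project, IRR %, initial investment), ranked best to worst
    ("A", 15.0, 100_000), ("B", 14.5, 200_000), ("C", 14.0, 400_000),
    ("D", 13.0, 100_000), ("E", 12.0, 300_000), ("F", 11.0, 200_000),
    ("G", 10.0, 100_000),
]

accepted, cumulative = [], 0
for name, irr, cost in ios:
    # Because the IOS is ranked by decreasing return, once a project
    # fails this test every later one fails it too.
    if irr > wmcc(cumulative + cost):
        accepted.append(name)
        cumulative += cost

print(accepted, cumulative)  # ['A', 'B', 'C', 'D', 'E'] 1100000
```

Project E's 12.0% return clears the 11.5% marginal cost, while F's 11.0% does not, giving the optimal capital budget of $1,100,000 described in the text.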
The firm's debt component is stated as kd. Since there is a tax benefit from interest payments, the after-tax WACC component is kd(1 − T), where T is the tax rate.
Equity
Advantages:
no legal obligation to pay (depends on class of shares)
no maturity
lower financial risk
it could be cheaper than debt, with good prospects of profitability
Disadvantages:
new equity dilutes current ownership share of profits and voting rights (control)
cost of underwriting equity is much higher than debt
too much equity = target for a leveraged buy-out by another firm
no tax shield, dividends are not tax deductible, and may exhibit double-taxation
Three ways of calculating Ke:
1. Capital Asset Pricing Model
2. Dividend Discount Method
3. Bond Yield Plus Risk Premium Approach
Cost of new equity should be adjusted for any underwriting fees, termed flotation costs (F):

Ke = D1 / [P0(1 − F)] + g; where F = flotation costs, D1 = expected dividend, P0 = current price of the stock, and g = the growth rate.
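A minimal sketch of the flotation-adjusted cost of new equity; the input figures below are illustrative assumptions, not from the text:

```python
# Cost of new equity with flotation costs: Ke = D1 / [P0 * (1 - F)] + g.

def cost_of_new_equity(d1, p0, flotation, g):
    """d1: expected dividend; p0: current price; flotation: decimal F;
    g: growth rate. Returns a decimal rate."""
    return d1 / (p0 * (1 - flotation)) + g

# Hypothetical inputs: $2.00 dividend, $40 price, 10% flotation, 6% growth.
ke = cost_of_new_equity(d1=2.00, p0=40.00, flotation=0.10, g=0.06)
print(f"Ke = {ke:.2%}")  # dividend yield on net proceeds plus growth
```

Because flotation costs shrink the net proceeds per share, Ke on new equity is always higher than the cost of retained earnings computed from the same D1, P0, and g.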
Weighted average cost of capital equation:
WACC = (Wd)[(Kd)(1 − t)] + (Wpf)(Kpf) + (Wce)(Kce)
More to come: (K preferred shares, EVA, MCC, MCC schedule and
demonstration, IOS schedule and demonstration, MCC/IOS schedules)
The marginal cost of capital (MCC) is the cost of the last dollar of capital raised,
essentially the cost of another unit of capital raised. As more capital is raised, the
marginal cost of capital rises.
With the weights and costs given in our previous example, we computed
Newco's weighted average cost of capital as follows:
As the company continues to raise capital, the MCC can rise above the WACC once a cheaper source of capital is exhausted and a more expensive one must be used. When this occurs, the company's cost of capital increases. This is known as the "breakpoint" and can be calculated as follows:
Formula 11.9

Breakpoint for retained earnings = retained earnings ÷ wce
Example:
For Newco, assume we expect it to earn $50 million next year. As mentioned in
our previous examples, Newco's payout ratio is 30%. What is Newco's
breakpoint on the marginal cost curve, if we assume wce = 55%?
Answer:
Newco's breakpoint = [$50 million × (1 − 0.3)] ÷ 0.55 = $63.6 million
Thus, after Newco raises roughly $64 million of total capital, new common equity
will need to be issued and Newco's WACC will increase to 8.6%.
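The break-point arithmetic for Newco can be sketched as (the function name is illustrative; the figures are from the example):

```python
# Retained-earnings break point: earnings retained next year divided
# by the common equity weight in the capital structure.

def retained_earnings_breakpoint(net_income, payout_ratio, w_ce):
    retained = net_income * (1 - payout_ratio)  # earnings not paid out
    return retained / w_ce

bp = retained_earnings_breakpoint(net_income=50_000_000,
                                  payout_ratio=0.30, w_ce=0.55)
print(f"Break point = ${bp / 1e6:.1f} million")  # $63.6 million
```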
Factors that affect the cost of capital can be categorized as those that are
controlled by the company and those that are not.
NATURE
These activities of management consultants can involve two types of encounters with
clients:
Consultations; and
Engagements
OBJECTIVES
“To utilize the essential qualifications it has available to provide advice and technical
assistance which will enable client management to conduct its affairs effectively.”
1) Technical competence.
2) Familiarity with the client’s finance and control systems and his business
problems.
3) Analytical ability and experience in problem solution.
4) Professional independence, objectivity, and integrity.
1) Independent viewpoint
2) Professional advisor and counselor
3) Temporary professional service
4) Agent of change
SCOPE
A consultant’s dream, or a nightmare for the company and consultant alike? And what is this terrible thing?
And what is this terrible thing?
Run a project,
Bring about organisational change,
Implement new practices and procedures,
However, what can happen, particularly in larger organisations, is that what started as
a short term engagement can result in the consultant or contractor being in the
company for months or even years.
The consultancy objectives and outcomes were not clearly defined and agreed in
the beginning. The consultant is at fault from the perspective of not highlighting
the absence of defined scope and the company management is at fault for not
ensuring that the scope existed correctly in the first place.
The consultant becomes involved with areas outside of the specific scope remit
and becomes, in essence, an operational resource i.e. like a standard member of
the team.
The in-house skills are insufficient to cover once the consultant leaves i.e. the
competence in the organisation is absent.
These are just some of the consequences of an uncontrolled engagement and none
of it does either party any good!
Further, if the business ends up using the consultant for “other things” normally covered by an operational, full-time employee, then the business case for doing so needs to be addressed and understood. If there is benefit to that activity, great; if not, it should be stopped straight away.
In reality, if you are using contractors or consultants to fulfil longer-term key roles within your organisation, you would be better to bite the bullet and engage a full-time member of staff, or to examine the existing team’s skill-sets and ensure that you have the right team on board.
So when you are hiring any sort of external consultant or contract resource, ensure
that the lines are drawn as to what they will/will not be responsible for and what is
expected as the end result.
Technical Skills
Interpersonal Skills
These include personal attributes that make an individual amiable among people
and effective in accomplishing desirable objectives through people.
These involve the ability to understand and use the following approach in solving
business problems:
AREAS
Dimensions
Traditional Services:
Managerial Accounting
Design and appraisal of accounting system
Financial Management-related services
Project Feasibility Studies
STAGES
Solution development is the third phase of the problem solving process. The steps
involved in this phase are:
MANAGEMENT
Project evaluation and controls provide the means of successfully administering the
work plan, which defines what tasks are to be performed, when the tasks are to be
performed, and who will perform them. Without this information, the consultant and
Administrative controls
Time reporting procedures
Independent quality assurance reviews
NATURE
For example, a small school looking to expand its campus might perform a feasibility
study to determine if it should follow through, taking into account material and labor
costs, how disruptive the project would be to the students, the public opinion of the
expansion, and laws that might have an effect on the expansion.
A feasibility study tests the viability of an idea, a project or even a new business. The
goal of a feasibility study is to place emphasis on potential problems that could occur
if a project is pursued and determine if, after all significant factors are considered, the
project should be pursued. Feasibility studies also allow a business to address where
and how it will operate, potential obstacles, competition and the funding needed to
get the business up and running.
It is a systematic gathering and analysis of data and information which aims to find out the practicability and profitability of a proposed business undertaking.
PURPOSE
The feasibility study provides a base – technical, economic, and commercial – for an
investment decision on an industrial project. A feasibility study is not an end in itself,
but only a means to arrive at an investment decision. It will define and analyze the
critical elements that relate to the production of a given product together with
alternative approaches to such production. It will likewise provide a project of a
defined production capacity at a selected location, using a particular technology in
relation to defined materials and inputs, at identified investment and production costs
and sales revenues yielding a defined return on investment.
One of the most important uses of a project feasibility study is the minimization of the
risk of failure of business ventures thereby reducing the waste of valuable resources.
Immediate causes of business failures include: (1) undetected presence of a superior competing product; (2) failure to perfect the manufacturing process; (3) failure to sell the goods at a reasonable price; (4) failure to raise adequate working capital; and many more.
COMPONENTS
Description – a layout of the business, the products and/or services to be offered and
how they will be delivered.
Market feasibility – describes the industry, the current and future market potential,
competition, sales estimations and prospective buyers.
Technical feasibility – lays out details on how a good or service will be delivered,
which includes transportation, business location, technology needed, materials and
labor.
A Feasibility Study is a formal project document that shows results of the analysis,
research and evaluation of a proposed project and determines if this project is
technically feasible, cost-effective and profitable. The primary goal of a feasibility study
is to assess and prove the economic and technical viability of the business idea. A
project feasibility study allows exploring and analysing business opportunities and
making a strategic decision on the necessity to initiate the project. For each project
passing through the Initiation Phase, a feasibility study should be developed in order
for investors to ensure that their project is technically feasible, cost-effective and
profitable. A thorough feasibility study can give you the right answer before you
spend money, time and resources on an idea that is not viable. It must therefore be
conducted with an objective, unbiased approach to provide information upon which
decisions can be based.
If you are planning on conducting a feasibility study, you will need to include the
following important elements:
The project scope: The first step is to clearly define the business problem/opportunity
that has to be addressed. The project scope has to be definitive and to the point. Rambling narratives serve no purpose and can actually confuse participants. Also
ensure that you define the parts of the business that would be affected either directly
or indirectly. This would include project participants and end-users. A well-defined
project scope can ensure an accurate feasibility study. Starting a project without a
well-defined scope can easily lead to wandering outside budget and time.
The current Market analysis: This step is critical as it examines the business
environment in which the new product or service will be placed. From this analysis,
you can discover the strengths and weaknesses of the current approach. Reviewing
the strengths, weaknesses, opportunities, and threats faced by a project helps
decision makers focus on the big picture. In some organizations, the executives may
not want to approach a new market unless they know they can dominate it. Other
companies prefer to focus on profits gained instead of market share.
The approach: You will next have to consider and choose the recommended solution
or course of action to meet your requirements. You can consider various alternatives
and then choose a solution that is the most preferable. Before you finalize on the
approach, ask yourself the following questions: Does the approach meet my
requirements? Is the approach taken a practical and viable solution?
Review: Finally, all the above elements will be assembled into a feasibility study and
a formal review will be conducted. The review will be used to verify the accuracy of the
feasibility study and to make a project decision. At this stage, you can approve, reject
or even revise the study for making a decision. If the feasibility study is approved,
make sure that all the involved parties sign the document.
Economic / Marketing
Before the project is formulated, the size and composition of the present effective
market demand, by segment, should be determined in order to estimate the possible
degree of market penetration by a particular product. Also, the income from sales
should be projected taking into account technology, plant capacity, production
program, and marketing strategy. The latter has to be set up during the feasibility
study giving due consideration to product pricing, promotional measures, distribution
systems, and costs.
Once the sales projections are available, a detailed production program should be
made showing the various production activities and their timing. The final step at this
stage of a feasibility study is to determine the plant capacity taking into account
alternative levels of production, investment outlay and sales revenues.
Technical
The technical aspect of a project feasibility study will cover the following:
Production Program
Data and alternatives
Selection of production program
Estimate costs of emissions disposal
Plant Capacity
Data and alternatives
Determination of feasible normal plant capacity
Materials and Inputs
Data and alternatives
Supply program
Location and Site
Location
o Data and alternatives
Site
o Data and alternatives
o Site selection
o Cost estimate
Process Engineering
Project layouts
Scope of project
Technology(ies)
Equipment
Civil engineering works
Financial
By reading through the following pages you will receive a high level understanding of
the following:
1. The purpose of good financial planning
2. The approach to arriving at realistic Start-up or Expansion Costs
3. The up-front homework and planning process in developing Key Assumptions for
sales, cost of production and general and administration expenses
4. The up-front homework and planning process required in developing Key
Assumptions for cash flow planning
5. An overview and an example of a Balance Sheet and Income Statement
Introduction
Entrepreneurs, start-up companies, or existing companies will utilize and require the
development of numerous financial documents during the planning and operational
stages. Each plays an important role in planning and managing your business. Some
may be used in the earliest stages - simply to determine whether or not your
proposed or existing business is feasible or sustainable. Others will be used to
provide information that will enable you to attract partners, investors or financing
capital; while some will monitor and benchmark your business activities on an
ongoing basis.
The structure of your business will determine the variation and format of some of the
financial documents that you will utilize. The typical business structures are: sole
proprietorship, partnerships or corporations. Additional types of business structures
may possibly include new generation co-ops or joint ventures. Your financial and/or
legal professional will assist you in determining the structure best suited to your
business needs.
Critical business decisions need to be made before you invest significant time and
capital. It is important to adequately complete market research, hold discussions with
possible suppliers and be able to place estimated costs into models that will enable
you to more accurately complete feasibility assessments.
Tip: It is important to understand that all three financial statements are related and
connected indicators of the business's feasibility, risk and profitability (Balance
Sheet, Income Statement and Cash Flow Statement).
As you go through the preparation of your financial documents and business plans,
you will need to document and sort the information that is used to create these
documents. A spreadsheet (or combination of several spreadsheets) is one of the
most effective tools for gathering, compiling and managing this information.
Tip: Linking your spreadsheets to one another and merging the data together will
make it much simpler and faster to update your documents.
It is highly recommended that you discuss your business start-up or expansion idea in
advance with your financial coach so he or she may provide you with guidance in the
key assumptions they suggest or recommend. They may help you develop detailed
spreadsheets, and provide supporting comments.
Tip: The greater the accuracy of the key assumptions/information that is used in the
initial planning stages of your business - the greater will be your ability to make good
business decisions moving forward. Utilize your suppliers and other business
contacts (as needed) to aid you in gathering up-to-date information.
Not all assumptions require a detailed breakdown. Your financial professional will aid
you in finding the best spreadsheet tools suited to your needs. Every business is
unique and therefore each may require additional or specific information to be
collected.
Start-up Costs
What will it cost to get your business off the ground or implement expansion plans?
Begin collecting the data. Talk to potential suppliers for initial pricing of supplies and
materials. If you require capital, make some early inquiries to determine anticipated
borrowing expenses and terms.
As you collect your information, keep a record of it. Below is a simple example of a
common Start-up/Expansion Capital Worksheet. This example shows some of the
basic information that would commonly be used in a start-up business.
Combine and add your own specific information that is right for your business.
Tip: You should use startup cost planning for a start-up company and also when
expanding your business or launching a new product line. Customize the spreadsheet
for your own purposes.
In addition to tracking the total estimated costs of starting up your business, this
particular spreadsheet example also allows you to assign the source(s) of the capital
required.
Figure 1-2
Similar to startup or expansion costs, you need to investigate and give careful
consideration to the development of other key data that would be utilized in the
completion of the opening balance sheet, forecasted profit and loss statements and
the development of cash flows.
One of the first key assumptions that needs to be addressed in the start-up of a new
business venture and/or expansion is the source of equity and/or debt. This is the
assumption about the contributions to be made to the business by ownership,
whether sole proprietor, partners, or shareholders. Contributions can take the form of
cash contributions through share purchase, shareholders'/partners' loans, and
contributions of assets in return for equity. You would be advised to develop a
spreadsheet that shows the timing and amount of each contribution and the terms on
which they are being made. The spreadsheet should show contributions both at the
formation of the business and throughout the planning period.
Prior to forecasting your sales projections and revenue, you need to calculate a
realistic cost for your product(s) and break the cost down into a per unit basis. The
cost must include all production inputs: raw materials, utilities (power/water etc),
packaging, handling expenses and any other items involved in production. Labour
costs associated with production should be addressed here as well. Below is an
example of a basic worksheet to calculate product cost.
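The per-unit cost calculation described above can be sketched in a few lines of code. The input items and all amounts below are purely illustrative assumptions, not values from any worksheet in this reviewer.

```python
# Illustrative per-unit product cost: total of all production inputs
# divided by the number of units produced. All figures are assumed.
def unit_cost(input_costs, units_produced):
    """Sum every production input and spread it over units produced."""
    total = sum(input_costs.values())
    return total / units_produced

monthly_inputs = {              # assumed monthly production costs
    "raw materials":    12000.0,
    "utilities":         1500.0,
    "packaging":          800.0,
    "handling":           700.0,
    "production labour": 9000.0,
}
cost_per_unit = unit_cost(monthly_inputs, units_produced=4000)  # 6.00 per unit
```

A spreadsheet row per input plays the same role; the point is that every production input, including labour, must be captured before the total is divided into a per-unit figure.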
Tip: If you manufacture a product, it is advisable that you include not only your
material costs in your cost of sales, but all manufacturing costs such as rent
(equipment rent only), utilities and labour - anything that is variable and related to
manufacturing your product.
Placing the right selling price on your product or service can be the difference
between financial success and failure. In order to price your product or service
profitably, you need to take into consideration many factors such as cost of
production, your customer, your competitors and how much value the market places
on your product.
The cost of production includes both variable and fixed costs. This is a very important
step and is the foundation for establishing an accurate price for your product. Do not
guess; know your costs and be sure to include all costs.
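One common approach consistent with the advice above is cost-plus pricing: per-unit variable cost, plus an allocation of fixed costs over expected volume, marked up by a target margin. The figures and the 25% margin below are hypothetical assumptions for illustration only; value-based considerations discussed below may justify a different price.

```python
# Cost-plus price sketch (all inputs are assumed, not prescribed).
def cost_plus_price(variable_cost_per_unit, fixed_costs, expected_units, margin):
    """Full unit cost (variable + allocated fixed) marked up by a target margin."""
    full_cost = variable_cost_per_unit + fixed_costs / expected_units
    return full_cost * (1 + margin)

# e.g. 6.00 variable cost, 20,000 fixed costs over 10,000 units, 25% margin:
price = cost_plus_price(6.00, 20000.0, 10000, 0.25)  # 10.00
```

Note that this gives a floor grounded in costs; the ceiling is set by the value customers place on the product, as the paragraphs that follow explain.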
Price is not the same as value. Value is a perception in your customer’s mind. If you
have a unique product that the customer needs or wants, they will place a higher
value on it. Your price should reflect how much value your customer places on your
product. If the product you are producing is commonly available and you have
considerable competition, customers will place less value on your product and it may
be very difficult to establish a market share.
Critical Questions to ask yourself are:
1. Do you have a unique product with high consumer value?
2. Can you produce your product better or cheaper than all the other suppliers?
3. Do you have much competition?
4. What is the competition doing to maintain or grow their market share?
5. Will people buy your product over the competition and why?
6. How much would your customers be willing to pay?
7. Is there room for your product in the marketplace?
Answers to these and many other critical questions will require thorough market
research and other investigation efforts. Consider consulting a market analyst if you
are unsure of your product/service potential.
Once you have established that you have a product worthwhile to market, and you
have established a realistic price for your product (a cost price to produce, ship and
market, plus a profit margin) you can then determine if the market will support your
venture.
Tip: Research into pricing of similar or like products can include the use of your own
inquiries into the marketplace, focus groups, trial markets or enlisting the assistance
of professionals.
One of the most significant expenses a business will incur is that of salaries (wages
and benefits). Create an accurate monthly estimate of your labour costs through each
of your planning stages. You will also need to project labour costs in your cash flow
summaries, to ensure your business can manage and meet payroll obligations. Below
is an example of a labour cost spreadsheet that also estimates the company costs of
employee benefits. If you intend to pay bonuses, you would simply add another row
or rows as required. It will be critical to outline your assumptions as to the timing of
these bonuses as your financial advisor will require this information to manage your
cash flow. Bonuses should only be paid out if the company is profitable.
In this particular spreadsheet example, the jobs have been highlighted in different
colours. This is to help assign their associated cost to either overhead costs (fixed) or
cost of sales.
Often janitorial and maintenance services will be split between fixed costs and cost of
sales.
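A labour-cost spreadsheet of the kind described, with an employer benefit load and a split between overhead and cost of sales, can be sketched as follows. The 12% benefit rate, the positions and the wages are assumed for illustration.

```python
BENEFIT_RATE = 0.12  # assumed employer benefit load (12% of wages)

# (job, monthly wage, cost classification) -- illustrative assumptions
positions = [
    ("production worker", 3000.0, "cost of sales"),
    ("production worker", 3000.0, "cost of sales"),
    ("office manager",    3500.0, "overhead"),
]

def monthly_labour_cost(positions, benefit_rate=BENEFIT_RATE):
    """Wages plus benefits, accumulated into fixed (overhead) vs cost of sales."""
    totals = {"cost of sales": 0.0, "overhead": 0.0}
    for _job, wage, bucket in positions:
        totals[bucket] += wage * (1 + benefit_rate)
    return totals

totals = monthly_labour_cost(positions)
```

Bonuses, if planned, would simply be additional rows, with their timing documented for the cash flow as the text advises.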
Tip: At times you may have special sales, (seasonal highs or lows) that affect your
forecasts. It is very important that you include in your key assumptions how you
managed to arrive at these various forecasted levels. Maintain a record of your
specific assumptions in these areas.
The preparation of your projected income statement is the profit-planning part of
your financial plan. The example below is for a single product; you would need to
complete this for each additional product and/or source of revenue.
Figure 1-5
Tip: As you are developing your sales forecast, it is critical that you document and
develop a narrative in your business plan that can support your projections including
the best estimate of timing of the conversion of sales to cash. The assumption of the
timing from invoice to conversion of cash is required by your financial coach. Are
these sales projections reasonable? Can they be supported through signed orders,
contracts or letters of intent from your customers? Do you have a competitive
advantage with your product that fills a consumer need or is at a price better than
anything else currently on the market? Can your operation’s infrastructure support the
volume of sales? Lenders or investors will need evidence that these projections are
realistic. Over-estimating your sales forecasts could result in financial disaster.
To complete an accurate cash flow forecast it will be critical to make key assumptions
around the following:
1. The amount and timing of cash equity contributions by the owners
2. The amount and timing (advancements) of any loans that will be requested for approval
3. The timing and amount of payments for capital acquisitions (i.e. land, building and development)
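The three assumption sets above can be organized into a simple month-by-month financing schedule. The months and amounts below are hypothetical, chosen only to show the structure.

```python
# Hypothetical schedule (all amounts assumed):
# month -> (owner equity in, loan advances in, capital acquisition payments out)
schedule = {
    1: (50000.0,     0.0, 20000.0),
    2: (    0.0, 30000.0, 40000.0),
    3: (    0.0,     0.0,     0.0),
}

def net_financing_by_month(schedule):
    """Net cash effect per month of equity, loan advances and capital payments."""
    return {m: equity + loans - capex
            for m, (equity, loans, capex) in schedule.items()}

net = net_financing_by_month(schedule)  # {1: 30000.0, 2: -10000.0, 3: 0.0}
```

Feeding these monthly net amounts into the cash flow forecast makes the timing of equity, debt and capital spending explicit rather than implicit.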
Tip: Quite often the development of an initial cash flow statement will initiate a revised
cash flow statement that will include the additional financing required to fund the cash
flow deficit.
The Balance Sheet is a summary of the assets and liabilities and equity of a business
at a specific point in time. In addition, it provides a picture of the financial solvency
and risk bearing ability of the business.
The Balance Sheet will vary slightly depending on the legal structure of your company
whether it is a sole proprietorship, partnership or corporation. This is an example of
what a typical balance sheet may look like for a corporate entity (Limited Company). If
your business is a sole proprietorship, the equity section of the balance sheet will
simply be the difference between the assets and liabilities - there will be no indication
of original share capital reflected. If you choose to operate the business as a
partnership or corporation, the owners' equity section will reflect the equity
breakdown amongst partners depending on their percentage of ownership.
Tip: As mentioned, balance sheets will look different depending on corporate
structures. A sole proprietorship will not show any share capital; equity will simply be
the difference between assets and liabilities. For partnerships, the equity portion will
be shown as per the breakdown amongst the partners. In a corporation (as per the
example on the left), equity will be shown as share capital and retained earnings of
the company.
Figure 1.6 - A larger version of the Balance Sheet (13K PDF) is available for your
review.
The Income Statement, commonly referred to as the P&L statement, summarizes the
revenue and expenses for a specific time period (one month, one quarter, one year,
etc.) The Projected Income Statement is a snapshot of your forecasted sales, cost of
sales, and expenses. For existing companies the projected income statement should
cover the 12-month period following the latest business year-end and be compared
to your previous results. Any large differences in line items should be explained in
detail.
Tip: There will be no forecast in the income statement for the payment of taxes (for a
sole proprietorship). The main difference between a company, a partnership and a
sole proprietorship is the area of taxes payable and remuneration. Your financial
advisor will assist you in how you will reflect this in your forecast(s). For example,
there may be no salary expense in a sole proprietorship or partnership (they may be
shown as withdrawals after profit calculations), whereas active shareholders'
remuneration for wages and bonuses may be shown as a management expense in
the general administration section of the income statement. Depreciation expenses
could also be handled differently in a sole proprietorship if these assets are utilized in
the generation of revenues not associated with this venture. You are encouraged to
engage professional assistance in the creation of these documents. Your advisor will
help you complete these forms in accordance with generally accepted accounting
principles (GAAP).
Figure 1-7 - A larger version of the Income Statement (13K PDF) is available for your
review.
Tip: The above example is for a startup company, which is why no beginning
inventory is shown. Professional accountants may choose to show the cost of goods
sold section in various formats depending on the industry.
Tip: If the whole area of financial documents is new to you, you may wonder about the
difference between the income and cash flow statements. The income statement records
your revenue and expenses for a period of time. Revenue is recorded at the point it is
earned, not when payment is received, and an expense is recorded at the time it is
incurred, not when it is paid. The cash flow statement forecasts the assumptions as to
when revenues from sales and other incoming funds are going to be received, and the
assumptions on the timing of paying expenses, capital purchases, and any loan
repayments.
Once you have made your sales projections based on volume, calculate the cash flow
projections by converting your sales volumes into income. In the example below, accounts
receivable are shown based on cash sales with 30/60/90-day receivables. Deduct
outflows from all cash inflows and you will be able to predict your cash flow requirements
for each month. If you find yourself in a negative position, it becomes a critical decision
whether or not to move forward with your business, unless you can make valid
adjustments to either your inflows or outflows through the extension of accounts payable
or approved operating lines of credit. These options should only be considered if in future
months there will be excess cash to pay down operating loans and/or accounts payable.
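The conversion of sales into cash inflows under a 30/60/90-day collection pattern can be sketched as follows. The collection mix (40% in the month of sale, then 30%, 20% and 10% in the following months) and the sales figures are assumptions for illustration only.

```python
# Spread each month's sales over the months in which the cash is collected.
def cash_collections(monthly_sales, mix=(0.4, 0.3, 0.2, 0.1)):
    """mix = assumed share collected in the month of sale, +30, +60, +90 days."""
    n = len(monthly_sales)
    collections = [0.0] * n
    for i, sales in enumerate(monthly_sales):
        for lag, share in enumerate(mix):
            if i + lag < n:
                collections[i + lag] += sales * share
    return collections

# Assumed sales for four months:
inflows = cash_collections([100000.0, 120000.0, 90000.0, 110000.0])
# -> [40000.0, 78000.0, 92000.0, 105000.0]
```

Notice how cash inflows lag sales: the strongest collection month here is the last one, funded largely by earlier sales, which is exactly why the cash flow forecast, not the income statement, reveals month-by-month viability.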
For a new business, the cash flow forecast can be more important than the forecast of the
Income Statement because it details the amount and timing of expected cash inflow and
outflows. Usually the levels of profits, particularly during the startup years of a business,
will not be sufficient to finance operating cash needs. Moreover, cash inflows do not
match the outflows on a short-term basis. The cash flow forecasts will indicate these
conditions and if necessary the aforementioned cash flow management strategies may
have to be implemented.
Given a level of projected sales, associated expenses and capital expenditure plans over
a specific period, the cash flow statement will highlight the need for and the timing of
additional financing and show your peak requirements for working capital. You must
decide how this additional financing is to be obtained, on what terms and how it is to be
repaid.
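One way to surface the peak financing requirement described above is to track the cumulative cash balance over the projected monthly net flows and take its worst point. The opening balance and flows below are hypothetical.

```python
# Peak additional financing implied by projected monthly net cash flows.
def peak_financing_need(opening_cash, net_monthly_flows):
    """Most negative cumulative balance, i.e. the largest shortfall the
    plan must finance (0.0 if the balance never goes negative)."""
    balance, worst = opening_cash, opening_cash
    for flow in net_monthly_flows:
        balance += flow
        worst = min(worst, balance)
    return max(0.0, -worst)

# Assumed: 10,000 opening cash, early deficits, then recovery.
need = peak_financing_need(10000.0, [-15000.0, -8000.0, 5000.0, 20000.0])
# -> 13000.0 (worst cumulative balance is -13,000 in month two)
```

The timing of the worst month matters as much as its size, since it tells you when the additional financing must be in place.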
Tip: A good cash flow projection should forecast monthly amounts for month-end
receivables, payables and inventory. This information is often required so that
management can calculate their operating loan margin requirements as stipulated by
their lender. Forecasting these month-end numbers and testing them against margin
conditions, in advance, eliminates challenges you may experience with your lender if
you're unable to meet your conditions at a later date. Being able to test these numbers
allows you to alter your financial projections and take alternative measures.
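Margin formulas vary by lender; as a hedged sketch, assume an operating loan is margined at 75% of receivables plus 50% of inventory, less priority payables. Those advance rates and all amounts below are assumptions, not terms of any actual facility.

```python
# Test projected month-end numbers against assumed lender margin conditions.
def margin_room(receivables, inventory, payables, loan_balance,
                ar_rate=0.75, inv_rate=0.50):
    """Room left under the margin formula; negative means a breach."""
    margin_base = ar_rate * receivables + inv_rate * inventory - payables
    return margin_base - loan_balance

# Assumed month-end projections:
room = margin_room(receivables=80000.0, inventory=40000.0,
                   payables=25000.0, loan_balance=45000.0)  # 10000.0
```

Running every projected month-end through a check like this, before approaching the lender, is the advance testing the tip recommends.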
Financial Ratios
Ratios are useful when comparing your company with the competition on financial
performance and also when benchmarking the performance of your company. Ratios
can measure your company’s performance against the performance of other
companies. Most ratios will be calculated from information provided by the financial
statements. Financial ratios can analyze trends and compare your financial status to
other similar companies. They can also be used to monitor your companies overall
financial status. In the table below, many of the common ratios are shown along with
the formulas that are used to calculate them.
Figure 1-9 - A larger version of the Ratio Analysis (24K PDF) is available for your
review.
Liquidity ratios provide information about your company’s ability to meet its short term
debt. The Current Ratio and Quick Ratio (also known as the acid test) represent
assets that can quickly be converted to cash to cover creditor demands.
Asset Turnover Ratios indicate how well you are utilizing your company’s assets.
Receivable Turnover, Average Collection Period and Inventory Turnover are the main
tools to monitor your assets.
Financial Leverage Ratios indicate your financial state and the solvency of your
company. They measure your company’s ability to manage and use long term debt.
The Debt Ratio and Debt-to-Equity (Leverage Ratio) Ratio are used in these
calculations.
Profitability Ratios include Gross Profit Margin, Return on Assets and Return on
Equity ratios. These ratios primarily are used to indicate your company’s ability to
generate profits, and return to the shareholders’ investments.
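The ratio formulas named in the four groups above can be written directly. The sample figures used in the final two lines are invented for illustration.

```python
# Liquidity: ability to cover short-term debt with current assets.
def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

# The "acid test": excludes inventory, the least liquid current asset.
def quick_ratio(current_assets, inventory, current_liabilities):
    return (current_assets - inventory) / current_liabilities

# Leverage: reliance on debt relative to owners' equity.
def debt_to_equity(total_debt, total_equity):
    return total_debt / total_equity

# Profitability: return generated on the shareholders' investment.
def return_on_equity(net_income, total_equity):
    return net_income / total_equity

# Invented sample figures:
cr = current_ratio(200000.0, 100000.0)          # 2.0
qr = quick_ratio(200000.0, 50000.0, 100000.0)   # 1.5
```

The value of each ratio comes from comparison, against prior periods, industry benchmarks or lender covenants, rather than from any single number in isolation.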
Your financial advisor will assist you in these ratio calculations and utilize the ones
that best measure your company's financial well-being.
If you are new or uncomfortable in working with your financial business plan, work
with a financial advisor who can guide you through the processes involved in
continually monitoring the financial affairs of your business or business venture.
Keep your information current and review the documents on a regular basis (monthly
or more often if needed). Review them with key individuals within your company.
A simple checklist such as the one below may help you in your ongoing management
practices.
Figure 1-10 - A larger version of the Checklist (10K PDF) is available for your review.
Tip: Create and customize your own monthly checklist that helps you to be in control
of the day-to-day operations. Take immediate action if you find areas that need
attention or anything that appears to be questionable.
Review these suggested tasks with your financial advisor to see if he or she has other
recommendations to add.
Tip: If Key Performance Indicators (KPI) are not being met, an action plan needs to
be implemented.
Conclusion
The information provided here gives you some guidelines and examples from which
to begin the development of your own financial documents and/or business plan.
Every company has a unique set of circumstances and due diligence is required on
your part to seek out professional guidance in preparation of these important
documents. The more you are able to accurately forecast and estimate your
expenses, sales volumes and revenues – the more you will be able to make sound
business decisions to proceed, stop or alter your business plans moving forward.
As you complete your documents, time will pass and some of the key assumptions in
the information will change. Keep this information current; update the most critical
assumptions regularly. Maintaining accurate up-to-date financial documents will
enable you to have accurate information to present to a lender or potential investor.
These documents will provide you with the management tools you need to make
sound business decisions at any time.
Tip: Before a business and financial justification can be made to proceed with a start-
up business and/or expansion, the target market must be sharply defined; the product
concept and positioning strategy must be confirmed; the benefits to be delivered and
the value proposition must be defined and validated, as must the physical attributes of
the product: features, specifications, and performance requirements. All costs of the
proposed plans need to be well investigated and key assumptions documented.
A good financial plan, developed with the assistance of financial professionals, will be
invaluable in ensuring good decisions are made.
The Statement of Cash Flows is the most critical forecast since it reflects viability
rather than profitability. It can also be the most uncertain statement as projections
extend into the future. Therefore, monthly cash flow is a key statement since it
enables calculation of “coverage” at any given point.
Preparing projected financial statements can be very time consuming and it requires
a careful analysis of the company’s past and present financial health. Projected
financial statements project or forecast a company’s performance in the near future.
Various factors are considered for analysis of the financial health of the company. An
analyst uses the following points to evaluate the position of the company:
Whether the company's operational activities are up to the mark
Whether the company is well equipped financially
The condition of the market: whether it is growing, at equilibrium or shrinking
The status of the company in relation to the other companies in the industry
The strengths and weaknesses prevailing in the management of the company, the
type of product produced by the company, the economic cycle of the company, and
the accompanying hazards in the production of goods
By carefully studying the various trends in the company's past performance, the
analyst tries to predict the company's performance in the future. Even if the financial
health of a company has remained fairly stable over the years and the projected
financial statements forecast a still better growth trend, any unforeseen event may
change the course reflected in the projected financial statements.
The unforeseen events may occur in any part of the globe thereby impacting global
economy in an adverse manner. An analyst keeps provision for such events and
prepares details of a contingency fund, which can be drawn upon if the above-
mentioned circumstances are encountered by the company.
There are many online templates for financial projections that are a good place to
start when you are preparing to draft your projections. It is also recommended that
you include charts and tables when explaining copious amounts of numerical data; it
is a much cleaner and engaging presentation than just paragraphs of numbers and
figures.
Income Statement: An Income Statement shows your revenues, expenses and profit
for a particular period. If you are developing these projections prior to starting your
business, this is where you will want to do the bulk of your forecasting. The key
sections of an income statement are:
Revenue – This is the money you will earn from whatever goods or services you
provide.
Expenses – Be sure to account for all of the expenses you will encounter,
including Direct Costs (i.e. materials, equipment rentals, employee wages, your
salary, etc.) and General and Administrative Costs (i.e. accounting and legal fees,
advertising, bank charges, insurance, office rent, telecommunications, etc.).
Total Income – Your revenue minus your expenses, before income taxes.
Income Taxes
Net Income – Your total income minus income taxes.
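The arithmetic of these sections can be illustrated as follows, assuming, purely for simplicity, a flat 25% tax rate; actual tax treatment depends on your business structure, as discussed earlier.

```python
# Income statement arithmetic: revenue -> total income -> taxes -> net income.
def income_statement(revenue, expenses, tax_rate=0.25):
    """Flat tax rate is an illustrative assumption, not actual tax law."""
    total_income = revenue - expenses              # income before taxes
    income_taxes = max(0.0, total_income) * tax_rate
    net_income = total_income - income_taxes
    return total_income, income_taxes, net_income

# Assumed figures: 500,000 revenue against 420,000 total expenses.
total, taxes, net = income_statement(500000.0, 420000.0)
# -> (80000.0, 20000.0, 60000.0)
```

The `max(0.0, ...)` guard simply avoids computing negative tax in a loss year; real loss treatment (carry-forwards, etc.) is outside this sketch.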
Cash Flow Projection: A Cash Flow Projection will demonstrate to a loan officer or
investor that you are a good credit risk and can pay back a loan if it’s granted. The
three sections of a Cash Flow Projection are:
Cash Revenues – This is an overview of your estimated sales for a given time
period. Be sure that you only account for cash sales you will collect and not
credit.
Cash Disbursements – Look through your ledger and list all of the cash
expenditures that you expect to pay that month.
Reconciliation of Cash Revenues to Cash Disbursements – This one is pretty
easy: you just take the amount of cash disbursements and subtract it from your
total cash revenue. If you have a balance from the previous month, you’ll want to
carry this amount over and add it to your cash revenue total.
Note – One of the key pitfalls of working on your cash flow projections is being
overly optimistic about your revenue.
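The reconciliation step just described, including the balance carried over from the previous month, is simple arithmetic. The figures below are invented.

```python
# Month-end cash: prior balance plus cash receipts minus cash disbursements.
def reconcile(cash_revenues, cash_disbursements, opening_balance=0.0):
    return opening_balance + cash_revenues - cash_disbursements

# Assumed: 5,000 carried over, 45,000 collected, 38,000 paid out.
month_end = reconcile(45000.0, 38000.0, opening_balance=5000.0)  # 12000.0
```

Each month's closing balance becomes the next month's opening balance, which is how a chain of these reconciliations builds the full projection.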
Balance Sheet: This overview will present a picture of your business’ net worth at a
particular time. It is a summary of all your business’ financial data in three categories:
assets, liabilities and equity.
Assets – These are the tangible objects of financial value owned by your
company.
Liabilities – These are any debts your business owes to a creditor.
Equity – The net difference between your organization's total assets and its
total liabilities.
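The accounting equation behind these three categories (assets = liabilities + equity) can be checked mechanically, which is one way to catch the inconsistencies investors and creditors look for. The amounts below are invented.

```python
# Equity is the residual claim: assets minus liabilities.
def owners_equity(total_assets, total_liabilities):
    return total_assets - total_liabilities

def balances(total_assets, total_liabilities, reported_equity):
    """True when the balance sheet satisfies assets = liabilities + equity
    (to within half a cent, to absorb rounding)."""
    return abs(total_assets - (total_liabilities + reported_equity)) < 0.005

eq = owners_equity(250000.0, 180000.0)  # 70000.0
```

Running a check like `balances(...)` against the figures carried over from the income statement and cash flow projection is a cheap version of the triple-check recommended above.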
Note – You will want to be sure that the information contained in the balance
sheet is a summary of the information you previously presented in the Income
Statement and Cash Flow Projection. This is the place to triple-check your work
– investors and creditors will be looking for any inconsistencies, and that can
greatly impact their willingness to extend your company a line of credit.
To complete your financial projections, you’ll want to provide a quick overview and
analysis of the included information. Think of this overview as an executive summary,
providing a concise overview of the figures you’ve presented.
Financial Projections Making sense of the money The Burning Questions… • What
are your capital needs? – Projections • How will you get that capital? – Structure:
Equity or debt? • Ownership structure – Up-front or staged? • What about a return for
your investors? – How soon? – How much? – What is the exit-strategy? It’s the
cash… • Many entrepreneurs of profitable and rapidly growing companies are
puzzled by the fact that they never seem to have enough cash. Financial Forecasting
• Build a set of assumptions • Estimate your operating cycle • Forecast sales • Use
the sales to create – Pro forma balance sheet, income statement and cash flow
statements – Factor risk into the projections • Sum up your cash needs to get past
the burn out point Financial Forecasting Creating the Pro Forma Analysis • Develop
assumptions – Pricing assumptions – Sales level and growth assumptions –
Inventory needs assumptions – Payables and wage cycle assumptions – Fixed cost
and tax expectations • Project cash needs – Monthly or quarterly • Project an Income
Statement • Project an Balance Sheet 6 Example OPERATING & CASH BUDGET
CASH BUDGET APRIL MAY JUNE JULY Begin. Cash $23,000 $23,000 $23,000
$23,000 Cash receipts: Customer collections 105,800 156,400 156,400 124,200 Totl
cash before financing 128,800 179,400 179,400 147,200 Cash disbursements:
Merchandise 98,210 111,090 93,380 75,670 Wages & Comm 21,275 28,175 29,900
24,725 Misc Exp 5,750 9,200 6,900 5,750 Rent 4,600 4,600 4,600 4,600 Truck
Purchase 6,900 0 0 0 Total Disbursements 136,735 153,065 134,780 110,745 7
Example OPERATING & CASH BUDGET APRIL MAY JUNE JULY Total
Disbursements 136,735 153,065 134,780 110,745 Min. cash balance 23,000 23,000
23,000 23,000 Total cash needed 159,735 176,065 157,780 133,745 Excess of total
cash -$30,935 $3,335 $21,620 $13,455 Financing New borrowing $30,935 $0 $0 $0
Repayments 2,871 21,199 6,865 Loan balance $30,935 $28,064 $6,865 $0 Interest 0
464 421 103 Total effects of financing $30,935 -$3,335 -$21,620 -$6,968 Cash
balance 23,000 23,000 23,000 29,487 Building Pro Forma Statements Income
Statement Net Income Historical data or industry ratios Cash Flow Statement NI +
Dep.≈ Op. Cash Flow Balance Sheet Assets needed to support sales Current
Permanent Liabilities (debt) Sales estimates and assumptions Debt determines
interest expense Building Pro Forma Statements Income Statement Net Income
Historical data or industry ratios Cash Flow Statement NI + Dep.≈ Op. Cash Flow
Reviewer 537
Management Advisory Services
+NWC needs +Capital investment needs Balance Sheet Assets needed to support
sales Current Permanent Liabilities (debt) Sales estimates and assumptions Year on
year changes determine cash flow needs Building Pro Forma Statements Income
Statement Net Income Historical data or industry ratios Cash Flow Statement NI +
Dep.≈ Op. Cash Flow +NWC needs +Capital investment needs Balance Sheet
Assets needed to support sales Current Permanent Liabilities (debt) Sales estimates
and assumptions Debt determines interest expense Critical Determinants of Financial
Needs • Minimum Efficient Scale • Profitability • Sales Growth • Cash Flow Critical
Determinants of Financial Needs Minimum Efficient Scale • Estimating how much
volume is needed to get to the industry MES – Capital intensive high MES –
Consulting: low MES
• How to know
  – Look at the existing structure of the industry
  – Look at the fixed and intangible assets needed to compete

Critical Determinants of Financial Needs: Profitability
• High profit margins lower cash needs
• Rapid profitability lowers cash needs
• However, high profitability can lead to rapid growth
  – High growth means high cash needs

Critical Determinants of Financial Needs: Sales Growth
• Key questions
  – When will the venture begin to generate revenues?
  – Once revenues are being generated, how rapidly will they grow?
  – What is the best time frame for forecasting? (3 years, 5 years, 10 years?)
  – What is the appropriate forecasting interval? (Monthly, quarterly, annually?)
• Identify a yardstick company – is it comparable on:
  – Target audience
  – Distribution channels
  – Substitutes
  – Manufacturing technologies
• Gather data
• Supply-side approach
  – Test market
  – Fundamental analysis
• Growth assumptions drive revenues
• Collection assumptions drive cash inflows

[Chart: start-up growth rate and revenue growth, plotted over months 0 to 78 after launch]

Fundamental Determinants of Sales Revenues
• What geographic market will the venture serve?
• How many potential customers are in the market?
  – What segments will be interested in this product?
• How rapidly is the market growing?
• How much, in terms of quantity, is the typical customer expected to purchase during the forecast period?
• How are purchase amounts likely to change in the future?
• What is the expected average price of the venture’s product?
  – How price elastic is the product?
• How aggressively and effectively, compared to competitors, will the venture be able to promote its product?
• How are competitors likely to react to the venture?
• Who else is considering entering the market, and how likely are they to do so?

Critical Determinants of Financial Needs: Cash Flow Projections
• You can’t pay the bills with profits
• Things that affect cash flow
  – Capitalized assets
  – Terms of trade (to your customers, from your suppliers)
  – Debt servicing

[Diagram: the cash flow cycle – capital (debt and equity) funds beginning cash, which flows through materials, labor, and fixed assets into product and accounts receivable, and returns as ending cash after equity returns, debt service, and taxes]

Factors Impacting a Firm’s Cash Needs
• High MES markets
  – Need for fixed asset investment
  – High start-up costs
• Tight profit margins
• Expected high rates of growth
• Dependence on depreciable assets
• Must offer attractive terms of trade to attract customers
• Unable to access favorable terms of trade from suppliers

Estimating the Cash Conversion Cycle
Inventory conversion period plus receivables collection period minus payables conversion period.

Inventory Conversion Period: New Venture Considerations
• From raw material to customer-ready
• How long is the product in process?
  – How much variability is there in the production cycle?
• How many days of raw material inventory are needed to keep production going?
• Do you need to keep
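The cash conversion cycle arithmetic above (inventory conversion period plus receivables collection period minus payables conversion period) can be sketched as follows; the day counts are hypothetical, chosen only to illustrate the calculation:

```python
def cash_conversion_cycle(inventory_days, receivables_days, payables_days):
    """Days of operations that must be financed with cash:
    inventory conversion period + receivables collection period
    - payables conversion period."""
    return inventory_days + receivables_days - payables_days

# Hypothetical venture: goods sit in inventory for 60 days, customers
# pay in 45 days, and suppliers allow 30 days to pay.
print(cash_conversion_cycle(60, 45, 30))  # 75 days to finance
```

Shortening the collection period or stretching payables directly shrinks the cash the venture must raise.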
Formal Methods for Communicating – If they don’t exist already, create them. Create
occasions when information should be presented.
1. Meetings – One of the most common ways to communicate. They can vary from
only one person to thousands, depending on the message and audience. It is up to
you to maximize every minute of the time spent and to have a dialogue, not a
monologue. Meetings are the best channel because the verbal and non-verbal cues
enhance the communication and help avoid misinterpretation.
2. Conference Calls – These days this is the most common method, as it does not
require the time and expense of travel. Dialogue can still take place, though it is
dependent on voice intonation and the clarity of the verbal message. Conference
calls only require the cost of a phone call, and there are many paid and free services
that will provide a conference line for many people to dial into. It is also a common
way for classes to be recorded and replayed at a convenient time.
3. Newsletters / Email / Posters – This strategy is one-way communication: emailed
updates, hard-copy brochures, posters, and newsletters mailed or emailed. One of
the weaknesses is that you cannot gauge whether delivered messages were read
and understood, or simply deleted, because there is often no feedback. Immediate
feedback is valuable for strengthening your message and making sure its impact is
quickly understood.
Informal Methods – It is important not to rely only on formal channels but to utilize
informal communication as well. Impromptu channels are often more information-rich
and critical for relationship building.
4. Lunch meetings, or a drink at the bar after work – These casual environments can
be great for connecting, getting feedback and ideas, and building support.
5. Sporting events – Tennis, golf, and the like are an easy forum for learning what
support exists, getting feedback on ideas, and brainstorming to strengthen your
communication and build stakeholder support.
6. Voice mail – This is often underutilized since email is so common, but a voice mail
is still more likely to be listened to than an email is to be read. Using voice intonation
for excitement, urgency, and so on can make it more compelling. This can be a single
voice mail, a voice mail broadcast to a large team, or automated calling to get the
word out, depending on the size of the audience.
It is not enough just to have a plan. It is critical to seek to understand what your
stakeholders desire, both spoken and unspoken. Expectations must be carefully
managed from beginning to end. Every team and project varies in its rate of change,
so pick the most advantageous communication channel and frequency, and make
sure they are effective. Just as having the plan is important, monitoring its
effectiveness and adding or canceling supplemental ways of communicating will be
required.
(CNN Philippines) — Filipinos are more concerned with economic issues compared
to national security and socio-political affairs, according to a recent survey by Pulse
Asia.
In a statement released on Tuesday (March 24), the pollster noted that the leading
urgent concerns among Filipinos are inflation control (46%), the increase of workers'
pay (44%), and the fight against government corruption (40%).
Rounding out the upper half are poverty reduction (37%), job creation (34%), and the
fight against crime (22%).
On the other hand, Filipinos are least concerned with national territorial integrity (5%),
terrorism (5%), and charter change (4%).
The results are not much different when grouped according to the country's three
major island chains. Mindanaoans (52%) are most concerned with the increasing cost
of goods and services. A majority of Visayans (53%) and residents of Luzon (48%),
excluding Metro Manila, cite low workers' pay as their top issue.
Those from Metro Manila (49%) rank government corruption as their most pressing
issue.
In the same survey, Pulse Asia found that the Aquino administration's highest
approval ratings were in calamity response (49%), environmental protection (48%),
and in defending Philippine territorial integrity (43%).
In all issues raised by Pulse Asia among its respondents, the pollster notes that "The
Aquino administration fails to score a majority approval rating..."
The administration's lowest ratings were in economic issues — the very topics that
Filipinos are most concerned about. Only about three out of 10 Filipinos approve its
performance in improving workers' pay (33%), poverty reduction (28%), and inflation
control (29%).
On a similar note, the latest figures from the Philippine Statistics Authority show that
about one in four Filipinos (25.8%) lived in poverty during the first half of 2014.
However, the same agency notes that the country's unemployment fell to 6.6% in
January 2015 from 7.5% the previous year.
According to the results of the Pulse Asia’s Ulat ng Bayan survey conducted from July 2
to 8 which was released on Wednesday, Filipinos want the new Duterte administration
to prioritize three economic issues: control increase in the prices of goods (68 percent),
create jobs (56 percent) and implement pro-poor initiatives (55 percent).
Busting criminality has been cited as the fourth most pressing concern among Filipinos
at 48 percent.
Across socio-economic classes, those belonging to Class ABC and D believe that the
President should address the economic issues immediately while a majority of Class E
respondents said that they want inflation to be controlled.
The survey has 1,200 respondents and has a ± 3 percent error margin at 95 percent
confidence level. Subnational estimates for Metro Manila, the rest of Luzon, Visayas
and Mindanao have a ± 6 percent error margin.
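The quoted margins of error are consistent with the standard formula for a proportion at 95% confidence. A quick sketch (the subsample size of 300 per area is an assumption for illustration, not stated in the article):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error for a sample proportion
    at roughly 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1200) * 100, 1))  # 2.8 -> reported as +/- 3%
print(round(margin_of_error(300) * 100, 1))   # 5.7 -> subnational +/- 6%
```

Smaller subsamples widen the margin, which is why the subnational estimates carry a larger error band than the national figure.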
The Philippine economy slowed to a 5.6 percent growth in the second quarter, falling
below the government’s target.
MANILA, Philippines - Most Filipinos consider two economic-related issues - inflation
and workers' pay - as the most urgent national concerns, according to a Pulse Asia
survey released last week.
The survey showed that 47 percent of Filipinos are most concerned with the country's
inflation while 46 percent are concerned with workers' pay.
Corruption in government (39 percent), employment (36 percent) and poverty (35
percent) are among the second cluster of national issues deemed urgent by Filipinos.
The third group of urgent national issues include peace (21 percent), criminality (20
percent), rule of law (16 percent) and environmental destruction (15 percent).
The survey also showed that Filipinos are least concerned about rapid population
growth (9 percent), national territorial integrity (7 percent), charter change (4 percent)
and terrorism (4 percent).
Workers' pay, employment and inflation are the only issues considered urgent by
majorities across geographic areas and socio-economic classes.
National Concerns (% citing issue as urgent, by location and socio-economic class)

                                             Overall  NCR  Luzon  Visayas  Mindanao  ABC   D    E
Controlling inflation                           47     42    45      47       53      40   46   52
Improving/increasing the pay of workers         46     53    46      42       45      48   46   46
Fighting graft and corruption in government     39     40    38      33       44      43   42   31
Creating more jobs                              36     37    34      51       26      32   35   41
Reducing poverty of many Filipinos              35     31    36      35       36      30   34   38
Increasing peace in the country                 21     15    19      21       29      13   22   21
Fighting criminality                            20     26    18      18       22      29   19   21
Enforcing the law on all, whether
  influential or ordinary people                16     18    16      12       20      19   16   15
Stopping the destruction and abuse
  of our environment                            15     13    16      18       12      12   18    9
Controlling fast population growth               9      9    13       8        4      18   10    5
Defending integrity of Philippine
  territory against foreigners                   7     10     9       4        3       8    6    9
Changing the Constitution                        4      2     4       5        3       1    4    5
Preparing to successfully face any
  kind of terrorism                              4      3     4       6        3       6    3    6

(NCR, Luzon, Visayas and Mindanao are locations; ABC, D and E are socio-economic classes.)
The survey was conducted from May 30 to June 5 using face-to-face interviews
among 1,200 respondents who are 18 years old and above.
The respondents were asked which issues the current administration should
immediately address. They were allowed multiple responses, up to three answers.
GROSS DOMESTIC PRODUCT (GDP)
The total value of goods produced and services provided in a country during one
year.
UNEMPLOYMENT
What is 'Unemployment'
Unemployment occurs when a person who is actively searching for employment is
unable to find work. It is often used as a gauge of the health of the economy, most
commonly via the unemployment rate: the number of unemployed people divided by
the number of people in the labor force.
Frictional Unemployment
Frictional unemployment arises when workers are between jobs, or are entering or
re-entering the workforce, and are searching for a position that fits their skills; some
frictional unemployment exists even in a healthy economy.
Cyclical Unemployment
Cyclical unemployment comes around due to the business cycle itself. Cyclical
unemployment rises during recessionary periods and declines during periods of
economic growth. For example, the number of weekly jobless claims in the United
States has slowed in the month of June, as oil prices begin to rise and the economy
starts to stabilize, adding jobs to the market.
Structural Unemployment
Structural unemployment comes about through a mismatch between the skills that
unemployed workers offer and the skills that employers demand, often driven by
technological change or long-run shifts in the economy; it tends to last longer than
frictional or cyclical unemployment.
INFLATION
What is 'Inflation'
Inflation is the rate at which the general level of prices for goods and services is rising
and, consequently, the purchasing power of currency is falling. Central banks attempt
to limit inflation, and avoid deflation, in order to keep the economy running smoothly.
As a result of inflation, the purchasing power of a unit of currency falls. For example,
if the inflation rate is 2%, then a pack of gum that costs $1 in a given year will cost
$1.02 the next year. As goods and services require more money to purchase, the
implicit value of that money falls.
The Federal Reserve uses core inflation data, which excludes volatile industries such
as food and energy prices. External factors can influence prices on these types of
goods, which does not necessarily reflect the overall rate of inflation. Removing these
industries from inflation data paints a much more accurate picture of the state of
inflation.
The Fed's monetary policy goals include moderate long-term interest rates, price
stability and maximum employment, and each of these goals is intended to promote a
stable financial environment. The Federal Reserve clearly communicates long-term
inflation goals in order to keep a steady long-term rate of inflation, which in turn
maintains price stability. Price stability, or a relatively constant level of inflation, allows
businesses to plan for the future, since they know what to expect. It also allows the
Fed to promote maximum employment, which is determined by nonmonetary factors
that fluctuate over time and are therefore subject to change. For this reason, the Fed
doesn't set a specific goal for maximum employment, and it is largely determined by
members' assessments. Maximum employment does not mean zero unemployment,
as at any given time there is a certain level of turnover as people leave and start new
jobs.
Today, few currencies are fully backed by gold or silver. Since most world currencies
are fiat money, the money supply could increase rapidly for political reasons, resulting
in inflation. The most famous example is the hyperinflation that struck the German
Weimar Republic in the early 1920s. The nations that had been victorious in World
War I demanded reparations from Germany, which could not be paid in German
paper currency, as this was of suspect value due to government borrowing.
Germany attempted to print paper notes, buy foreign currency with them, and use
that to pay their debts.
This policy led to the rapid devaluation of the German mark, and with it,
hyperinflation. German consumers exacerbated the cycle by trying to spend their
money as fast as possible, expecting that it would be worth less and less the longer
they waited. More and more money flooded the economy, and its value plummeted to
the point where people would paper their walls with the practically worthless
bills. Similar situations have occurred in Peru in 1990 and Zimbabwe in 2007-2008.
Central banks have tried to learn from such episodes, using monetary policy tools to
keep inflation in check. Since the 2008 financial crisis, the U.S. Federal Reserve has
kept interest rates near zero and pursued a bond-buying program – now discontinued
– known as quantitative easing. Some critics of the program alleged it would cause a
spike in inflation in the U.S. dollar, but inflation peaked in 2007 and declined steadily
over the next eight years. There are many complex reasons why QE didn't lead to
inflation or hyperinflation, though the simplest explanation is that the recession was a
strong deflationary environment, and quantitative easing ameliorated its effects.
Inflation is one of the primary reasons that people invest in the first place. Just as the
pack of gum that costs a dollar will cost $1.02 in a year, assuming 2% inflation, a
savings account that was worth $1,000 would be worth $903.92 after 5 years, and
$817.07 after 10 years, assuming that you earn no interest on the deposit. Stuffing
cash into a mattress, or buying a tangible asset like gold, may make sense to people
who live in unstable economies or who lack legal recourse. However, for those who
can trust that their money will be reasonably safe if they make
prudent equity or bond investments, this is arguably the way to go.
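The savings-account figures above follow from compounding a 2% loss of purchasing power each year. A minimal sketch, using the same (1 − rate) convention that produces the text's numbers:

```python
def real_value(amount, inflation_rate, years):
    """Purchasing power of idle cash after `years` of constant inflation,
    using the simple (1 - rate)**years convention from the text."""
    return amount * (1 - inflation_rate) ** years

print(round(real_value(1000, 0.02, 5), 2))   # 903.92
print(round(real_value(1000, 0.02, 10), 2))  # 817.07
```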
There is still risk, of course: bond issuers can default, and companies that issue stock
can go under. For this reason it's important to do solid research and create a diverse
portfolio. But in order to keep inflation from steadily gnawing away at your money, it's
important to invest it in assets that can reasonably be expected to yield a return
greater than inflation.
Inflation is defined as a sustained increase in the general level of prices for goods
and services in a country, and is measured as an annual percentage change. Under
conditions of inflation, the prices of things rise over time. Put differently, as inflation
rises, every dollar you own buys a smaller percentage of a good or service. When
prices rise, or equivalently when the value of money falls, you have inflation.
The value of a dollar (or any unit of money) is expressed in terms of its purchasing
power, which is the amount of real, tangible goods or actual services that money can
buy at a moment in time. When inflation goes up, there is a decline in the purchasing
power of money. For example, if the inflation rate is 2% annually, then theoretically a
$1 pack of gum will cost $1.02 in a year. After inflation, your dollar does not go as far
as it did in the past. This is why a pack of gum cost just $0.05 in the 1940s – the price
has risen, or from a different perspective, the value of the dollar has declined. In
recent years, most developed countries have attempted to sustain an inflation rate of
2-3% by using monetary policy tools put to use by central banks. This general form of
monetary policy is known as inflation targeting.
Causes of Inflation
There is no single theory for the cause of inflation that is universally agreed upon by
economists and academics, but there are a few hypotheses that are commonly held.
Costs of Inflation
Inflation affects different people in different ways, with some benefiting from its effects
at the expense of others who lose out. It also depends on whether changes to the
rate of inflation are anticipated or unanticipated.
Here is a brief account of the typical winners and losers from inflation:
Creditors (lenders) lose and debtors (borrowers) gain under inflation. For
example, suppose a bank issues you a 30-year mortgage to buy a house at
a fixed interest rate of 5% per year, costing $1,000 per month. As inflation
rises, the “cost” of that $1,000 per month decreases, which benefits the
homeowner, especially if the rate of inflation exceeds the interest rate on the
loan.
Inflation hurts savers since a dollar saved will be worth less in the future.
Unless the money is saved in an account that pays an interest rate at or
above the rate of inflation, the purchasing power of savings will erode. This
phenomenon is sometimes called "cash-drag."
Workers with fixed salaries or contracts that do not adjust with inflation will
be hurt as the buying power of their incomes stay the same relative to rising
prices.
Similarly, people living off a fixed-income, such as those below the poverty
line, retirees or annuitants, see a decline in their purchasing power and,
consequently, their standard of living.
Landlords benefit, if they have a fixed mortgage (or no mortgage) as they
are able to raise the rent more each year.
Uncertainty about what will happen next makes corporations and consumers
less likely to spend. This hurts economic output in the long run.
The entire economy must absorb repricing costs (menu costs) as price lists,
labels, menus and more have to be updated.
If the domestic inflation rate is greater than that of other countries, domestic
products become less competitive.
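The creditor/debtor point in the list above can be made concrete by deflating a fixed nominal payment; the 3% inflation rate below is an assumption chosen for illustration:

```python
def real_payment(nominal_payment, inflation_rate, years_elapsed):
    """Inflation-adjusted burden of a fixed nominal payment,
    discounted by cumulative inflation."""
    return nominal_payment / (1 + inflation_rate) ** years_elapsed

# A fixed $1,000/month mortgage payment under steady 3% annual inflation:
for year in (0, 10, 20, 30):
    print(year, round(real_payment(1000, 0.03, year), 2))
# By year 30 the payment costs the borrower only about $412 in today's dollars.
```

This is why borrowers with fixed-rate debt gain when inflation exceeds the interest rate on the loan.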
Hyperinflation is unusually rapid inflation, typically more than 50% in a single month.
In extreme cases, this inflation gone awry can lead to the breakdown of a nation's
monetary system or even its economy. One of the most notable examples of
hyperinflation occurred in Germany in 1923, when prices rose 2,500% in one month!
Likewise, in Zimbabwe, hyperinflation led to Z$100 trillion bills being printed that were
worth only a few U.S. dollars. Hyperinflations have also famously occurred in
Hungary and Argentina in the 20th century.
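Compounding makes the 50%-per-month threshold mentioned above dramatic once annualized; a quick check:

```python
def annualized(monthly_rate, months=12):
    """Compound a constant monthly inflation rate over a year."""
    return (1 + monthly_rate) ** months - 1

print(round(annualized(0.50) * 100))  # ~12875% per year
```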
People often complain when prices go up, but they often ignore the fact that wages
should be rising as well. The question shouldn't be whether inflation is rising, but
whether it's rising at a quicker pace than your wages. A modest inflation is a sign that
an economy is growing. In some situations, little inflation can be just as bad as high
inflation. The lack of inflation may be an indication that the economy is weakening. As
you can see, it's not so easy to label inflation as either good or bad – it depends on
the overall economy as well as your personal situation.
FISCAL POLICIES
Fiscal policy is the means by which a government adjusts its spending levels and tax
rates to monitor and influence a nation's economy. It is the sister strategy to monetary
policy through which a central bank influences a nation's money supply. These two
policies are used in various combinations to direct a country's economic goals. Here
we look at how fiscal policy works, how it must be monitored and how its
implementation may affect different people in an economy.
Before the Great Depression, which lasted from Sept. 4, 1929, to the late 1930s or
early 1940s, the government's approach to the economy was laissez-faire. Following
World War II, it was determined that the government had to take a proactive role in
the economy to regulate unemployment, business cycles, inflation and the cost of
money. By using a mix of monetary and fiscal policies (depending on the political
orientations and the philosophies of those in power at a particular time, one policy
may dominate over another), governments can control economic phenomena.
For example, in 2012 many worried that the fiscal cliff, a simultaneous increase in tax
rates and cuts in government spending set to occur in January 2013, would send the
U.S. economy back to recession. The U.S. Congress avoided this problem by
passing the American Taxpayer Relief Act of 2012 on Jan. 1, 2013.
Balancing Act
The idea, however, is to find a balance between changing tax rates and public
spending. For example, stimulating a stagnant economy by increasing spending or
lowering taxes runs the risk of causing inflation to rise. This is because an increase in
the amount of money in the economy, followed by an increase in consumer demand,
can result in a decrease in the value of money - meaning that it would take more
money to buy something that has not changed in value.
Let's say that an economy has slowed down. Unemployment levels are up, consumer
spending is down, and businesses are not making substantial profits. A government
thus decides to fuel the economy's engine by decreasing taxation, which gives
consumers more spending money, while increasing government spending in the form
of buying services from the market (such as building roads or schools). By paying for
such services, the government creates jobs and wages that are in turn pumped into
the economy. Pumping money into the economy by decreasing taxation and
increasing government spending is also known as "pump priming." In the meantime,
overall unemployment levels will fall.
With more money in the economy and fewer taxes to pay, consumer demand for
goods and services increases. This, in turn, rekindles businesses and turns the cycle
around from stagnant to active.
If, however, there are no reins on this process, the increase in economic productivity
can cross over a very fine line and lead to too much money in the market. This
excess in supply decreases the value of money while pushing up prices (because of
the increase in demand for consumer products). Hence, inflation exceeds the
reasonable level.
For this reason, fine tuning the economy through fiscal policy alone can be a difficult,
if not improbable, means to reach economic goals. If not closely monitored, the line
between a productive economy and one that is infected by inflation can be easily
blurred.
When inflation is too strong, the economy may need a slowdown. In such a situation,
a government can use fiscal policy to increase taxes to suck money out of the
economy. Fiscal policy could also dictate a decrease in government spending and
thereby decrease the money in circulation. Of course, the possible negative effects of
such a policy, in the long run, could be a sluggish economy and high unemployment
levels. Nonetheless, the process continues as the government uses its fiscal policy to
fine-tune spending and taxation levels, with the goal of evening out the business
cycles.
Unfortunately, the effects of any fiscal policy are not the same for everyone.
Depending on the political orientations and goals of the policymakers, a tax cut could
affect only the middle class, which is typically the largest economic group. In times of
economic decline and rising taxation, it is this same group that may have to pay more
taxes than the wealthier upper class.
Similarly, when a government decides to adjust its spending, its policy may affect only
a specific group of people. A decision to build a new bridge, for example, will give
work and more income to hundreds of construction workers. A decision to spend
money on building a new space shuttle, on the other hand, benefits only a small,
specialized pool of experts, which would not do much to increase aggregate
employment levels.
That said, the markets also react to fiscal policy. For example, in response to
President Trump's proposed corporate tax reduction plans, the S&P has been trading
higher according to Barclays.
One of the biggest obstacles facing policymakers is deciding how much involvement
the government should have in the economy. Indeed, there have been various
degrees of interference by the government over the years. But for the most part, it is
accepted that a degree of government involvement is necessary to sustain a vibrant
economy, on which the economic well-being of the population depends.
MONETARY POLICIES
Monetary policy consists of the actions of a central bank, currency board or other
regulatory committee that determine the size and rate of growth of the money supply,
which in turn affects interest rates. Monetary policy is maintained through actions
such as modifying the interest rate, buying or selling government bonds, and
changing the amount of money banks are required to keep in the vault (bank
reserves).
Broadly, there are two types of monetary policy, expansionary and contractionary.
Expansionary monetary policy increases the money supply in order to lower
unemployment, boost private-sector borrowing and consumer spending, and
stimulate economic growth.
Contractionary monetary policy slows the rate of growth in the money supply or
outright decreases the money supply in order to control inflation; while sometimes
necessary, contractionary monetary policy can slow economic growth, increase
unemployment and depress borrowing and spending by consumers and businesses.
An example would be the Federal Reserve's intervention in the early 1980s: in order
to curb inflation of nearly 15%, the Fed raised its benchmark interest rate to 20%.
This hike resulted in a recession, but did keep spiraling inflation in check.
In recent years, unconventional monetary policy has become more common. This
category includes quantitative easing, the purchase of varying financial assets from
commercial banks. In the US, the Fed loaded its balance sheet with trillions of dollars
in Treasury notes and mortgage-backed securities between 2008 and 2013. The Bank
of England, the European Central Bank and the Bank of Japan have pursued similar
policies. The effect of quantitative easing is to raise the price of securities, therefore
lowering their yields, as well as to increase total money supply. Credit easing is a
related unconventional monetary policy tool, involving the purchase of private-sector
assets to boost liquidity. Finally, signaling is the use of public communication to ease
markets' worries about policy changes: for example, a promise not to raise interest
rates for a given number of quarters.
Central banks are often, at least in theory, independent from other policy makers.
This is the case with the Federal Reserve and Congress, reflecting the separation of
monetary policy from fiscal policy. The latter refers to taxes and government
borrowing and spending.
Monetary policy is the process by which the monetary authority of a country, like
the central bank or currency board, controls the supply of money, often targeting
an inflation rate or interest rate to ensure price stability and general trust in the
currency.
INTERNATIONAL TRADE
International trade is the exchange of goods and services between countries. This
type of trade gives rise to a world economy, in which prices, or supply and demand,
affect and are affected by global events. Political change in Asia, for example, could
result in an increase in the cost of labor, thereby increasing the manufacturing costs
for an American sneaker company based in Malaysia, which would then result in an
increase in the price that you have to pay to buy the tennis shoes at your local mall. A
decrease in the cost of labor, on the other hand, would result in you having to pay
less for your new shoes.
A product that is sold to the global market is an export, and a product that is bought
from the global market is an import. Imports and exports are accounted for in a
country's current account in the balance of payments.
Let's take a simple example. Country A and Country B both produce cotton sweaters
and wine. Country A produces ten sweaters and six bottles of wine a year while
Country B produces six sweaters and ten bottles of wine a year. Both can produce a
total of 16 units. Country A, however, takes three hours to produce the ten sweaters
and two hours to produce the six bottles of wine (total of five hours). Country B, on
the other hand, takes one hour to produce ten sweaters and three hours to produce
six bottles of wine (total of four hours).
But these two countries realize that they could produce more by focusing on those
products with which they have a comparative advantage. Country A then begins to
produce only wine, and Country B produces only cotton sweaters. Each country can
now create a specialized output of 20 units per year and trade equal proportions of
both products. As such, each country now has access to 20 units of both products.
We can see then that for both countries, the opportunity cost of producing both
products is greater than the cost of specializing. More specifically, for each country,
the opportunity cost of producing 16 units of both sweaters and wine is 20 units of
both products (after trading). Specialization reduces their opportunity cost and
therefore maximizes their efficiency in acquiring the goods they need. With the
greater supply, the price of each product would decrease, thus giving an advantage
to the end consumer as well.
Note that, in the example above, Country B could produce both wine and cotton more
efficiently than Country A (less time). This is called an absolute advantage, and
Country B may have it because of a higher level of technology. However, according
to the international trade theory, even if a country has an absolute advantage over
another, it can still benefit from specialization.
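The sweater-and-wine example can be checked by computing each country's opportunity cost from the hours per unit implied above; Country A gives up fewer sweaters per bottle of wine, so it should specialize in wine, exactly as the text describes:

```python
# Hours needed per unit of output, implied by the example
# (A: 10 sweaters in 3 hrs, 6 bottles in 2 hrs; B: 10 in 1 hr, 6 in 3 hrs).
hours = {
    "A": {"sweater": 3 / 10, "wine": 2 / 6},
    "B": {"sweater": 1 / 10, "wine": 3 / 6},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone per unit of `good` produced."""
    return hours[country][good] / hours[country][other]

print(round(opportunity_cost("A", "wine", "sweater"), 2))  # 1.11 sweaters
print(round(opportunity_cost("B", "wine", "sweater"), 2))  # 5.0 sweaters
```

B's much higher opportunity cost for wine is what gives it the comparative advantage in sweaters, despite its overall (absolute) time advantage.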
International trade not only results in increased efficiency but also allows countries to
participate in a global economy, encouraging the opportunity of foreign direct
investment (FDI), which is the amount of money that individuals invest into foreign
companies and other assets. In theory, economies can, therefore, grow more
efficiently and can more easily become competitive economic participants.
For the receiving government, FDI is a means by which foreign currency and
expertise can enter the country. These raise employment levels, and, theoretically,
lead to a growth in the gross domestic product. For the investor, FDI offers company
expansion and growth, which means higher revenues.
As with other theories, there are opposing views. International trade has two
contrasting views regarding the level of control placed on trade: free
trade and protectionism. Free trade is the simpler of the two theories: a laissez-
faire approach, with no restrictions on trade. The main idea is that supply and
demand factors, operating on a global scale, will ensure that production happens
efficiently. Therefore, nothing needs to be done to protect or promote trade and
growth, because market forces will do so automatically.
As it opens up the opportunity for specialization and therefore more efficient use of
resources, international trade has the potential to maximize a country's capacity to
produce and acquire goods. Opponents of global free trade have argued, however,
that international trade still allows for inefficiencies that leave developing nations
compromised. What is certain is that the global economy is in a state of continual
change, and, as it develops, so too must all of its participants.
In finance, an exchange rate of two currencies is the rate at which one currency will
be exchanged for another. It is also regarded as the value of one country’s currency
in relation to another currency. For example, an interbank exchange rate of
114 Japanese yen to the United States dollar means that ¥114 will be exchanged for
each US$1 or that US$1 will be exchanged for each ¥114. In this case it is said that
the price of a dollar in relation to yen is ¥114, or equivalently that the price of a yen in
relation to dollars is $1/114.
Exchange rates are determined in the foreign exchange market, where currency
trading is continuous around the clock from 22:00 GMT Sunday until 22:00 GMT
Friday. The spot exchange rate refers to the current exchange rate.
The forward exchange rate refers to an exchange rate that is quoted and traded
today but for delivery and payment on a specific future date.
In the retail currency exchange market, different buying and selling rates will be
quoted by money dealers. Most trades are to or from the local currency. The buying
rate is the rate at which money dealers will buy foreign currency, and the selling rate
is the rate at which they will sell that currency. The quoted rates will incorporate an
allowance for a dealer's margin (or profit) in trading, or else the margin may be
recovered in the form of a commission or in some other way. Different rates may also
be quoted for cash, a documentary form or electronically. The higher rate on
documentary transactions has been justified as compensating for the additional time
and cost of clearing the document. On the other hand, cash is available for resale
immediately, but brings security, storage, and transportation costs, and the cost of
tying up capital in a stock of banknotes (bills).
Exchange Rates
As we begin discussing exchange rates, we must make the same distinction that we
made when discussing GDP. Namely, how do nominal exchange rates and real
exchange rates differ?
The nominal exchange rate is the rate at which currency can be exchanged. If the
nominal exchange rate between the dollar and the lira is 1600, then one dollar will
purchase 1600 lira. Exchange rates are always represented in terms of the amount of
foreign currency that can be purchased for one unit of domestic currency. Thus, we
determine the nominal exchange rate by identifying the amount of foreign currency
that can be purchased for one unit of domestic currency.
The real exchange rate is a bit more complicated than the nominal exchange rate.
While the nominal exchange rate tells how much foreign currency can be exchanged
for a unit of domestic currency, the real exchange rate tells how much the goods and
services in the domestic country can be exchanged for the goods and services in a
foreign country. The real exchange rate is represented by the following equation: real
exchange rate = (nominal exchange rate X domestic price) / (foreign price).
Let's say that we want to determine the real exchange rate for wine between the US
and Italy. We know that the nominal exchange rate between these countries is 1600
lira per dollar. We also know that the price of wine in Italy is 3000 lira and the price of
wine in the US is $6. Remember that we are attempting to compare equivalent types
of wine in this example. In this case, we begin with the equation for the real exchange
rate of real exchange rate = (nominal exchange rate X domestic price) / (foreign
price). Substituting in the numbers from above gives real exchange rate = (1600 X
$6) / 3000 lira = 3.2 bottles of Italian wine per bottle of American wine.
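The arithmetic above can be sketched in a few lines (a minimal illustration; the function name is ours, and the values are taken from the example):

```python
# Real exchange rate = (nominal exchange rate x domestic price) / foreign price.
# Values from the example above: 1600 lira per dollar, $6 American wine,
# 3000-lira Italian wine.
def real_exchange_rate(nominal_rate, domestic_price, foreign_price):
    return (nominal_rate * domestic_price) / foreign_price

rate = real_exchange_rate(nominal_rate=1600, domestic_price=6, foreign_price=3000)
print(rate)  # 3.2 bottles of Italian wine per bottle of American wine
```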
By using both the nominal exchange rate and the real exchange rate, we can deduce
important information about the relative cost of living in two countries. While a high
nominal exchange rate may create the false impression that a unit of domestic
currency will be able to purchase many foreign goods, in reality, only a high real
exchange rate justifies this assumption.
An important relationship exists between net exports and the real exchange rate
within a country. When the real exchange rate is high, the relative price of goods at
home is higher than the relative price of goods abroad. In this case, imports are likely
to rise because foreign goods are cheaper, in real terms, than domestic goods. Thus, when
the real exchange rate is high, net exports decrease as imports rise. Alternatively,
when the real exchange rate is low, net exports increase as exports rise. This
relationship helps to show the effects of changes in the real exchange rate.
CONCEPT
What is 'Supply'
Supply and demand trends form the basis of the modern economy. Each specific
good or service will have its own supply and demand patterns based on
price, utility and personal preference. If people demand a good and are willing to pay
more for it, producers will add to the supply. As the supply increases, the price will fall
given the same level of demand. Ideally, markets will reach a point
of equilibrium where the supply equals the demand (no excess supply and no
shortages) for a given price point; at this point, consumer utility and producer profits
are maximized.
‘Supply’ Basics
Supply is influenced most directly by a good’s price. Generally, if a good’s price
increases so will the supply. The price of related goods and the price of inputs
(energy, raw materials, labor) also affect supply, as they contribute to the overall
cost of the good sold.
The conditions of production of the item supplied are also significant; for example,
when a technological advancement increases the quality of a good being supplied, or
if there is a disruptive innovation, such as when a technological advancement renders
a good obsolete or less in demand. Government regulations can also affect supply,
such as environmental laws, as well as the number of suppliers (which increases
competition) and market expectations. An example of this is when environmental laws
regarding the extraction of oil affect the supply of such oil.
History of ‘Supply’
Supply in economics and finance is often, if not always, associated with demand.
The Law of Supply and Demand is a fundamental and foundational principle of
economics. The law of supply and demand is a theory that describes how supply of a
good and the demand for it interact. Generally, if supply is high and demand low, the
corresponding price will also be low. If supply is low and demand is high, the price will
also be high. This theory assumes market competition in a capitalist system. Supply
and demand in modern economics has been historically attributed to John Locke in
an early iteration, as well as definitively used by Adam Smith’s well-known “An
Inquiry into the Nature and Causes of the Wealth of Nations,” published in Britain in
1776.
The graphical representation of supply curve data was first used in the 1870s in
English economics texts, and then popularized in the seminal textbook “Principles of
Economics” by Alfred Marshall in 1890. It has long been debated why Britain was the
first country to embrace, utilize and publish on theories of supply and demand, and
economics in general. The advent of the industrial revolution and the ensuing British
economic powerhouse, which included heavy production, technological innovation
and an enormous amount of labor, has been a well-discussed cause.
The total amount of a product (good or service) available for purchase at any
specified price.
Supply is determined by: (1) Price: producers will try to obtain the highest possible
price whereas the buyers will try to pay the lowest possible price both settling at the
equilibrium price where supply equals demand. (2) Cost of inputs: the lower the input
price the higher the profit at a price level and more product will be offered at that
price. (3) Price of other goods: lower prices of competing goods will reduce the price,
and the supplier may switch to more profitable products, thus reducing the
supply.
In the goods market, supply is the amount of a product per unit of time that producers
are willing to sell at various given prices when all other factors are held constant. In
the labor market, the supply of labor is the amount of time per week, month, or year
that individuals are willing to spend working, as a function of the wage rate. In
the financial markets, the money supply is the amount of highly liquid assets available
in the money market, which is either determined or influenced by a
country's monetary authority.
A schedule showing the amounts of a good or service that sellers (or a seller) will
offer at various prices during some period.
In economics, supply refers to the quantity of a product available in the market for
sale at a specified price at a given point of time.
Unlike demand, supply refers to the willingness of a seller to sell the specified amount
of a product within a particular price and time.
Supply is always defined in relation to price and time. For example, if a seller agrees
to sell 500 kgs of wheat, it cannot be considered as supply of wheat as the price and
time factors are missing.
Similarly, if a seller is ready to sell 500 kgs at a price of Rs. 30 per kg then again it
would not be considered as supply as the time element is missing. Therefore, the
statement “a seller is willing to sell 500 kgs at the price of Rs. 30 per kg in a week” is
ideal to understand the concept of supply as it relates supply with price and time.
Apart from this, the supply also depends on the stock and market price of the product.
Stock of a product refers to quantity of a product available in the market for sale
within a specified point of time.
Both stock and market price of a product affect its supply to a greater extent. If the
market price is more than the cost price, the seller would increase the supply of a
product in the market. However, the decrease in market price as compared to cost
price would reduce the supply of product in the market.
For example Mr. X has 100 kgs of a product. He expects the minimum price to be Rs.
90 per kg and the market price is Rs. 95 per kg. Therefore he would release certain
amount of the product, say around 50 kgs in the market, but would not release the
whole amount. The reason being he would wait for better rates for his product. In
such a case, the supply of his product would be 50 kgs at Rs. 95 per kg.
Determinants of Supply:
Some of the factors that influence the supply of a product are described as follows:
i. Price:
Refers to the main factor that influences the supply of a product to a greater extent.
Unlike demand, there is a direct relationship between the price of a product and its
supply. If the price of a product increases, then the supply of the product also
increases and vice versa. Change in supply with respect to the change in price is
termed as the variation in supply of a product.
Speculation about future price can also affect the supply of a product. If the price of a
product is about to rise in future, the supply of the product would decrease in the
present market because of the profit expected by a seller in future. However, the fall
in the price of a product in future would increase the supply of product in the present
market.
ii. Cost of Production:
Implies that the supply of a product would decrease with an increase in the cost of
production and vice versa. The supply of a product and cost of production are
inversely related to each other. For example, a seller would supply less quantity of a
product in the market, when the cost of production exceeds the market price of the
product.
In such a case the seller would wait for the rise in price in future. The cost of
production rises due to several factors, such as loss of fertility of land, high wage
rates of labor, and increase in the prices of raw material, transport cost, and tax rate.
iii. Natural Conditions:
Implies that climatic conditions directly affect the supply of certain products. For
example, the supply of agricultural products increases when monsoon comes on
time. However, the supply of these products decreases at the time of drought. Some
of the crops are climate specific and their growth purely depends on climatic
conditions. For example, Kharif crops grow well in the summer season, while
Rabi crops grow well in the winter season.
iv. Technology:
Refers to the fact that improvements in technology reduce the cost of production and
thereby increase the supply of a product.
v. Transport Conditions:
Refer to the fact that better transport facilities increase the supply of products.
Transport is always a constraint to the supply of products, as the products are not
available on time due to poor transport facilities. Therefore even if the price of a
product increases, the supply would not increase.
In India sellers usually use road transport, and poorly maintained roads make it
difficult to reach the destination on time. The products that are manufactured in one
part of the city need to be spread across the whole country through road transport.
This may result in the damage of most of the products during the journey, which can
cause heavy loss for a seller. In addition, the seller can also lose his/her customers
because of the delay in the delivery of products.
vi. Factors of Production:
Act as one of the major determinants of supply. The inputs, such as raw material,
labor, equipment, and machines, required at the time of production are termed as
factors. If
the factors are available in sufficient quantity and at lower price, then there would be
increase in production.
This would increase the supply of a product in the market. For example, availability of
cheap labor and raw material nearby the manufacturing plant of an organization
would help in reducing the labor and transportation costs. Consequently, the
production and supply of the product would increase.
vii. Government’s Policies:
Implies that the different policies of the government, such as fiscal policy and
industrial policy, have a great impact on the supply of a product. For example, an
increase in excise duties would decrease the supply of a product. On the other hand, if the tax
rate is low, then the supply of a product would increase.
viii. Prices of Related Goods:
Refers to the fact that the prices of substitutes and complementary goods also affect
the supply of a product. For example, if the price of wheat increases, then farmers
would tend to grow more wheat than rice. This would decrease the supply of rice in the
market.
An increase in supply occurs when more is supplied at each price, this could
occur for the following reasons:
1. A decrease in costs of production. This means business can supply more at each
price. Lower costs could be due to lower wages or lower raw material costs.
2. More firms. An increase in the number of producers will cause an increase in
supply.
3. Investment in capacity. Expansion in capacity of existing firms, e.g. building a
new factory
4. Related supply. An increase in supply of a related good e.g. beef and leather
5. Weather. Climatic conditions are very important for agricultural products
6. Technological improvements. Improvements in technology, e.g. computers,
reducing firms’ costs.
7. Lower taxes. Lower indirect taxes (e.g. tobacco tax, VAT) reduce the cost of goods.
8. Government subsidies. Increase in government subsidies will also reduce the
cost of goods, e.g. train subsidies reduce the price of train tickets.
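Points 1 and 6 above can be sketched with a simple linear supply rule (our own toy model, not from the text): when unit costs fall, more is supplied at every price, shifting the whole supply schedule.

```python
# Toy linear supply rule: quantity supplied rises with price and falls with
# unit cost. A cost decrease means more is supplied at each price level.
def quantity_supplied(price, unit_cost, slope=5):
    return max(0, slope * (price - unit_cost))

for price in (4, 6, 8):
    print(price, quantity_supplied(price, unit_cost=2), quantity_supplied(price, unit_cost=1))
# At every price, the lower-cost column is larger: supply has increased.
```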
2. Demand
CONCEPT
A schedule showing the amounts of a good or service that buyers (or a buyer) wish to
purchase at various prices during some time period.
What is 'Demand'
Think of demand as your willingness to go out and buy a certain product. For
example, market demand is the total of what everybody in the market wants.
Demand is closely related to supply. While consumers try to pay the lowest prices
they can for goods and services, suppliers try to maximize profits. If suppliers charge
too much, demand drops and suppliers do not sell enough product to earn sufficient
profits. If suppliers charge too little, demand increases but lower prices may not cover
suppliers’ costs or allow for profits. Some factors affecting demand include the appeal
of a good or service, the availability of competing goods, the availability of financing
and the perceived availability of a good or service.
Every consumer faces a different set of circumstances. The factors she faces vary in
type and degree. The extent to which these factors affect market demand overall is
different from the way they affect the demand of a particular individual. Aggregate
demand refers to the overall or average demand of many market participants.
Individual demand refers to the demand of a particular consumer. For example, a
particular consumer’s demand for a product is strongly influenced by her personal
income. However, her personal income does not significantly affect aggregate
demand in a large economy.
Supply and Demand Curves
Supply and demand factors are unique for a given product or service. These factors
are often summed up in demand and supply profiles plotted as slopes on a graph. On
such a graph, the vertical axis denotes the price, while the horizontal axis denotes
the quantity demanded or supplied. A demand profile slopes downward, from left to
right. As prices increase, consumers demand less of a good or service. A supply
curve slopes upward. As prices increase, suppliers provide more of a good or service.
Market Equilibrium
The point where supply and demand curves intersect represents the market clearing
or market equilibrium price. An increase in demand shifts the demand curve to the
right. The curves intersect at a higher price and consumers pay more for the product.
Equilibrium prices typically remain in a state of flux for most goods and services
because factors affecting supply and demand are always changing. Free, competitive
markets tend to push prices toward market equilibrium.
Demand in economics is how many goods and services are bought at various prices
during a certain period of time. Demand is the consumer's need or desire to own the
product or experience the service. It's constrained by the willingness and ability of the
consumer to pay for the good or service at the price offered.
Demand is the underlying force that drives everything in the economy. Fortunately for
economics, people are never satisfied.
Price
Usually viewed as the most important factor that affects demand. Products have
different sensitivities to changes in price. For example, demand for necessities such
as bread, eggs and butter does not tend to change significantly when prices move up
or down.
Income levels
When an individual’s income goes up, their ability to purchase goods and services
increases, and this causes demand to increase. When incomes fall there will be a
decrease in the demand for most goods.
Consumer tastes and preferences
Changing tastes and preferences can have a significant effect on demand for
different products. Persuasive advertising is designed to cause a change in tastes
and preferences and thereby create an increase in demand. A good example of this
is the recent surge in sales of smoothies!
Competition
Competitors are always looking to take a bigger share of the market, perhaps by
cutting their prices or by introducing a new or better version of a product
Fashions
Fashion trends come and go, and demand for a product rises and falls with them.
Even though the focus in economics is on the relationship between the price of a
product and how much consumers are willing and able to buy, it is important to
examine all of the factors that affect the demand for a good or service.
These factors include:
The Price of the Good or Service
There is an inverse (negative) relationship between the price of a product and the
amount of that product consumers are willing and able to buy. Consumers want to
buy more of a product at a low price and less of a product at a high price. This
inverse relationship between price and the amount consumers are willing and able to
buy is often referred to as The Law of Demand.
The Income of Consumers
The effect that income has on the amount of a product that consumers are willing and
able to buy depends on the type of good we're talking about. For most goods, there is
a positive (direct) relationship between a consumer's income and the amount of the
good that one is willing and able to buy. In other words, for these goods when income
rises the demand for the product will increase; when income falls, the demand for the
product will decrease. We call these types of goods normal goods.
However, for some goods the effect of a change in income is the reverse. For
example, think about a low-quality (high fat-content) ground beef. You might buy this
while you are a student, because it is inexpensive relative to other types of meat. But
if your income increases enough, you might decide to stop buying this type of meat
and instead buy leaner cuts of ground beef, or even give up ground beef entirely in
favor of beef tenderloin. If this were the case (that as your income went up, you were
willing to buy less high-fat ground beef), there would be an inverse relationship
between your income and your demand for this type of meat. We call this type of
good an inferior good. There are two important things to keep in mind about inferior
goods. They are not necessarily low-quality goods. The term inferior (as we use it in
economics) just means that there is an inverse relationship between one's income
and the demand for that good. Also, whether a good is normal or inferior may be
different from person to person. A product may be a normal good for you, but an
inferior good for another person.
The Prices of Related Goods
As with income, the effect that this has on the amount that one is willing and able to
buy depends on the type of good we're talking about. Think about two goods that are
typically consumed together. For example, bagels and cream cheese. We call these
types of goods complements. If the price of a bagel goes up, the Law of Demand tells
us that we will be willing/able to buy fewer bagels. But if we want fewer bagels, we
will also want to use less cream cheese (since we typically use them together).
Therefore, an increase in the price of bagels means we want to purchase less cream
cheese. We can summarize this by saying that when two goods are complements,
there is an inverse relationship between the price of one good and the demand for the
other good.
On the other hand, some goods are considered to be substitutes for one another: you
don't consume both of them together, but instead choose to consume one or the
other. For example, for some people Coke and Pepsi are substitutes (as with inferior
goods, what is a substitute good for one person may not be a substitute for another
person). If the price of Coke increases, this may make Pepsi relatively more
attractive. The Law of Demand tells us that fewer people will buy Coke; some of
these people may decide to switch to Pepsi instead, therefore increasing the amount
of Pepsi that people are willing and able to buy. We summarize this by saying that
when two goods are substitutes, there is a positive relationship between the price of
one good and the demand for the other good.
The Tastes and Preferences of Consumers
This is a less tangible item that still can have a big impact on demand. There are all
kinds of things that can change one's tastes or preferences that cause people to want
to buy more or less of a product. For example, if a celebrity endorses a new product,
this may increase the demand for a product. On the other hand, if a new health study
comes out saying something is bad for your health, this may decrease the demand
for the product. Another example is that a person may have a higher demand for an
umbrella on a rainy day than on a sunny day.
The Consumer's Expectations
It doesn't just matter what is currently going on - one's expectations for the future can
also affect how much of a product one is willing and able to buy. For example, if you
hear that Apple will soon introduce a new iPod that has more memory and longer
battery life, you (and other consumers) may decide to wait to buy an iPod until the
new product comes out. When people decide to wait, they are decreasing the current
demand for iPods because of what they expect to happen in the future. Similarly, if
you expect the price of gasoline to go up tomorrow, you may fill up your car with gas
now. So your demand for gas today increased because of what you expect to happen
tomorrow. This is similar to what happened after Hurricane Katrina hit in the fall of
2005. Rumors started that gas stations would run out of gas. As a result, many
consumers decided to fill up their cars (and gas cans), leading to long lines and a big
increase in the demand for gas. This was all based on the expectation of what would
happen.
The Number of Consumers in the Market
As more or fewer consumers enter the market, this has a direct effect on the amount
of a product that consumers (in general) are willing and able to buy. For example,
a pizza shop located near a University will have more demand and thus higher sales
during the fall and spring semesters. In the summers, when fewer students are taking
classes, the demand for their product will decrease because the number of
consumers in the area has significantly decreased.
3. Market Equilibrium
MARKET EQUILIBRIUM
When the supply and demand curves intersect, the market is in equilibrium. This is where the quantity
demanded and quantity supplied are equal. The corresponding price is the equilibrium price or market-
clearing price, the quantity is the equilibrium quantity.
Putting the supply and demand curves from the previous sections together, the two
curves intersect at Price = $6 and Quantity = 20. In this market, the equilibrium price
is $6 per unit and the equilibrium quantity is 20 units. At this price level the market
is in equilibrium: quantity supplied equals quantity demanded (Qs = Qd), and the
market is clear.
If the market price is above the equilibrium price, quantity supplied is greater than quantity demanded,
creating a surplus. Market price will fall.
Example: if you are the producer, you have a lot of excess inventory that cannot sell. Will you put them on
sale? It is most likely yes. Once you lower the price of your product, your product’s quantity demanded will
rise until equilibrium is reached. Therefore, surplus drives price down.
If the market price is below the equilibrium price, quantity supplied is less than quantity demanded, creating
a shortage. The market is not clear. It is in shortage. Market price will rise because of this shortage.
Example: if you are the producer, your product is always out of stock. Will you raise the price to make more
profit? Most for-profit firms will say yes. Once you raise the price of your product, your product’s quantity
demanded will drop until equilibrium is reached. Therefore, shortage drives price up.
If a surplus exists, price must fall in order to entice additional quantity demanded and reduce quantity
supplied until the surplus is eliminated. If a shortage exists, price must rise in order to entice additional
supply and reduce quantity demanded until the shortage is eliminated.
If the market price (P) is higher than $6 (where Qd = Qs), for example P = 8, Qs = 30,
and Qd = 10, then since Qs > Qd there is excess quantity supplied in the market. The
market is not clear; it is in surplus.
If the market price is lower than the equilibrium price of $6, for example P = 4,
Qs = 10, and Qd = 30, then since Qs < Qd there is excess quantity demanded in the
market. The market is not clear; it is in shortage.
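The figures above are consistent with the linear schedules Qd = 50 - 5P and Qs = 5P - 10 (our inference; the text gives only the point values), which makes the surplus/shortage logic easy to verify:

```python
# Linear demand and supply consistent with the numbers above
# (Qd = 50 - 5P and Qs = 5P - 10 are inferred, not stated in the text).
def qd(p): return 50 - 5 * p
def qs(p): return 5 * p - 10

for p in (4, 6, 8):
    gap = qs(p) - qd(p)
    state = "equilibrium" if gap == 0 else ("surplus" if gap > 0 else "shortage")
    print(f"P={p}: Qs={qs(p)}, Qd={qd(p)} -> {state}")
# P=6 clears the market (Qs = Qd = 20); P=8 shows a surplus; P=4 a shortage.
```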
Government regulations will create surpluses and shortages in the market. When a price ceiling is set,
there will be a shortage. When there is a price floor, there will be a surplus.
Price Floor: is a legally imposed minimum price on the market. Transactions below this price are prohibited.
•Policy makers set the floor price above the market equilibrium price, which they believe is too low.
•Price floors are most often placed on markets for goods that are an important source of income for the
sellers, such as labor market.
•Price floors generate surpluses on the market.
•Example: minimum wage.
Price Ceiling: is a legally imposed maximum price on the market. Transactions above this price are prohibited.
•Policy makers set the ceiling price below the market equilibrium price, which they believe is too high.
•The intention of a price ceiling is to keep goods affordable for low-income consumers.
•Price ceilings generate shortages on the market.
•Example: Rent control.
Equilibrium price and quantity are determined by the intersection of supply and demand. A change in
supply, or demand, or both, will necessarily change the equilibrium price, quantity or both. It is highly
unlikely that changes in supply and demand perfectly offset one another so that the equilibrium remains the
same.
Example: This example is based on the assumption of Ceteris Paribus.
1) If there is an exporter who is willing to export oranges from Florida to Asia, he will increase the
demand for Florida’s oranges. An increase in demand will create a shortage, which increases the
equilibrium price and equilibrium quantity.
2) If there is an importer who is willing to import oranges from Mexico to Florida, he will increase
the supply of Florida’s oranges. An increase in supply will create a surplus, which lowers the equilibrium
price and increases the equilibrium quantity.
3) What will happen if the exporter and importer enter Florida’s orange market at the same
time? From the above analysis, we can tell that equilibrium quantity will be higher. But the importer’s and
exporter’s impacts on price are opposite. Therefore, the change in equilibrium price cannot be determined
unless more details are provided. Detailed information should include the exact quantities the exporter and
importer are engaged in. By comparing the quantities between importer and exporter, we can determine who
has more impact on the market.
In the following table, an example of demand and supply increase is illustrated.
4. Price Elasticity of Demand
Price elasticity of demand (PED) measures the responsiveness of quantity demanded
to a change in price. For example, if the quantity demanded for a good increases 15% in response to a
10% decrease in price, the price elasticity of demand would be 15% / 10% = 1.5. The
degree to which the quantity demanded for a good changes in response to a change
in price can be influenced by a number of factors. Factors include the number of
close substitutes (demand is more elastic if there are close substitutes) and whether
the good is a necessity or luxury (necessities tend to have inelastic demand while
luxuries are more elastic).
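The calculation in the example is simply the ratio of the two percentage changes (a minimal sketch; dropping the negative sign follows the text's note that analysts tend to ignore it):

```python
# PED = % change in quantity demanded / % change in price.
# The example above: quantity rises 15% when price falls 10%.
def price_elasticity(pct_change_qty, pct_change_price):
    return pct_change_qty / pct_change_price

ped = abs(price_elasticity(15, -10))  # analysts typically drop the sign
print(ped)  # 1.5 -> elastic, since |PED| > 1
```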
Businesses evaluate price elasticity of demand for various products to help predict
the impact of a price change on product sales. Typically, businesses charge higher prices if
demand for the product is price inelastic.
Price elasticities are almost always negative, although analysts tend to ignore the
sign even though this can lead to ambiguity. Only goods which do not conform to
the law of demand, such as Veblen and Giffen goods, have a positive PED. In
general, the demand for a good is said to be inelastic (or relatively inelastic) when the
PED is less than one (in absolute value): that is, changes in price have a relatively
small effect on the quantity of the good demanded. The demand for a good is said to
be elastic (or relatively elastic) when its PED is greater than one (in absolute value):
that is, changes in price have a relatively large effect on the quantity of a good
demanded. Demand for a good is unit elastic when its PED is exactly one (in absolute value).
Revenue is maximized when price is set so that the PED is exactly one. The
PED of a good can also be used to predict the incidence (or "burden") of a tax on
that good. Various research methods are used to determine price elasticity,
including test markets, analysis of historical sales data and conjoint analysis.
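The revenue-maximization claim can be checked numerically. With an assumed linear demand curve Q = 50 - 5P (our example, not the text's), revenue P x Q peaks exactly where the point elasticity equals one:

```python
# Assumed linear demand: Q = 50 - 5P. Revenue P*Q is maximized where |PED| = 1.
def q(p):
    return 50 - 5 * p

def point_ped(p):
    return abs(-5 * p / q(p))  # |dQ/dP * P/Q| for this demand curve

# Scan prices 0.1 .. 9.9 for the revenue-maximizing price.
best_p = max((i / 10 for i in range(1, 100)), key=lambda p: p * q(p))
print(best_p, point_ped(best_p))  # 5.0 1.0 -- unit elasticity at the revenue peak
```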
5. Market Structure
A good example of vertical integration is the oil industry, where the major oil
companies own the rights to extract from oilfields, they run a fleet of tankers,
operate refineries and have control of sales at their own filling stations.
5. The extent of product differentiation (which affects cross-price elasticity of
demand)
6. The structure of buyers in the industry (including the possibility of
monopsony power)
7. The turnover of customers (sometimes known as "market churn") – i.e. how
many customers are prepared to switch their supplier over a given time
period when market conditions change. The rate of customer churn is
affected by the degree of consumer or brand loyalty and the influence of
persuasive advertising and marketing
PRODUCTION FUNCTION
Stages of production
By definition, in the long run the firm can change its scale of operations by adjusting
the level of inputs that are fixed in the short run, thereby shifting the production
function upward as plotted against the variable input. If fixed inputs are lumpy,
adjustments to the scale of operations may be more significant than what is required
to merely balance production capacity with demand. For example, a firm may need to increase production by only one million units per year to keep up with demand, but the available production-equipment upgrades may increase productive capacity by two million units per year.
If a firm is operating at a profit-maximizing level in stage 1, it might, in the long run,
choose to reduce its scale of operations (by selling capital equipment). By reducing
the amount of fixed capital inputs, the production function will shift down. The
beginning of stage 2 shifts from B1 to B2. The (unchanged) profit-maximizing output
level will now be in stage 2.
There are two special classes of production functions that are often analyzed. The production function f(K, L) is said to be homogeneous of degree n if, given any positive constant t, f(tK, tL) = t^n · f(K, L). If n > 1, the function exhibits increasing returns to scale, and it exhibits decreasing returns to scale if n < 1. If it is homogeneous of degree 1, it exhibits constant returns to scale. The presence of increasing returns means that a one percent increase in the usage levels of all inputs would result in a greater than one percent increase in output; the presence of decreasing returns means that it would result in a less than one percent increase in output. Constant returns to scale is the in-between case. In the Cobb-Douglas production function Q = A · K^b · L^c referred to above, returns to scale are increasing if b + c > 1, decreasing if b + c < 1, and constant if b + c = 1.
If a production function is homogeneous of degree one, it is sometimes called
"linearly homogeneous". A linearly homogeneous production function with inputs
capital and labour has the properties that the marginal and average physical products
of both capital and labour can be expressed as functions of the capital-labour ratio
alone. Moreover, in this case if each input is paid at a rate equal to its marginal
product, the firm's revenues will be exactly exhausted and there will be no excess
economic profit.[3]:pp.412–414
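As a numerical sketch of these two properties, the snippet below checks homogeneity of degree one and product exhaustion for a Cobb-Douglas function; the parameter and input values are illustrative:

```python
# Sketch: verifying returns to scale of a Cobb-Douglas production function
# f(K, L) = A * K**b * L**c numerically. With b + c = 1 the function is
# linearly homogeneous, and paying each input its marginal product exactly
# exhausts output (Euler's theorem). Parameter values are hypothetical.

A, b, c = 2.0, 0.3, 0.7   # b + c = 1 -> constant returns to scale

def f(K, L):
    return A * K**b * L**c

K, L, t = 4.0, 9.0, 3.0

# Homogeneity of degree one: f(tK, tL) == t * f(K, L)
assert abs(f(t * K, t * L) - t * f(K, L)) < 1e-9

# Marginal products (analytic derivatives of the Cobb-Douglas form)
MPK = b * f(K, L) / K
MPL = c * f(K, L) / L

# Factor payments exhaust output: MPK*K + MPL*L == f(K, L)
assert abs(MPK * K + MPL * L - f(K, L)) < 1e-9
print("constant returns and product exhaustion verified")
```

Changing b and c so that b + c > 1 (or < 1) makes the first assertion fail, since scaling both inputs by t then scales output by more (or less) than t.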
Homothetic functions are functions whose marginal technical rate of substitution (the
slope of the isoquant, a curve drawn through the set of points in say labour-capital
space at which the same quantity of output is produced for varying combinations of
the inputs) is homogeneous of degree zero. Due to this, along rays coming from the
origin, the slopes of the isoquants will be the same. Homothetic functions are of the form F(h(X1, X2)), where F is a monotonically increasing function (its derivative is positive) and the function h is a homogeneous function of any degree.
There are two major criticisms of the standard form of the production function.[4]
During the 1950s, '60s, and '70s there was a lively debate about the theoretical
soundness of production functions (see the Capital controversy). Although the
criticism was directed primarily at aggregate production functions, microeconomic
production functions were also put under scrutiny. The debate began in 1953
when Joan Robinson criticized the way the factor input capital was measured and argued that the notion of factor proportions had distracted economists.
In response to the criticism of their weak theoretical grounds, it has been claimed that empirical results firmly support the use of neoclassical well-behaved aggregate production functions. Nevertheless, Anwar Shaikh has demonstrated that they also have no empirical relevance, as the alleged good fit follows from an accounting identity, not from any underlying laws of production or distribution.[6]
Natural resources
See also: Nicholas Georgescu-Roegen § Criticising neoclassical economics (weak
versus strong sustainability)
generated by the change of the production function. This is the principle by which the production function is made a practical concept, i.e., measurable and understandable in practical situations.
About production
Economic well-being is created in a production process, meaning all economic
activities that aim directly or indirectly to satisfy human needs. The degree to which
the needs are satisfied is often accepted as a measure of economic well-being. In
production there are two features which explain increasing economic well-being. They
are improving quality-price-ratio of commodities and increasing incomes from growing
and more efficient market production. The most important forms of production are
market production
public production
household production
In order to understand the origin of economic well-being we must understand these three production processes. All of them produce commodities which have value and contribute to the well-being of individuals.
The satisfaction of needs originates from the use of the commodities which are
produced. The need satisfaction increases when the quality-price-ratio of the
commodities improves and more satisfaction is achieved at less cost. Improving the quality-price ratio of commodities is, for a producer, an essential way to improve the competitiveness of products, but gains of this kind, which are distributed to customers, cannot be measured with production data. For the producer, improving the competitiveness of products often means lower product prices and therefore losses in income, which must be compensated for by growth in sales volume.
Economic well-being also increases due to the growth of incomes that are gained
from the growing and more efficient market production. Market production is the only
one production form which creates and distributes incomes to stakeholders. Public
production and household production are financed by the incomes generated in
market production. Thus market production has a double role in creating well-being,
i.e. the role of producing and developing commodities and the role of creating income. Because of this double role, market production is the “primus motor” of economic well-being and is therefore examined here.
Main processes of a producing company
A producing company can be divided into sub-processes in different ways; yet, the
following five are identified as main processes, each with a logic, objectives, theory
and key figures of its own. It is important to examine each of them individually, yet, as
a part of the whole, in order to be able to measure and understand them. The main
processes of a company are as follows:
real process
income distribution process
production process
monetary process
market value process
The production process consists of the real process and the income distribution process. Profitability is both a result of the production process and the owner's criterion of its success. The profitability of production is the share of the real process result that the owner has been able to keep in the income distribution process. The factors describing the production process are the components of profitability, i.e., returns and costs. They differ from the factors of the real process in that the components of profitability are given at nominal prices, whereas in the real process the factors are at periodically fixed prices.
Monetary process refers to events related to financing the business. Market value
process refers to a series of events in which investors determine the market value of
the company in the investment markets.
Production growth and performance
Production growth is often defined as an increase in the output of a production process. It is usually expressed as a growth percentage depicting the growth
of the real production output. The real output is the real value of products produced in
a production process and when we subtract the real input from the real output we get
the real income. The real output and the real income are generated by the real
process of production from the real inputs.
The real process can be described by means of the production function. The
production function is a graphical or mathematical expression showing the
relationship between the inputs used in production and the output achieved. Both
graphical and mathematical expressions are presented and demonstrated. The
production function is a simple description of the mechanism of income generation in
production process. It consists of two components. These components are a change
in production input and a change in productivity.[9][10]
The growth of production output does not reveal anything about the performance of
the production process. The performance of production measures production’s ability
to generate income. Because the income from production is generated in the real
process, we call it the real income. Similarly, as the production function is an
expression of the real process, we could also call it “income generated by the
production function”.
The real income generation follows the logic of the production function. Two
components can also be distinguished in the income change: the income growth
caused by an increase in production input (production volume) and the income
growth caused by an increase in productivity. The income growth caused by
increased production volume is determined by moving along the production function
graph. The income growth corresponding to a shift of the production function is
generated by the increase in productivity. The change of real income thus signifies a move from point 1 to point 2 on the production function. When we
want to maximize the production performance we have to maximize the income
generated by the production function.
The sources of productivity growth and production volume growth are explained as
follows. Productivity growth is seen as the key economic indicator of innovation. The
successful introduction of new products and new or altered processes, organization
structures, systems, and business models generates growth of output that exceeds
the growth of inputs. This results in growth in productivity or output per unit of input.
Income growth can also take place without innovation through replication of
established technologies. With only replication and without innovation, output will
increase in proportion to inputs. (Jorgenson et al. 2014,2) This is the case of income
growth through production volume growth.
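The two-component income change described above can be sketched numerically by writing output as productivity times input; the figures below are hypothetical:

```python
# Sketch: splitting real income (output) growth into the part caused by a
# change in production input (volume) and the part caused by a change in
# productivity, given output = productivity * input. Figures are hypothetical.

def decompose(out0, in0, out1, in1):
    prod0, prod1 = out0 / in0, out1 / in1
    volume_effect = (in1 - in0) * prod0           # moving along the function
    productivity_effect = (prod1 - prod0) * in1   # shift of the function
    # The two components sum exactly to the total income change.
    assert abs(volume_effect + productivity_effect - (out1 - out0)) < 1e-9
    return volume_effect, productivity_effect

# Input grows 100 -> 110 while output grows 200 -> 231.
vol, prod = decompose(200, 100, 231, 110)
print(vol, prod)  # roughly 20.0 from volume, 11.0 from productivity
```

Of the total income growth of 31, the part proportional to the input increase corresponds to replication; the remainder reflects the productivity gain.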
Jorgenson et al. (2014,2) give an empirical example. They show that the great
preponderance of economic growth in the US since 1947 involves the replication of
existing technologies through investment in equipment, structures, and software and
expansion of the labor force. Further they show that innovation accounts for only
about twenty percent of US economic growth.
In the case of a single production process (described above) the output is defined as
an economic value of products and services produced in the process. When we want
to examine an entity of many production processes we have to sum up the value-
added created in the single processes. This is done in order to avoid the double
accounting of intermediate inputs. Value-added is obtained by subtracting the
intermediate inputs from the outputs. The most well-known and used measure of
value-added is the GDP (Gross Domestic Product). It is widely used as a measure of
the economic growth of nations and industries.
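The value-added summation can be illustrated with a hypothetical three-stage production chain:

```python
# Sketch: summing value added across processes to avoid double counting
# intermediate inputs. The three-stage chain below is hypothetical.

# (stage, value of output, value of intermediate inputs bought from the
#  previous stage)
stages = [
    ("farm",   100,   0),   # grows wheat, no purchased intermediates
    ("mill",   180, 100),   # buys wheat, sells flour
    ("bakery", 300, 180),   # buys flour, sells bread
]

value_added = sum(out - inter for _, out, inter in stages)
final_output = stages[-1][1]

# Value added equals the value of final output (300), not the sum of all
# stage outputs (580), which would double-count the intermediates.
assert value_added == final_output == 300
print(value_added)  # 300
```

Summing the gross outputs instead (100 + 180 + 300 = 580) would count the wheat and the flour twice, which is exactly the double accounting the value-added measure avoids.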
Absolute (total) and average income
The production performance can be measured as an average or an absolute income.
Expressing performance both in average (avg.) and absolute (abs.) quantities is
helpful for understanding the welfare effects of production. For measurement of the
average production performance, we use the known productivity ratio: real output divided by real input.
The dual approach has long been recognized in growth accounting, but its interpretation has remained unclear. The following question has remained
unanswered: “Quantity based estimates of the residual are interpreted as a shift in
the production function, but what is the interpretation of the price-based growth
estimates?”[17]:18 We have demonstrated above that the real income change is
achieved by quantitative changes in production and the income distribution change to
the stakeholders is its dual. In this case the duality means that the same accounting
result is obtained by accounting the change of the total income generation (real
income) and by accounting the change of the total income distribution.
COST FUNCTION
Management uses this model to run different production scenarios and to predict the total cost of producing a product at different levels of output. The cost function is expressed as C(x) = FC + V·x, where C(x) is total production cost, FC is total fixed cost, V is variable cost per unit, and x is the number of units produced.
Example: Suppose annual fixed costs total $3,960 and variable costs are $5 + $2 per toy. The first step is to determine which costs are fixed and which are variable.
C(x) = FC + V(x)
A. At 1,200 units
C(1,200) = $3,960 + 1,200 × ($5 + $2)
C(1,200) = $12,360
Therefore, it would cost $12,360 to produce 1,200 toys in a year.
B. At 1,500 units
C(1,500) = $3,960 + 1,500 × ($5 + $2)
C(1,500) = $14,460
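A minimal sketch of the cost function C(x) = FC + V·x in Python, using the fixed and variable cost figures from the example above:

```python
# Sketch: the cost function C(x) = FC + V*x, with the example's figures
# (fixed costs of $3,960 per year, variable costs of $5 + $2 per toy).

def total_cost(units, fixed_cost=3960, unit_variable_cost=5 + 2):
    """Total production cost at a given output level."""
    return fixed_cost + unit_variable_cost * units

print(total_cost(1200))  # 12360
print(total_cost(1500))  # 14460
```

Note that at zero output the firm still incurs the full fixed cost of $3,960; only the V·x term varies with volume.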