F5 Technical Article
This article provides a step-by-step approach to decision trees, using a simple example to guide you through.
The addition of decision trees to the Paper F5 syllabus is a relatively recent one that
probably struck fear in the heart of many students. To be honest, I don’t blame them for
that. When I first studied decision trees, they had a similar effect on me: I hated them and
just didn’t fully understand the logic. The purpose of this article is to go through a step-by-
step approach to decision trees, using a simple example to guide you through. There is no
universal set of symbols used when drawing a decision tree but the most common ones
that we tend to come across in accountancy education are squares ( □), which are used to
represent ‘decisions’ and circles (○), which are used to represent ‘outcomes.’ Therefore, I
shall use these symbols in this article and in any suggested solutions for exam questions
where decision trees are examined.
Decision trees are useful when a decision involves a number of uncertain variables. For example, sales may be uncertain, but costs may be uncertain too. The value of some variables may also be dependent on the value of other variables: maybe if sales are 100,000 units, costs are $4 per unit, but if sales are 120,000 units costs fall to $3.80 per unit. Many outcomes may therefore be possible and some outcomes may also be dependent on previous outcomes. Decision trees provide a useful method of breaking down a complex problem into smaller, more manageable pieces.
There are two stages to making decisions using decision trees. The first stage is the
construction stage, where the decision tree is drawn and all of the probabilities and
financial outcome values are put on the tree. The principles of relevant costing are applied
throughout – ie only relevant costs and revenues are considered. The second stage is the
evaluation and recommendation stage. Here, the decision is ‘rolled back’ by calculating all
the expected values at each of the outcome points and using these to make decisions
while working back across the decision tree. A course of action is then recommended for
management.
A simple decision tree is shown below. It can be seen from the tree that there are two
choices available to the decision maker since there are two branches coming off the
decision point. The outcome for one of these choices, shown by the top branch off the
decision point, is clearly known with certainty, since there is no outcome point further
along this top branch. The lower branch, however, has an outcome point on it, showing
that there are two possible outcomes if this choice is made. Then, since each of the subsidiary branches off this outcome point also has a further outcome point on it, each with two branches coming off it, there are clearly two more sets of outcomes for each of these initial outcomes. It could be, for example, that the first two outcomes were showing
different income levels if some kind of investment is undertaken and the second set of
outcomes are different sets of possible variable costs for each different income level.
Once the basic tree has been drawn, like above, the probabilities and expected values
must be written on it. Remember, the probabilities shown on the branches coming off the
outcome points must always add up to 100%, otherwise there must be an outcome missing
or a mistake with the numbers being used. As well as showing the probabilities on the
branches of the tree, the relevant cash inflows/outflows must also be written on there too.
This is shown in the example later on in the article.
Once the decision tree has been drawn, the decision must then be evaluated.
1. Label all of the decision and outcome points – ie all the squares and circles. Start with
the ones closest to the right-hand side of the page, labelling the top and then the
bottom ones, and then move left again to the next closest ones.
2. Then, moving from right to left across the page, at each outcome point, calculate the
expected value of the cashflows by applying the probabilities to the cashflows. If
there is room, write these expected values on the tree next to the relevant outcome
point, although be sure to show all of your workings for them clearly beneath the tree
too.
Finally, the recommendation is made to management, based on the option that gives the
highest expected value.
It is worth remembering that using expected values as the basis for making decisions is
not without its limitations. Expected values give us a long run average of the outcome that
would be expected if a decision was to be repeated many times. So, if we are in fact
making a one-off decision, the actual outcome may not be very close to the expected value
calculated and the technique is therefore not very accurate. Also, estimating accurate
probabilities is difficult because the exact situation that is being considered may well not have arisen before.
The expected value criterion for decision making is useful where the attitude of the
investor is risk neutral. They are neither a risk seeker nor a risk avoider. If the decision
maker’s attitude to risk is not known, it is difficult to say whether the expected value
criterion is a good one to use. It may in fact be more useful to see what the worst-case
scenario and best-case scenario results would be too, in order to assist decision making.
Let me now take you through a simple decision tree example. For the purposes of
simplicity, you should assume that all of the figures given are stated in net present value
terms.
Example
A company is deciding whether to develop and launch a new product. Research and
development costs are expected to be $400,000 and there is a 70% chance that the
product launch will be successful, and a 30% chance that it will fail. If it is successful, the
levels of expected profits and the probability of each occurring have been estimated as
follows, depending on whether the product’s popularity is high, medium or low:
Profits (per annum)      Probability
High       $500,000      0.2
Medium     $400,000      0.5
Low        $300,000      0.3
If it is a failure, there is a 0.6 probability that the research and development work can be
sold for $50,000 and a 0.4 probability that it will be worth nothing at all.
The basic structure of the decision tree must be drawn, as shown below:
Next, the probabilities and the profit figures must be put on, not forgetting that the profits
from a successful launch last for two years, so they must be doubled.
Now, the decision points and outcome points must be labelled, starting from the right-
hand side and moving across the page to the left.
Now, calculate the expected values at each of the outcome points, by applying the
probabilities to the profit figures. An expected value will be calculated for outcome point A
and another one will be calculated for outcome point B. Once these have been calculated,
a third expected value will need to be calculated at outcome point C. This will be done by
applying the probabilities for the two branches off C to the two expected values that have
already been calculated for A and B.
These expected values can then be put on the tree if there is enough room.
Once this has been done, the decision maker can then move left again to decision point D.
At D, the decision maker compares the value of the top branch of the decision tree (which,
given there were no outcome points, had a certain outcome and therefore needs no
probabilities to be applied to it) to the expected value of the bottom branch. Costs will
then need to be deducted. So, at decision point D compare the EV of not developing the
product, which is $0, with the EV of developing the product once the costs of $400,000
have been taken off – ie $155,000.
Finally, the recommendation can be made to management. Develop the product because
the expected value of the profits is $155,000.
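For readers who like to check the arithmetic, here is a minimal sketch of the roll-back calculation, written in Python purely for illustration (the variable names are my own; a spreadsheet would do the same job).

# A sketch of the roll-back calculation, using the figures from the example:
# two-year profits of $1,000,000 / $800,000 / $600,000 for high / medium / low
# popularity, a $50,000 sale value for the R&D work if the launch fails,
# and a $400,000 development cost.

# Outcome point A: successful launch
ev_a = 0.2 * 1_000_000 + 0.5 * 800_000 + 0.3 * 600_000   # = 780,000

# Outcome point B: failed launch
ev_b = 0.6 * 50_000 + 0.4 * 0                            # = 30,000

# Outcome point C: weight A and B by the success/failure probabilities
ev_c = 0.7 * ev_a + 0.3 * ev_b                           # = 555,000

# Decision point D: develop (net of the $400,000 cost) or do nothing ($0)
ev_develop = ev_c - 400_000                              # = 155,000
decision = "develop" if ev_develop > 0 else "do not develop"
print(ev_develop, decision)                              # 155000.0 develop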
Often, there is more than one way that a decision tree could be drawn. In my example, there are actually five outcomes if the product is developed: a successful launch generating high, medium or low profits, or a failed launch where the research and development work is either sold or not sold.
Therefore, instead of decision point C having only two branches on it, and each of those
branches in turn having a further outcome point with two branches on, we could have
drawn the tree as follows:
You can see that the probabilities on the branches of the tree coming off outcome point A are now new. This is because they are joint probabilities and they have been calculated by combining the probabilities of success and failure (0.7 and 0.3) with the probabilities of high, medium and low profits (0.2, 0.5, 0.3). The joint probabilities are found easily simply by multiplying the two variables together each time:

Successful launch, high profits: 0.7 x 0.2 = 0.14
Successful launch, medium profits: 0.7 x 0.5 = 0.35
Successful launch, low profits: 0.7 x 0.3 = 0.21
Failed launch, research and development sold: 0.3 x 0.6 = 0.18
Failed launch, research and development not sold: 0.3 x 0.4 = 0.12
All of the joint probabilities above must, of course, add up to 1, otherwise a mistake has
been made.
Whether you use my initial method, which I always think is far easier to follow, or the
second method, your outcome will always be the same.
The decision tree example above is quite a simple one but the principles to be grasped
from it apply equally to a more complex decision resulting in a tree with far more decision
points, outcomes and branches on.
Finally, I always cross off the branch or branches after a decision point that show the
alternative I haven’t chosen, in this case being the ‘do not develop product’ branch. Not
everyone does it this way but I think it makes the tree easy to follow. Remember,
outcomes are not within your control, so branches off outcome points are never crossed
off. I have shown this crossing off of the branches below on my original, preferred tree:
Perfect information
The value of perfect information is the difference between the expected value of profit
with perfect information and the expected value of profit without perfect information. So,
in our example, let us say that an agency can provide information on whether the launch is
going to be successful and produce high, medium or low profits or whether it is simply
going to fail. The expected value with perfect information can be calculated using a small
table. At this point, it is useful to have calculated the joint probabilities mentioned in the
second decision tree method above because the answer can then be shown like this.
                                           Profit less                    EV with perfect
Outcome               Joint probability    development cost   Proceed?    information
High                  0.14                 $600,000           Yes         $84,000
Medium                0.35                 $400,000           Yes         $140,000
Low                   0.21                 $200,000           Yes         $42,000
Fail and sell         0.18                 $(350,000)         No          $0
Fail and don't sell   0.12                 $(400,000)         No          $0
Expected value with perfect information                                   $266,000
However, it could also be done by using the probabilities from our original tree in the table
below and then multiplying them by the success and failure probabilities of 0.7 and 0.3:
                                   Profit less                   EV with perfect
Demand level          Probability  development cost   Proceed?   information
High                  0.2          $600,000           Yes        $120,000
Medium                0.5          $400,000           Yes        $200,000
Low                   0.3          $200,000           Yes        $60,000
EV if the launch succeeds                                        $380,000

Fail and sell         0.6          $(350,000)         No         $0
Fail and don't sell   0.4          $(400,000)         No         $0
EV if the launch fails                                           $0

EV with perfect information = (0.7 x $380,000) + (0.3 x $0) = $266,000
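The same figures can be run through a few lines of Python. This is only a sketch of the table above; the $155,000 expected value without perfect information comes from the roll-back calculation earlier in the article.

# Expected value WITH perfect information: the product is only developed
# when the information says the launch will succeed, so the failure
# outcomes contribute nothing.
outcomes = {
    # outcome: (joint probability, profit less development cost, proceed?)
    "high":              (0.14,  600_000, True),
    "medium":            (0.35,  400_000, True),
    "low":               (0.21,  200_000, True),
    "fail and sell":     (0.18, -350_000, False),
    "fail, don't sell":  (0.12, -400_000, False),
}

ev_with_info = sum(p * profit for p, profit, proceed in outcomes.values() if proceed)
ev_without_info = 155_000    # from the decision tree evaluated earlier

value_of_perfect_info = ev_with_info - ev_without_info
print(round(ev_with_info), round(value_of_perfect_info))   # 266000 111000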
Imperfect information
In reality, information obtained is rarely perfect and is merely likely to give us more
information about the likelihood of different outcomes rather than perfect information
about them. However, the numbers involved in calculating the values of imperfect
information are rather complex and at Paper F5 level, any numerical question would need
to be relatively simple. You should refer to the recommended text for a worked example on
the value of imperfect information. Suffice it to say here that the value of imperfect information will always be less than the value of perfect information unless both are zero.
This would occur when the additional information would not change the decision. Note that
the principles that are applied for calculating the value of imperfect information are the
same as those applied for calculating the value of perfect information.
Written by a member of the Paper F5 examining team
Under the new exam structure for Paper F5, learning curves will continue to be examined,
the only change being that questions may vary a little bit more than they have done in the
past.
The December 2013 syllabus reads ‘estimate the learning effect and apply the learning
curve to a budgetary problem, including calculations on steady states.’ Thus far, this has
been interpreted to mean that the learning rate will always be given in a question and
questions will focus purely on 'estimating the learning effect' – that is, calculating the
labor time and usually cost for a given process. Historically, the requirements of questions
have been such it has been necessary for candidates to use the algebraic method, applying
the learning curve formula to problems, rather than the tabular approach.
Both methods, however, have actually been examinable under the syllabus, and for
teaching purposes, the tabular approach is always a good starting point to demonstrate
how the learning curve effect actually works.
In the syllabus for December 2014 onwards, the words ‘learning rate and learning effect’
will be used rather than simply ‘learning effect’. This wording has been changed in order to
make it clear that candidates could now be asked to calculate learning rates too.
The purpose of this article is, however, twofold: first, it is to summarize the history of the
learning curve effect and help candidates understand why it is important. Second, it is to
look at what past learning curve questions have required of candidates and to clarify how
future questions may go beyond this.
The first reported observation of the learning curve goes as far back as 1925 when aircraft
manufacturers observed that the number of man hours taken to assemble planes
decreased as more planes were produced. TP Wright subsequently established from his
research of the aircraft industry in the 1920s and 1930s that the rate at which learning
took place was not random at all and that it was actually possible to accurately predict
how much labor time would be required to build planes in the future. During World War II,
US government contractors then used the learning curve to predict cost and time for ship
and plane construction. Gradually, private sector companies also adopted it after the war.
The specific learning curve effect identified by Wright was that the cumulative average
time per unit decreased by a fixed percentage each time cumulative output doubled. While
in the aircraft industry this rate of learning was generally seen to be around 80%, in
different industries other rates occur. Similarly, depending on the industry in question, it is
often more appropriate for the unit of measurement to be a batch rather than an individual
unit.
The learning process starts as soon as the first unit or batch comes off the production line.
Since a doubling of cumulative production is required in order for the cumulative average
time per unit to decrease, it is clearly the case that the effect of the learning rate on
labour time will become much less significant as production increases. Eventually, the
learning effect will come to an end altogether. You can see this in Figure 1 below. When
output is low, the learning curve is really steep but the curve becomes flatter as
cumulative output increases, with the curve eventually becoming a straight line when the
learning effect ends.
Figure 1
The learning curve effect will not always apply, of course. It flourishes where certain
conditions are present. It is necessary for the process to be a repetitive one, for example.
Also, there needs to be a continuity of workers and they mustn’t be taking prolonged
breaks during the production process.
Let us now consider its importance in planning and control. If standard costing is to be
used, it is important that standard costs provide an accurate basis for the calculation of
variances. If standard costs have been calculated without taking into account the learning
effect, then all the labour usage variances will be favourable because the standard labour
hours that they are based on will be too high. This will make their use for control purposes
pointless.
Finally, it is worth noting that the use of the learning curve is not restricted to the assembly
industries it is traditionally associated with. It is also used in other less traditional sectors
such as professional practice, financial services, publishing and travel. In fact, research
has shown that just under half of users are in the service sector.
The learning curve formula, as shown below, is always given on the formula sheet in the
exam.
Y = ax^b

Where:
Y = cumulative average time per unit to produce x units
a = the time taken for the first unit of output
x = the cumulative number of units produced
b = the index of learning (log LR/log 2)
LR = the learning rate as a decimal
While a value for ‘b’ has usually been given in past exams there is no reason why this
should always be the case. All candidates should know how to use a scientific calculator
and should be sure to take one into the exam hall.
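To illustrate how the formula behaves, the sketch below applies it to purely illustrative figures (a first unit of 100 hours and an 80% learning rate); none of these numbers come from an exam question.

import math

def cumulative_average_time(a, x, learning_rate):
    """Y = a * x^b, where b = log(LR) / log(2)."""
    b = math.log(learning_rate) / math.log(2)
    return a * x ** b

a, lr = 100, 0.8   # illustrative: first unit 100 hours, 80% learning rate

for units in (1, 2, 4, 8):
    avg = cumulative_average_time(a, units, lr)
    print(f"{units:>2} units: average {avg:5.1f} hrs, total {avg * units:6.1f} hrs")

# Each doubling of cumulative output cuts the cumulative average time per
# unit to 80% of its previous level: 100, 80, 64, 51.2 hours.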
In June 2013, the learning effect was again examined in conjunction with lifetime costing.
Again, as has historically been the case, the learning rate was given in the question, as
was the value for ‘b’.
Back in June 2009, the learning curve effect was examined in conjunction with target
costing. Once again, the learning rate was given, and a value for ‘b’ was given, but this
time, an average cost for the first 128 units made was required. It was after this point that
the learning effect ended, so the question then went on to ask candidates to calculate the
cost for the last unit made, since this was going to be the cost of making one unit going
forward in the business.
It can be seen, just from the examples given above, that learning curve questions have
tended to follow a fairly regular pattern in the past. The problem with this is that
candidates don’t always actually think about the calculations they are performing. They
simply practise past papers, learn how to answer questions, and never really think beyond
this. In the workplace, when faced with calculations involving the learning effect,
candidates may not be able to tackle them. First, the learning rate will not be known in advance for a new process and, second, even if it has been estimated, differences may well arise between the expected learning rate and the actual learning rate experienced. Therefore, it seemed only right that future questions should examine
candidates’ ability to calculate the learning rate itself. This leads us on to the next section
of the article.
Example 1
P Co operates a standard costing system. The standard labour time per batch for its
newest product was estimated to be 200 hours, and resource allocation and cost data
were prepared on this basis.
The actual number of batches produced during the first six months and the actual time
taken to produce them is shown below:
Month        Number of batches produced   Total hours taken
June         1                            200
July         1                            152
August       2                            267.52
September    4                            470.8
October      8                            1,090.32
November     16                           2,180.64
Required
(a) Calculate the monthly learning rate that arose during the period.
(b) Identify when the learning period ended and briefly discuss the implications of this for P
Co.
Solution
(a) Monthly rates of learning

Month       Incremental         Incremental   Cumulative          Cumulative    Cumulative average
            number of batches   total hours   number of batches   total hours   hours per batch
June        1                   200           1                   200           200
July        1                   152           2                   352           176
August      2                   267.52        4                   619.52        154.88
September   4                   470.8         8                   1,090.32      136.29
October     8                   1,090.32      16                  2,180.64      136.29
November    16                  2,180.64      32                  4,361.28      136.29

Learning rate:
176/200 = 88%
154.88/176 = 88%
136.29/154.88 = 88%

The learning rate was therefore 88% each month up to the end of September. From October onwards the cumulative average hours per batch remains constant at 136.29, so the learning period ended at the end of September.
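The tabular working above is easy to reproduce programmatically. The following sketch uses the monthly figures from the question; because cumulative output happens to double every month, each month's learning rate is simply this month's cumulative average divided by last month's.

# (month, batches produced in the month, total hours taken in the month)
monthly = [
    ("June", 1, 200.0), ("July", 1, 152.0), ("August", 2, 267.52),
    ("September", 4, 470.8), ("October", 8, 1090.32), ("November", 16, 2180.64),
]

cum_batches = cum_hours = 0.0
prev_avg = None
for month, batches, hours in monthly:
    cum_batches += batches
    cum_hours += hours
    avg = cum_hours / cum_batches
    rate = f"{avg / prev_avg:.0%}" if prev_avg else "n/a"
    print(f"{month:<9} cumulative average {avg:7.2f} hrs per batch, rate {rate}")
    prev_avg = avg

# Prints an 88% rate up to September; from October the rate is 100%,
# ie the cumulative average is stuck at 136.29 hours and learning has ended.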
Example 2
The first batch of a new product took six hours to make and the total time for the first 16
units was 42.8 hours, at which point the learning effect came to an end.
Calculate the rate of learning.
Solution
Again, the easiest way to solve this problem and find the actual learning rate is to use a
combination of the tabular approach plus, in this case, a little bit of maths. There is an
alternative method that can be used that would involve some more difficult maths and use
of the inverse log button on the calculator, but this can be quite tricky and candidates
would not be expected to use this method. Should they choose to do so, however, full
marks would be awarded, of course.
Using algebra:
The cumulative average time per unit for the first 16 units = 42.8/16 = 2.675 hours.
Going from one unit to 16 units, cumulative output doubles four times (1, 2, 4, 8, 16), so the cumulative average time per unit = 6 x r^4, where r is the learning rate.
Therefore r^4 = 2.675/6 = 0.4458, and r = 0.4458^(1/4) = 0.817, ie a learning rate of approximately 81.7%.
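The working can also be wrapped up in a small, reusable function. This is just a sketch of the calculation above; the function name and layout are my own.

import math

def learning_rate(first_unit_time, cumulative_total, units):
    """Solve Y = a * x^b for the learning rate r, using the fact that the
    cumulative average time equals a * r ** (number of doublings)."""
    doublings = math.log2(units)                     # 16 units -> 4 doublings
    cumulative_average = cumulative_total / units    # 42.8 / 16 = 2.675 hours
    return (cumulative_average / first_unit_time) ** (1 / doublings)

r = learning_rate(first_unit_time=6, cumulative_total=42.8, units=16)
print(f"{r:.1%}")   # 81.7%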
SUMMARY
The above two examples demonstrate the type of requirements that you may find in future
questions on learning curves, together with the old-style requirements that have
traditionally been found in past questions in Paper F5. Remember: nothing new is being
added to the syllabus here. All that we are doing is encouraging you to think a little and, in
some cases, perhaps use a little bit of the maths that, as a trainee accountant, you should be more than capable of applying.
A member of the Paper F5 examining team shares her latest read and how it changed her
views on throughput accounting and the theory of constraints
I’ve just finished reading a book. It was the type of book that you pick up and you cannot
put down (other than to perform the mandatory tasks that running a house and looking
after a family entail!) Even the much-awaited new series of one of my favourite television
programmes couldn’t tempt me away from my book.
Now obviously I’m telling you this for a reason. I love reading and it’s not unusual to find
me glued to a book for several days, if it’s a good one. But you’ve gathered by now that the
book I’ve been reading was not the usual Man Booker or Orange prize fiction novel that you
might ordinarily find tucked away in my handbag. It was in fact The Goal: A Process of
Ongoing Improvement by Eli Goldratt and Jeff Cox. If by now you’ve settled quickly into
the belief that I must conform to society’s expectations of your typical ‘number crunching’
accountant of which – by the way – I’ve met few in reality, you are wrong. So what then,
you may ask, makes this book so different from the image that the title conjures up? Let
me tell you all about it.
The Goal, originally published back in 1984, presents the theory of constraints and
throughput accounting within the context of a novel. It tells the story of Alex Rogo, a plant
manager at a fictional manufacturing company called UniCo, which is facing imminent
closure unless Alex can turn the loss-making plant into a profitable one within three
months. In his attempt to do so, Alex is forced to question the whole belief in the US at the
time that success in manufacturing is represented by a 100% efficient factory (ie everyone
and every machine is busy 100% of the time), which keeps cost per unit as low as possible.
To be honest, before I read the book, I wasn’t really convinced about throughput
accounting – although the theory of constraints has always made perfect sense to me. But,
having read about both in the context of a very believable plant that was representative of
many at the time, my views have changed. It’s easy to stand in a classroom and lecture
about throughput accounting and criticise it for being ‘nothing new’, but what we have to
remember is, back in 1984, this was new, and for those companies that adopted it, it made
a huge difference.
I’m aware that, if I want you to share my renewed interest in throughput accounting, I need
to tell you more about the story that gripped me. If I don’t do this, you’ll just go away
having read yet another article about throughput accounting, and any doubts that you have
about its relevance today will remain the same. On the other hand, I’m also aware that,
when sitting professional exams, you need to have a working knowledge of throughput
accounting that you can apply in the exam hall. Consequently, I’ve decided that, in this
first article, I’ll summarise the story contained in The Goal, bringing out some of the basic
principles of the theory of constraints and throughput accounting. Then, in the second
article, I’ll talk you through a practical approach to questions on throughput accounting.
Alex Rogo’s journey begins with a chance meeting with his old physics teacher, Jonah, at
an airport, after attending a conference about robotics. This is just before Alex finds out
about the threat of closure at the plant. The UniCo factory has been using robotic
machines for some time now and Alex is proudly telling Jonah about the improvements in
efficiency at the factory. Jonah is quick to question whether these improvements in
efficiency have actually led to an improvement in profits. Alex is confused by the way the
conversation is going. This confusion is reflective of the US thinking at the time. There is
so much focus on efficiency and reducing labour costs with increased automation, but
without consideration of whether either of these things are having any impact on profit. In
the case of UniCo – and indeed many other real factories at the time – the so-called
improvements in efficiency are not leading to increased profits. In fact, they seem to be
leading to losses.
Jonah leads Alex to consider what the goal of UniCo really is. Until this point, he – like his
superiors at Head Office – has just assumed that if the factory is producing increasingly
more parts at a lower unit cost, it is increasingly efficient and therefore must be doing
well. All the performance criteria that the business is using support this view; all that Alex’s bosses seem to be concerned about is cost efficiencies.
After some reflection, Alex realises that the overriding goal of an organisation is to make
money. Just because a factory is making more parts does not mean to say that it is making
more money. In fact, UniCo shows that just the opposite is happening. The plant has
become seemingly more efficient, thanks to the use of the robots, but the fact is that
inventory levels are huge and the plant is constantly failing to meet order deadlines. It is
standard practice for orders to be five or six months late. An order at the plant only ever
seems to go out when one of the customers loses patience and complains loudly, resulting
in the order being expedited – ie all other work is put on hold in order to get the one order
out. Customers are becoming increasingly dissatisfied, losses are growing, and crisis point
is reached.
Clearly, the ‘goal’ – that the objective of the plant is to make money – needs to be more clearly defined in order to generate improvements, and Jonah helps Alex do this by explaining
that it will be achieved by ‘increasing throughput whilst simultaneously reducing inventory
and operational expense’. Some definitions are given at this point:
‘throughput’ is the rate at which the system generates money through sales
‘inventory’ is all the money that the system has invested in purchasing things that it
intends to sell
‘operational expense’ is all the money that the system spends in order to turn inventory
into throughput
Having worked out what the goal is, Alex is then left with the difficult task of working out
how that goal can be achieved. The answer begins to present itself to Alex when he takes
his son and some other boys on a 10-mile hike. Given that the average boy walks at two
miles an hour, Alex expects to reach the halfway point on the hike after about two and a
half hours of walking. When this doesn’t happen, and Alex finds that the group is behind
schedule and big gaps are appearing between them, he begins to question what is going
on. He soon realises that the problem is arising because one of the boys is much slower
than the others. This boy is preventing the other boys from going faster and Alex realises
that, if everyone is to stay in one group as they must, the group can only go as fast as their
slowest walker. The slow walker is effectively a bottleneck: the factor that prevents the
group from going faster. It doesn’t matter how fast the quickest walker is; he cannot make
up for the fact that the slowest walker is really slow. While the average speed may be two
miles per hour, the boys can all only really walk at the speed of the slowest boy.
However, Alex also realises that they can increase the boy’s speed by sharing out the
heavy load he is carrying in his bag, enabling him to walk faster. In this way, they can
‘elevate the bottleneck’ – ie increase the capacity of the critical resource. Alex cannot
wait to get back and identify where the bottlenecks are happening in his factory and find
out if they can be elevated in any way, without laying out any capital expenditure.
The other thing that Alex gains a better understanding of on the hike is the relationship
between dependent events and statistical fluctuations. Jonah has already explained to
Alex that the belief that a balanced plant is an efficient plant is a flawed belief. In a
balanced plant, the capacity of each and every resource is balanced exactly with the
demand from the market. In the 1980s, it was deemed to be ideal because, at the time,
manufacturing managers in the Western world believed that, if they had spare capacity,
they were wasting money. Therefore, they tried to trim capacity wherever they could, so
that no resource was idle and everybody always had something to work on. However, as
Jonah explains, when capacity is trimmed exactly to marketing demand, throughput goes
down and inventory goes up. Since inventory goes up, the cost of carrying it – ie operational expense – also goes up. These things happen because of the combination of two
phenomena: dependent events and statistical fluctuations.
The fact that one boy walks at three miles an hour and one boy walks at one mile an hour
on the hike is evidence of statistical fluctuations. But the actual opportunity for the higher
fluctuation of three miles an hour to occur is limited by the constraint of the one mile per
hour walker. The fast boy at the front of the group can only keep on walking ahead if the
other boys are also with him – ie he is dependent on them catching up if he is to reach his
three mile per hour speed. Where there are dependent events, such as this, the opportunity
for higher fluctuations is limited. Alex takes this knowledge back to the factory with him
and sets about rescuing his plant.
IDENTIFYING BOTTLENECKS
Back at the plant, Alex and his team set out to identify which machines at the plant are the
bottleneck resources. After talking to staff and walking around the factory, where there
are big piles of inventory sitting in front of two main machines, the bottlenecks become
obvious. Eighty per cent of parts have to go through these machines, and the team make
sure that all such parts are processed on the non-bottleneck machines in priority to the
other 20% of parts, by marking them up with a red label. The parts that don’t go through
the bottlenecks are marked with a green label. The result? Throughput increases. But the
problem? Unfortunately, it doesn’t increase enough to save the factory.
ELEVATING BOTTLENECKS
The next step is therefore to try and elevate the capacity of the bottlenecks. This is not
easy without spending money, but observation shows that, at times, the bottleneck
machines are still idle, despite the labelling system giving priority to the parts
that have to be ready to go through the bottleneck machines. This is partly because
workers are taking their breaks before getting the machines running again, and partly
because they have left the machines unmanned because they have been called away to
work on another (non-bottleneck) machine. Both of these absences result in the machines
becoming idle. At this point, Alex learns an important lesson: an hour lost on a bottleneck
machine is an hour lost for the entire system. This hour can never be recouped. It is
pointless to leave a bottleneck machine unmanned in order to go and load up a non-
bottleneck machine because there is spare capacity on the non-bottleneck machine
anyway. It doesn’t matter if it’s not running for a bit. But it does matter in the case of the
bottleneck. From this point onwards, the two bottlenecks are permanently manned and
permanently running. Their capacity is elevated this way, along with another few changes
that are implemented.
At this point, Alex and his team think they have saved the factory, and then suddenly they
find that new bottlenecks seem to be appearing. Parts with green labels on are not being
completed in sufficient quantities, meaning that final assembly of the company’s products
is again not taking place, and orders are being delayed again (because final assembly of
products requires both bottleneck and non-bottleneck parts). Alex calls Jonah in a panic
and asks for help. Jonah soon identifies the problem. Factory workers are still trying to be
as efficient as possible, all of the time. This means that they are getting their machines to
produce as many parts as possible, irrespective of the number of parts that can actually be
processed by the bottleneck.
Call a bottleneck machine ‘X’ and a non-bottleneck machine ‘Y’. Parts that do not need to go through X can be completed on Y machines alone. As for those products that do need to go through X, they may, for example, go from Y to Y
to X to Y (as there are numerous steps involved in the production process). But if the
capacity of the first Y machine is far higher than the capacity of the next Y machine, and it
processes excessive X parts, another bottleneck may look like it has appeared on the
second Y machine because so many red labelled parts are being fed through that it never
gets to process the green ones, which are also necessary for final assembly. Suddenly Alex
realises that all machines must work at the pace set by the bottleneck machines, just like
the boys on the hike that had to walk at the pace of the slowest walker.
Consequently, Alex realises that it is really important to let Y machines and workers sit
idle when they have produced to the capacity of the bottleneck machines. By definition,
they have spare capacity. It’s not only wasteful to produce parts that are not needed or
cannot be processed; it also clogs up the whole system and makes it seem as if new
bottlenecks are appearing. This idea of idle time not only being acceptable but also being
essential flies in the face of everything that is believed at the time and, yet, when you
understand the theory of constraints, it makes perfect sense. A balanced factory is not
efficient at all; it is very inefficient because different machines and processes have
different capacities, and if machines that have spare capacity are working 100% of the
time, they are producing parts that are not needed. This is wasteful, not efficient. As
evidenced in the novel, inventory goes up and throughput goes down. Alex is quick to
resolve the problem and get things running smoothly again.
Given that producing excess inventories both pushes costs up and prevents throughput, it
becomes obvious that throughput accounting and just in time operate very well together.
This becomes clear towards the end of the novel when UniCo secures even more orders by
reducing its delivery time dramatically. It is able to do this by adopting some of the
principles of just-in-time.
First, Alex reduces batch sizes substantially. For those unfamiliar with throughput
accounting and just-in-time, it can be hard to get past the idea that if batch sizes are
halved, financial results may still improve. The novice believes that if batch sizes are
halved, costs must go up, because more orders are needed, more set ups are needed, more
deliveries are needed, and so on... and surely these costs must be high? But the fact is – as
proved in the novel – inventory costs are also halved and, even more importantly, lead time
is halved, which in this case gives UniCo a competitive advantage. Throughput increases
dramatically because of increased sales volumes. These increased sales volumes also lead to a significantly lower operating cost per unit, which, along with the reduced inventory costs, more than makes up for the increase in the other costs. Given that there is spare
capacity for all of the non-bottleneck machines anyway, if the number of set ups for these
is increased, no real additional cost arises because there is idle time. As Jonah says: ‘An
hour saved on a non-bottleneck resource is a mirage.’
CONCLUSION
It is not possible, within the space of a few pages, to convey everything that The Goal has
to say. To think that I could do so would be an insult to the authors of this 273-page novel.
Nor is the theory contained within the novel beyond questioning and criticism; but this
article was not meant as a critique.
Hopefully, however, I have told you enough to convince you that this book is worth reading
should you have a couple of days to spare sometime. I haven’t, after all, told you the
ending... Also, you should now have an understanding of the background to my second
article, which you will find in the next issue of Student Accountant.
In the previous article, a member of the Paper F5 examining team revealed all about The
Goal, the book in which the theory of constraints and throughput accounting were
introduced in the context of a novel. In this second article, she sets out the five focusing
steps of the theory of constraints, briefly explaining each one
Then, I will go through two examples showing you how these steps might be applied in
practice or in exam questions. It’s worth noting at this stage that, while the theory of
constraints and throughput accounting were introduced in The Goal, they were further
developed by Goldratt later.
Step 1: Identify the system’s bottlenecks

The total time required to make 50,000 units of the product can be calculated and compared to the time available in order to identify the bottleneck. It is clear that the heating process is the bottleneck: the organisation will in fact only be able to produce 40,000 units (120,000/3) as things stand.
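The data table for this example is not reproduced here, so the sketch below uses assumed figures for the other processes; only the 50,000-unit requirement and the heating figures (3 hours per unit, 120,000 hours available) are taken from the example. It simply shows the shape of the Step 1 comparison.

# Step 1: compare the hours needed for 50,000 units with the hours available.
# The 'pressing' and 'assembly' figures are assumed for illustration only.
demand = 50_000

processes = {
    # process: (hours per unit, hours available)
    "pressing": (2, 150_000),   # assumed
    "heating":  (3, 120_000),   # from the example
    "assembly": (1, 100_000),   # assumed
}

for name, (hours_per_unit, hours_available) in processes.items():
    hours_needed = demand * hours_per_unit
    max_output = hours_available // hours_per_unit
    flag = "  <-- bottleneck" if hours_needed > hours_available else ""
    print(f"{name:<9} needs {hours_needed:>7,} of {hours_available:>7,} hrs;"
          f" max output {max_output:>7,} units{flag}")

# Heating needs 150,000 hours but only 120,000 are available, so output is
# limited to 120,000 / 3 = 40,000 units: heating is the bottleneck.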
Step 2: Decide how to exploit the system’s bottlenecks

This involves making sure that the
bottleneck resource is actively being used as much as possible and is producing as many
units as possible. So, ‘productivity’ and ‘utilisation’ are the key words here. In The Goal,
Alex noticed that the NCX 10 was sometimes dormant and immediately changed this by
making sure that set ups took place before workers went on breaks, so that the machines
were always left running. Similarly, the furnaces were sometimes left idle for extended
periods before the completed parts were unloaded and new parts were put in. This was
because workers were being called away to work on non-bottleneck machines, rather than
being left standing idle while waiting for the furnaces to heat the parts. This was
addressed by making sure that there were always workers at the furnaces, even if they
had nothing to do for a while.
Step 3: Subordinate everything else to the decisions made in Step 2
The main point here is that the production capacity of the bottleneck resource should
determine the production schedule for the organisation as a whole. Remember how, in the
previous article, I talked about how new bottlenecks seemed to be appearing at the UniCo
plant, because non-bottleneck machines were producing more parts than the bottleneck
resources could absorb? Idle time is unavoidable and needs to be accepted if the theory of
constraints is to be successfully applied. To push more work into the system than the
constraint can deal with results in excess work-in-progress, extended lead times, and the
appearance of what looks like new bottlenecks, as the whole system becomes clogged up.
By definition, the system does not require the non-bottleneck resources to be used to their
full capacity and therefore they must sit idle for some of the time.
Step 4: Elevate the system’s bottlenecks
In The Goal, Alex was initially convinced that there was no way to elevate the capacities
of the NCX 10 machine and the furnace without investing in new machinery, which was not
an option. Jonah made him and his team think about the fact that, while the NCX 10 alone
performed the job of three of the old machines, and was very efficient at doing that job, the
old machines had still been capable of producing parts. Admittedly, the old machines were
slower but, if used alongside the NCX 10, they were still capable of elevating production
levels. Thus, one of Alex’s staff managed to source some of these old machines from one
of UniCo’s sister plants; they were sitting idle there, taking up factory space, so the
manager was happy not to charge Alex’s plant for the machines. In this way, one of the
system’s bottlenecks was elevated without requiring any capital investment.
This example of elevating a bottleneck without cost is probably unusual. Normally,
elevation will require capital expenditure. However, it is important that an organisation
does not ignore Step 2 and jumps straight to Step 4, and this is what often happens. There
is often untapped production capacity that can be found if you look closely enough.
Elevation should only be considered once exploitation has taken place.
Step 5: If a constraint has been broken in Step 4, go back to Step 1, but do not let inertia become the system’s new bottleneck
When a bottleneck has been elevated, a new bottleneck will eventually appear. This could
be in the form of another machine that can now process fewer units than the elevated
bottleneck. Eventually, however, the ultimate constraint on the system is likely to be
market demand. Whatever the new bottleneck is, the message of the theory of constraints
is: never get complacent. The system should be one of ongoing improvement because
nothing ever stands still for long.
I am now going to have a look at an example of how a business can go about exploiting the
system’s bottlenecks – ie using them in a way so as to maximise throughput. In practice,
there may be lots of options open to the organisation such as the ones outlined in The
Goal. In the context of an exam question, however, you are more likely to be asked to
show how a bottleneck can be exploited by maximising throughput via the production of an
optimum production plan. This requires an application of the simple principles of key factor
analysis, otherwise known as limiting factor analysis or principal budget factor analysis.
In key factor analysis, the contribution per unit is first calculated for each product, then a
contribution per unit of scarce resource is calculated by working out how much of the
scarce resource each unit requires in its production. In a throughput accounting context, a
very similar calculation is performed, but this time it is not contribution per unit of scarce
resource which is calculated, but throughput return per unit of bottleneck resource.
Throughput is calculated as ‘selling price less direct material cost.’ This is different from
the calculation of ‘contribution’, in which both labour costs and variable overheads are also
deducted from selling price. It is an important distinction because the fundamental belief
in throughput accounting is that all costs except direct materials costs are largely fixed –
therefore, to work on the basis of maximising contribution is flawed because to do so is to
take into account costs that cannot be controlled in the short term anyway. One cannot
help but agree with this belief really since, in most businesses, it is simply not possible, for
example, to hire workers on a daily basis and lay workers off if they are not busy. A
workforce has to be employed within the business and available for work if there is work to
do. You cannot refuse to pay a worker if he is forced to sit idle by a machine for a while.
Example 1
Beta Co produces 3 products, E, F and G, details of which are shown below:
                 E          F          G
                 $          $          $
Required:
Calculate the optimum product mix each month.
Answer
A few simple steps can be followed:
1. Calculate the throughput per unit for each product.
2. Calculate the throughput return per hour of bottleneck resource.
3. Rank the products in order of the priority in which they should be produced, starting with
the product that generates the highest return per hour first.
4. Calculate the optimum production plan, allocating the bottleneck resource to each one
in order, being sure not to exceed the maximum demand for any of the products.
It is worth noting here that you often see another step carried out between Steps 2 and 3
above. This is the calculation of the throughput accounting ratio for each product. Thus
far, ratios have not been discussed, and while I am planning on mentioning them later, I
have never seen the point of inserting this extra step in when working out the optimum
production plan. The ranking of the products using the return per factory hour will always
produce the same ranking as that produced using the throughput accounting ratio, so it
doesn’t really matter whether you use the return or the ratio.
                                            E        F        G
                                            $        $        $
Throughput per unit                         60       40       45
Time on bottleneck resource (hours)         5        4        3
Return per factory hour                     $12      $10      $15
Ranking                                     2        3        1
It is worth noting that, before the time taken on the bottleneck resource was taken into
account, product E appeared to be the most profitable because it generated the highest
throughput per unit. However, applying the theory of constraints, the system’s bottleneck
must be exploited by using it to produce the products that maximise throughput per hour
first (Step 2 of the five focusing steps). This means that product G should be produced in
priority to E.
In practice, Step 3 will be followed by making sure that the optimum production plan is
adhered to throughout the whole system, with no machine making more units than can be
absorbed by the bottleneck, and sticking to the priorities decided.
When answering a question like this in an exam it is useful to draw up a small table, like
the one shown below. This means that the marker can follow your logic and award all
possible marks, even if you have made an error along the way.
Product   No of units   Hours per unit   Total hours   Return per hour   Total throughput
G         40,000        3                120,000       $15               $1,800,000
E         30,000        5                150,000       $12               $1,800,000
F         12,500        4                50,000        $10               $500,000
                                         320,000                         $4,100,000
Each time you allocate time on the bottleneck resource to a product, you have to ask
yourself how many hours you still have available. In this example, there were enough hours
to produce the full quota for G and E. However, when you got to F, you could see that out of
the 320,000 hours available, 270,000 had been used up (120,000 + 150,000), leaving only
50,000 hours spare.
Therefore, the number of units of F that could be produced was a balancing figure – 50,000
hours divided by the four hours each unit requires – ie 12,500 units.
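The same allocation can be scripted. In the sketch below the return per hour, hours per unit and the maximum demand for G and E are taken from the table above; the maximum demand for F is not shown in the extract, so a figure of 25,000 units is assumed (any figure above 12,500 gives the same plan).

# product: (throughput return per bottleneck hour, hours per unit, max demand)
products = {
    "E": (12, 5, 30_000),
    "F": (10, 4, 25_000),   # maximum demand assumed for illustration
    "G": (15, 3, 40_000),
}
hours_available = 320_000

plan, total_throughput = {}, 0
# Rank by return per bottleneck hour (highest first) and allocate hours.
for name, (ret_per_hr, hrs_per_unit, demand) in sorted(
        products.items(), key=lambda kv: kv[1][0], reverse=True):
    units = min(demand, hours_available // hrs_per_unit)
    hours_available -= units * hrs_per_unit
    plan[name] = units
    total_throughput += units * hrs_per_unit * ret_per_hr

print(plan)               # {'G': 40000, 'E': 30000, 'F': 12500}
print(total_throughput)   # 4100000, ie $4.1m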
The above example concentrates on Steps 2 and 3 of the five focusing steps. I now want to
look at an example of the application of Steps 4 and 5. I have kept it simple by assuming
that the organisation only makes one product, as it is the principle that is important here,
rather than the numbers. The example also demonstrates once again how to identify the
bottleneck resource (Step 1) and then shows how a bottleneck may be elevated, but will
then be replaced by another. It also shows that it may not always be financially viable to
elevate a bottleneck.
Example 2
Cat Co makes a product using three machines – X, Y and Z. The capacity of each machine
is as follows:
Machine                     X        Y        Z
The demand for the product is 1,000 units per week. For every additional unit sold per
week, net present value increases by $50,000. Cat Co is considering the following possible
purchases (they are not mutually exclusive):
Purchase 1 Replace machine X with a newer model. This will increase capacity to 1,100
units per week and costs $6m.
Purchase 2 Invest in a second machine Y, increasing capacity by 550 units per week. The
cost of this machine would be $6.8m.
Purchase 3 Upgrade machine Z at a cost of $7.5m, thereby increasing capacity to 1,050
units.
Required:
Which is Cat Co’s best course of action?
Answer
First, it is necessary to identify the system’s bottleneck resource. Clearly, this is machine
Z, which only has the capacity to produce 500 units per week. Purchase 3 is therefore the
starting point when considering the logical choices that face Cat Co. It would never be
logical to consider either Purchase 1 or 2 in isolation because neither machine X nor machine Y is the starting bottleneck. Let’s have a look at how the capacity
of the business increases with the choices that are available to it.
Machine X      Machine Y      Machine Z      Demand
* = bottleneck resource
From the table above, it can be seen that once a bottleneck is elevated, it is then replaced
by another bottleneck until ultimately market demand constrains production. At this point,
it would be necessary to look beyond production and consider how to increase market
demand by, for example, increasing advertising of the product.
In order to make a decision as to which of the machines should be purchased, if any, the
financial viability of the three options should be calculated.
                Buy Z        Buy Z and Y    Buy Z, Y and X
Cost ($'000)    (7,500)      (14,300)       (20,300)
The company should therefore invest in all three machines if it has enough cash to do so.
The example of Cat Co demonstrates the fact that, as one bottleneck is elevated, another
one appears. It also shows that elevating a bottleneck is not always financially viable. If
Cat Co was only able to afford machine Z, it would be better off making no investment at
all because if Z alone is invested in, another bottleneck appears too quickly for the initial
investment cost to be recouped.
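To see the whole Cat Co argument in one place, the sketch below evaluates the three cumulative purchases. Only machine Z's capacity of 500 units per week is stated above, so the starting capacities for X and Y are assumed purely for illustration; the point is the pattern of the answer, not the exact net figures.

# Starting capacities: only Z's 500 units per week is given; X and Y are assumed.
capacities = {"X": 800, "Y": 600, "Z": 500}
demand = 1_000
npv_per_extra_unit = 50_000
base_output = min(min(capacities.values()), demand)      # 500 units per week

purchases = [                                            # applied cumulatively
    ("Buy Z",          {"Z": 1_050},      7_500_000),
    ("Buy Z and Y",    {"Y": 600 + 550},  6_800_000),
    ("Buy Z, Y and X", {"X": 1_100},      6_000_000),
]

cumulative_cost = 0
for label, change, cost in purchases:
    capacities.update(change)
    cumulative_cost += cost
    output = min(min(capacities.values()), demand)       # new constraint
    benefit = (output - base_output) * npv_per_extra_unit
    print(f"{label:<15} output {output:>5} units/week, "
          f"net benefit {benefit - cumulative_cost:>12,.0f}")

# With these assumed figures, buying Z alone is not worthwhile, but buying
# all three is, and market demand then becomes the binding constraint.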
RATIOS
I want to finish off by briefly mentioning throughput ratios. There are three main ratios that
are calculated: (1) return per factory hour, (2) cost per factory hour and (3) the throughput
accounting ratio.
(1) Return per factory hour
Throughput per unit/product time on bottleneck resource. As we saw in Example 1, the
return per factory hour needs to be calculated for each product.
(2) Cost per factory hour = total factory costs/total time available on bottleneck resource.
The ‘total factory cost’ is simply the ‘operational expense’ of the organisation referred to in
the previous article. If the organisation was a service organisation, we would simply call it
‘total operational expense’ or something similar. The cost per factory hour is across the
whole factory and therefore only needs to be calculated once.
(3) Throughput accounting ratio = return per factory hour/cost per factory hour.
In any organisation, you would expect the throughput accounting ratio to be greater than
1. This means that the rate at which the organisation is generating cash from sales of this
product is greater than the rate at which it is incurring costs. It follows on, then, that if the
ratio is less than 1, this is not the case, and changes need to be made quickly.
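Finally, the three ratios are easy to compute once the bottleneck figures are known. The sketch below uses product G's figures from Example 1 and an assumed total factory cost of $3.2m, since no cost figure is given in this article.

# Product G from Example 1; total factory cost is assumed for illustration.
throughput_per_unit = 45          # $15 per bottleneck hour x 3 hours per unit
time_on_bottleneck = 3            # hours per unit
total_time_available = 320_000    # bottleneck hours
total_factory_cost = 3_200_000    # assumed; not given in the article

return_per_factory_hour = throughput_per_unit / time_on_bottleneck   # 15.0
cost_per_factory_hour = total_factory_cost / total_time_available    # 10.0
throughput_accounting_ratio = return_per_factory_hour / cost_per_factory_hour

print(throughput_accounting_ratio)   # 1.5, ie above 1, so cash is generated
                                      # faster than costs are incurred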
CONCLUSION
At this point, I’m hopeful that you are now looking forward to reading The Goal as soon as
possible and that you have a better understanding of the theory of constraints and
throughput accounting, which you can put into practice by tackling some questions.
The total time required to make 50,000 units of the product can be calculated and
compared to the time available in order to identify the bottleneck.
It is clear that the heating process is the bottleneck. The organisation will in fact only be
able to produce 40,000 units (120,000/3) as things stand.
Step 2: Decide how to exploit the system’s bottlenecks This involves making sure that the
bottleneck resource is actively being used as much as possible and is producing as many
units as possible. So, ‘productivity’ and ‘utilisation’ are the key words here. In ‘ The Goal’,
Alex noticed that the NCX 10 was sometimes dormant and immediately changed this by
making sure that set ups took place before workers went on breaks, so that the machines
were always left running. Similarly, the furnaces were sometimes left idle for extended
periods before the completed parts were unloaded and new parts were put in. This was
because workers were being called away to work on non-bottleneck machines, rather than
being left standing idle while waiting for the furnaces to heat the parts. This was
addressed by making sure that there were always workers at the furnaces, even if they
had nothing to do for a while.
Step 3: Subordinate everything else to the decisions made in Step 2
The main point here is that the production capacity of the bottleneck resource should
determine the production schedule for the organisation as a whole. Remember how, in the
previous article, I talked about how new bottlenecks seemed to be appearing at the UniCo
plant, because non-bottleneck machines were producing more parts than the bottleneck
resources could absorb? Idle time is unavoidable and needs to be accepted if the theory of
constraints is to be successfully applied. To push more work into the system than the
constraint can deal with results in excess work-in-progress, extended lead times, and the
appearance of what looks like new bottlenecks, as the whole system becomes clogged up.
By definition, the system does not require the non-bottleneck resources to be used to their
full capacity and therefore they must sit idle for some of the time.
Step 4: Elevate the system’s bottlenecks
In The Goal, Alex was initially convinced that there was no way to elevate the capacities
of the NCX 10 machine and the furnace without investing in new machinery, which was not
an option. Jonah made him and his team think about the fact that, while the NCX 10 alone
performed the job of three of the old machines, and was very efficient at doing that job, the
old machines had still been capable of producing parts. Admittedly, the old machines were
slower but, if used alongside the NCX 10, they were still capable of elevating production
levels. Thus, one of Alex’s staff managed to source some of these old machines from one
of UniCo’s sister plants; they were sitting idle there, taking up factory space, so the
manager was happy not to charge Alex’s plant for the machines. In this way, one of the
system’s bottlenecks was elevated without requiring any capital investment.
This example of elevating a bottleneck without cost is probably unusual. Normally,
elevation will require capital expenditure. However, it is important that an organisation
does not ignore Step 2 and jumps straight to Step 4, and this is what often happens. There
is often untapped production capacity that can be found if you look closely enough.
Elevation should only be considered once exploitation has taken place.
Step 5: If a new constraint is broken in Step 4, go back to Step 1, but do not let inertia
become the system’s new bottleneck
When a bottleneck has been elevated, a new bottleneck will eventually appear. This could
be in the form of another machine that can now process less units than the elevated
bottleneck. Eventually, however, the ultimate constraint on the system is likely to be
market demand. Whatever the new bottleneck is, the message of the theory of constraints
is: never get complacent. The system should be one of ongoing improvement because
nothing ever stands still for long.
I am now going to have a look at an example of how a business can go about exploiting the
system’s bottlenecks – ie using them in a way so as to maximise throughput. In practice,
there may be lots of options open to the organisation such as the ones outlined in The
Goal. In the context of an exam question, however, you are more likely to be asked to
show how a bottleneck can be exploited by maximising throughput via the production of an
optimum production plan. This requires an application of the simple principles of key factor
analysis, otherwise known as limiting factor analysis or principal budget factor.
In key factor analysis, the contribution per unit is first calculated for each product, then a
contribution per unit of scarce resource is calculated by working out how much of the
scarce resource each unit requires in its production. In a throughput accounting context, a
very similar calculation is performed, but this time it is not contribution per unit of scarce
resource which is calculated, but throughput return per unit of bottleneck resource.
Throughput is calculated as ‘selling price less direct material cost.’ This is different from
the calculation of ‘contribution’, in which both labour costs and variable overheads are also
deducted from selling price. It is an important distinction because the fundamental belief
in throughput accounting is that all costs except direct materials costs are largely fixed –
therefore, to work on the basis of maximising contribution is flawed because to do so is to
take into account costs that cannot be controlled in the short term anyway. One cannot
help but agree with this belief really since, in most businesses, it is simply not possible, for
example, to hire workers on a daily basis and lay workers off if they are not busy. A
workforce has to be employed within the business and available for work if there is work to
do. You cannot refuse to pay a worker if he is forced to sit idle by a machine for a while.
Example 1
Beta Co produces 3 products, E, F and G, details of which are shown below:
G F E
$ $ $
Required:
Calculate the optimum product mix each month.
Answer
A few simple steps can be followed:
1. Calculate the throughput per unit for each product.
2. Calculate the throughput return per hour of bottleneck resource.
3. Rank the products in order of the priority in which they should be produced, starting with
the product that generates the highest return per hour first.
4. Calculate the optimum production plan, allocating the bottleneck resource to each one
in order, being sure not to exceed the maximum demand for any of the products.
It is worth noting here that you often see another step carried out between Steps 2 and 3
above. This is the calculation of the throughput accounting ratio for each product. Thus
far, ratios have not been discussed, and while I am planning on mentioning them later, I
have never seen the point of inserting this extra step when working out the optimum
production plan. The ranking of the products using the return per factory hour will always
produce the same ranking as that produced using the throughput accounting ratio, so it
doesn’t really matter whether you use the return or the ratio.
                                          E       F       G
Throughput per unit                      $60     $40     $45
Hours per unit on bottleneck resource      5       4       3
Return per factory hour                  $12     $10     $15
Ranking                                    2       3       1
It is worth noting that, before the time taken on the bottleneck resource was taken into
account, product E appeared to be the most profitable because it generated the highest
throughput per unit. However, applying the theory of constraints, the system’s bottleneck
must be exploited by using it to produce the products that maximise throughput per hour
first (Step 2 of the five focusing steps). This means that product G should be produced in
priority to E.
In practice, Step 3 will be followed by making sure that the optimum production plan is
adhered to throughout the whole system, with no machine making more units than can be
absorbed by the bottleneck, and sticking to the priorities decided.
When answering a question like this in an exam it is useful to draw up a small table, like
the one shown below. This means that the marker can follow your logic and award all
possible marks, even if you have made an error along the way.
Product    No of units    Hours per unit    Total hours    Return per hour    Throughput
G              40,000             3            120,000            $15         $1,800,000
E              30,000             5            150,000            $12         $1,800,000
F              12,500             4             50,000            $10           $500,000
                                               320,000                        $4,100,000
Each time you allocate time on the bottleneck resource to a product, you have to ask
yourself how many hours you still have available. In this example, there were enough hours
to produce the full quota for G and E. However, when you got to F, you could see that out of
the 320,000 hours available, 270,000 had been used up (120,000 + 150,000), leaving only
50,000 hours spare.
Therefore, the number of units of F that could be produced was a balancing figure – 50,000
hours divided by the four hours each unit requires – ie 12,500 units.
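The allocation logic in the table above can be sketched as a short routine. The return per hour, hours per unit, maximum demands for G and E, and the 320,000 hours available are the figures that appear above; the maximum demand for F is not given in the question as reproduced here, so a deliberately large figure is assumed purely so that F becomes the balancing product.

# Beta Co figures from the answer above; F's maximum demand is an assumption for illustration
products = [
    # (name, return per bottleneck hour, hours per unit, maximum demand in units)
    ('G', 15, 3, 40_000),
    ('E', 12, 5, 30_000),
    ('F', 10, 4, 999_999),  # assumed: demand exceeds the hours left over
]
hours_available = 320_000

# Step 3: rank by return per bottleneck hour; Step 4: allocate hours in that order
plan = []
for name, ret, hrs, demand in sorted(products, key=lambda p: p[1], reverse=True):
    units = min(demand, hours_available // hrs)   # never exceed demand or the hours remaining
    hours_used = units * hrs
    hours_available -= hours_used
    plan.append((name, units, hours_used, ret * hours_used))

for name, units, hours_used, throughput in plan:
    print(name, units, hours_used, throughput)
# G 40000 120000 1800000; E 30000 150000 1800000; F 12500 50000 500000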
The above example concentrates on Steps 2 and 3 of the five focusing steps. I now want to
look at an example of the application of Steps 4 and 5. I have kept it simple by assuming
that the organisation only makes one product, as it is the principle that is important here,
rather than the numbers. The example also demonstrates once again how to identify the
bottleneck resource (Step 1) and then shows how a bottleneck may be elevated, but will
then be replaced by another. It also shows that it may not always be financially viable to
elevate a bottleneck.
Example 2
Cat Co makes a product using three machines – X, Y and Z. The capacity of each machine
is as follows:
(The capacity table for machines X, Y and Z is not reproduced here; as noted in the answer below, machine Z has the lowest capacity, at 500 units per week.)
The demand for the product is 1,000 units per week. For every additional unit sold per
week, net present value increases by $50,000. Cat Co is considering the following possible
purchases (they are not mutually exclusive):
Purchase 1 Replace machine X with a newer model. This will increase capacity to 1,100
units per week and costs $6m.
Purchase 2 Invest in a second machine Y, increasing capacity by 550 units per week. The
cost of this machine would be $6.8m.
Purchase 3 Upgrade machine Z at a cost of $7.5m, thereby increasing capacity to 1,050
units.
Required:
Which is Cat Co’s best course of action?
Answer
First, it is necessary to identify the system’s bottleneck resource. Clearly, this is machine
Z, which only has the capacity to produce 500 units per week. Purchase 3 is therefore the
starting point when considering the logical choices that face Cat Co. It would never be logical to consider either Purchase 1 or 2 in isolation because neither machine X nor machine Y is the starting bottleneck. Let’s have a look at how the capacity
of the business increases with the choices that are available to it.
(The table showing the capacity of machines X, Y and Z, and demand, after each successive purchase is not reproduced here; in it, the bottleneck resource at each stage was marked with an asterisk.)
From the table above, it can be seen that once a bottleneck is elevated, it is then replaced
by another bottleneck until ultimately market demand constrains production. At this point,
it would be necessary to look beyond production and consider how to increase market
demand by, for example, increasing advertising of the product.
In order to make a decision as to which of the machines should be purchased, if any, the
financial viability of the three options should be calculated.
(The workings for the three options – buy Z; buy Z and Y; buy Z, Y and X – are not reproduced here. For each option, the benefit of the additional units that could be sold, at $50,000 per unit, is compared with the cumulative purchase cost, starting with machine Z’s cost of $7.5m.)
The company should therefore invest in all three machines if it has enough cash to do so.
The example of Cat Co demonstrates the fact that, as one bottleneck is elevated, another
one appears. It also shows that elevating a bottleneck is not always financially viable. If
Cat Co was only able to afford machine Z, it would be better off making no investment at
all because if Z alone is invested in, another bottleneck appears too quickly for the initial
investment cost to be recouped.
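The reasoning can be sketched numerically. Because the capacity table is not reproduced above, the weekly capacities of machines X and Y used below are assumptions chosen purely for illustration; only machine Z's capacity of 500 units, the demand of 1,000 units, the $50,000 benefit per extra unit sold and the purchase costs come from the question.

# Given in the question: Z = 500 units/week, demand = 1,000 units/week, $50,000 benefit per extra unit
# Assumed for illustration only: the capacities of X and Y
capacity = {'X': 800, 'Y': 600, 'Z': 500}          # X and Y figures are assumptions
demand = 1_000
benefit_per_unit = 50_000

def weekly_output(cap):
    # Output is limited by the tightest constraint, including market demand
    return min(min(cap.values()), demand)

base_output = weekly_output(capacity)               # 500 units, constrained by Z

options = [
    ('Buy Z',        {'Z': 1_050},                                   7_500_000),
    ('Buy Z & Y',    {'Z': 1_050, 'Y': 600 + 550},                   7_500_000 + 6_800_000),
    ('Buy Z, Y & X', {'Z': 1_050, 'Y': 600 + 550, 'X': 1_100},       7_500_000 + 6_800_000 + 6_000_000),
]

for name, upgrades, cost in options:
    new_capacity = {**capacity, **upgrades}
    extra_units = weekly_output(new_capacity) - base_output
    net_benefit = extra_units * benefit_per_unit - cost
    print(name, extra_units, net_benefit)

With these assumed capacities, buying Z alone shows a negative net benefit, while buying all three machines shows the largest positive net benefit, which is consistent with the conclusion above.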
RATIOS
I want to finish off by briefly mentioning throughput ratios. There are three main ratios that
are calculated: (1) return per factory hour, (2) cost per factory hour and (3) the throughput
accounting ratio.
(1) Return per factory hour
Return per factory hour = throughput per unit ÷ the product’s time on the bottleneck resource. As we saw in Example 1, the return per factory hour needs to be calculated for each product.
(2) Cost per factory hour
Cost per factory hour = total factory costs ÷ total time available on the bottleneck resource.
The ‘total factory cost’ is simply the ‘operational expense’ of the organisation referred to in
the previous article. If the organisation was a service organisation, we would simply call it
‘total operational expense’ or something similar. The cost per factory hour is across the
whole factory and therefore only needs to be calculated once.
(3) Throughput accounting ratio
Throughput accounting ratio = return per factory hour ÷ cost per factory hour.
In any organisation, you would expect the throughput accounting ratio to be greater than
1. This means that the rate at which the organisation is generating cash from sales of this
product is greater than the rate at which it is incurring costs. It follows on, then, that if the
ratio is less than 1, this is not the case, and changes need to be made quickly.
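A minimal sketch pulling the three ratios together; the figures are assumed for illustration and are not taken from the examples above.

# Illustrative figures only (assumed)
throughput_per_unit = 60.0          # selling price less direct materials
hours_per_unit = 2.0                # time on the bottleneck resource
total_factory_cost = 900_000.0      # the 'operational expense'
bottleneck_hours_available = 40_000.0

return_per_factory_hour = throughput_per_unit / hours_per_unit                   # (1) = $30
cost_per_factory_hour = total_factory_cost / bottleneck_hours_available          # (2) = $22.50
throughput_accounting_ratio = return_per_factory_hour / cost_per_factory_hour    # (3) = 1.33

# A ratio above 1 means cash is generated from sales faster than costs are incurred
print(return_per_factory_hour, cost_per_factory_hour, throughput_accounting_ratio)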
CONCLUSION
At this point, I’m hopeful that you are now looking forward to reading The Goal as soon as
possible and that you have a better understanding of the theory of constraints and
throughput accounting, which you can put into practice by tackling some questions.
TRANSFER PRICING
There is no doubt that transfer pricing is an area that candidates find difficult. It’s not
surprising, then, that when it was examined in June 2014’s F5 exam, answers were not
always very good.
The purpose of this article is to strip transfer pricing back to the basics and consider, first,
why transfer pricing is important; secondly, the general principles that should be applied
when setting a transfer price; and thirdly, an approach to tackle exam questions in this
area, specifically the question from June 2014’s exam paper. We will talk about transfer
pricing here in terms of two divisions trading with each other. However, don’t forget that
these principles apply equally to two companies within the same group trading with each
other.
This article assumes that transfer prices will be negotiated between the two parties. It
does not look at alternative methods such as dual pricing, for example. This is because, in
F5, the primary focus is on working out a sensible transfer price or range of transfer prices,
rather than different techniques to setting transfer prices.
It is essential to understand that transfer prices are only important in so far as they
encourage divisions to trade in a way that maximises profits for the company as a whole.
The fact is that the effects of inter-divisional trading are wiped out on consolidation
anyway. Hence, all that really matters is the total value of external sales compared to the
total costs of the company. So, while transfer prices matter for the behaviour they encourage, the transfer price itself has no direct effect on group profit, since the selling division’s sales (a credit in the company accounts) will be cancelled out by the buying division’s purchases (a debit in the company accounts) and both figures will disappear altogether. All that will be left will be the profit, which is merely the external selling price less any costs incurred by both divisions in producing the goods, irrespective of which division incurred them.
As well as transfer prices needing to be set at a level that maximises company profits,
they also need to be set in a way that is compliant with tax laws, allows for performance
evaluation of both divisions and staff/managers, and is fair and therefore motivational. A
little more detail is given on each of these points below:
1. If your company is based in more than one country and it has divisions in different
countries that are trading with each other, the price that one division charges the
other will affect the profit that each of those divisions makes. In turn, given that tax is
based on profits, a division will pay more or less tax depending on the transfer prices
that have been set. While you don’t need to worry about the detail of this for the F5
exam, it’s such an important point that it’s simply impossible not to mention it when
discussing why transfer pricing is important.
2. From point 1, you can see that the transfer price set affects the profit that a division
makes. In turn, the profit that a division makes is often a key figure used when
assessing the performance of a division. This will certainly be the case if return on
investment (ROI) or residual income (RI) is used to measure performance.
Consequently, a division may, for example, be told by head office that it has to buy
components from another division, even though that division charges a higher price
than an external company. This will lead to lower profits and make the buying
division’s performance look poorer than it would otherwise be. The selling division, on
the other hand, will appear to be performing better. This may lead to poor decisions
being made by the company.
3. If this is the case, the manager and staff of that division are going to become unhappy.
Often, their pay will be linked to the performance of the division. If divisional
performance is poor because of something that the manager and staff cannot control,
and they are consequently paid a smaller bonus for example, they are going to become
frustrated and lack the motivation required to do the job well. This will then have a
knock-on effect to the real performance of the division. As well as being seen not to
do well because of the impact of high transfer prices on ROI and RI, the division really
will perform less well.
The impact of transfer prices could be considered further but these points are sufficient for
the level of understanding needed for the F5 exam. Let us now go on to consider the
general principles that you should understand about transfer pricing. Again, more detail
could be given here and these are, to some extent, oversimplified. However, this level of
detail is sufficient for the F5 exam.
Spare capacity
If there is spare capacity, then, for any sales that are made by using that spare capacity,
the opportunity cost is zero. This is because workers and machines are not fully utilised.
So, where a selling division has spare capacity the minimum transfer price is effectively
just marginal cost. However, this minimum transfer price is probably not going to be one
that will make the managers happy as they will want to earn additional profits. So, you
would expect them to try and negotiate a higher price that incorporates an element of
profit.
No spare capacity
If the seller doesn’t have any spare capacity, or it doesn’t have enough spare capacity to
meet all external demand and internal demand, then the next question to consider is: how
can the opportunity cost be calculated? Given that opportunity cost represents
contribution foregone, it will be the amount required in order to put the selling division in
the same position as they would have been in had they sold outside of the group. Rather than specifically working out an ‘opportunity cost’ figure, it’s easier to stand back and take a logical approach rather than a rule-based one.
Logically, the buying division must be charged the same price as the external buyer would
pay, less any reduction for cost savings that result from supplying internally. These
reductions might reflect, for example, packaging and delivery costs that are not incurred if
the product is supplied internally to another division. It is not really necessary to start breaking the transfer price down into marginal cost and opportunity cost in this situation. Simply:
(i) establish the price the product could have been sold for outside the group
(ii) establish any cost savings that arise from supplying internally, and
(iii) deduct (ii) from (i) to arrive at the minimum transfer price.
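The logic of the minimum transfer price can be sketched as follows; the figures are assumed for illustration and are not taken from the exam question discussed later.

# Assumed illustrative figures
external_selling_price = 50.0       # price per unit the selling division earns outside the group
marginal_cost = 30.0                # variable cost per unit
savings_selling_internally = 4.0    # eg packaging and delivery not incurred on internal sales

def minimum_transfer_price(has_spare_capacity):
    if has_spare_capacity:
        # No external sale is lost, so the opportunity cost is zero
        return marginal_cost
    # Otherwise the seller must be no worse off than selling externally
    return external_selling_price - savings_selling_internally

print(minimum_transfer_price(True))    # 30.0 - spare capacity: marginal cost
print(minimum_transfer_price(False))   # 46.0 - full capacity: external price less savings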
At this point, we could start distinguishing between perfect and imperfect markets, but
this is not necessary in F5. There will be enough information given in a question for you to
work out what the external price is without focusing on the market structure.
We have assumed here that production constraints will result in fewer sales of the same
product to external customers. This may not be the case; perhaps, instead, production
would have to be moved away from producing a different product. If this is the case the
opportunity cost, being the contribution foregone, is simply the shadow price of the scarce
resource.
In situations where there is no spare capacity, the minimum transfer price is such that the
selling division would make just as much profit from selling internally as selling externally.
Therefore, it reflects the price that they would actually be happy to sell at. They shouldn’t
expect to make higher profits on internal sales than on external sales.
Thus far, we have only talked in terms of principles and, while it is important to understand
these, it is equally as important to be able to apply them. The following question came up
in June 2014’s exam. It was actually a 20-mark question with the first 10 marks in Part (a)
examining divisional performance measurement and the second 10 marks in (b) examining
transfer pricing. Parts of the question that were only relevant to Part (a) have been omitted
here. The question read as follows:
W Co is a trading company with two divisions: the design division, which designs wind
turbines and supplies the designs to customers under licences and the Gearbox division,
which manufactures gearboxes for the car industry.
C Co manufactures components for gearboxes. It sells the components globally and also
supplies W Co with components for its Gearbox manufacturing division.
The financial results for the two companies for the year ended 31 May 2014 are as follows:
                                    W Co                              C Co
                       Design division    Gearbox division
                            $'000               $'000                $'000
Sales to Gearbox division                                             7,550
                                                                     15,560
(Only these figures from the results table are recoverable here; the remaining lines are not reproduced.)
(b) C Co is currently working to full capacity. The Rotech group’s policy is that group
companies and divisions must always make internal sales first before selling outside of the
group. Similarly, purchases must be made from within the group wherever possible.
However, the group divisions and companies are allowed to negotiate their own transfer
prices without interference from head office.
C Co has always charged the same price to the Gearbox division as it does to its external
customers. However, after being offered a 5% lower price for similar components from
an external supplier, the manager of the Gearbox division feels strongly that the transfer
price is too high and should be reduced. C Co currently satisfies 60% of the external
demand for its components. Its variable costs represent 40% of the total revenue for the
internal sales of the components.
Required:
Advise, using suitable calculations, the total transfer price or prices at which the
components should be supplied to the Gearbox division from C Co. (10 marks)
Approach
1. As always, you should begin by reading the requirement. In this case, it is very specific as it asks you to ‘advise, using suitable calculations…’ In a question like this, it would actually be impossible to ‘advise’ meaningfully without calculations, and an answer without them would score very few marks. However, this wording has been added in to
provide assistance. In transfer pricing questions, you will sometimes be asked to
calculate a transfer price/range of transfer prices for one unit of a product. However,
in this case, you are being asked to calculate the total transfer price for the internal
sales. You don’t have enough information to work out a price per unit.
2. Allocate your time. Given that this is a 10-mark question then, since it is a three-hour
exam, the total time that should be spent on this question is 18 minutes.
3. Work through the scenario, highlighting or underlining key points as you go through.
When tackling Part (a) you would already have noted that C Co makes $7.55m of sales
to the Gearbox Division (and you should have noted who the buying division was and
who the selling division was). Then, in Part (b), the first sentence tells you that C Co is
currently working to full capacity. Highlight this; it’s a key point, as you should be able
to tell now. Next, you are told that the two divisions must trade with each other before
trading outside the group. Again, this is a key point as it tells you that, unless the
company is considering changing this policy, C Co is going to meet all of the Gearbox
division’s needs.
Next, you are told that the divisions can negotiate their own transfer prices, so you
know that the price(s) you should suggest will be based purely on negotiation.
Finally, you are given information to help you to work out maximum and minimum
transfer prices. You are told that the Gearbox division can buy the components from
an external supplier for 5% cheaper than C Co sells them for. Therefore, you can work
out the maximum price that the division will want to pay for the components. Then,
you are given information about the marginal cost of making the components, the level of external demand for them and the price they can be sold for to external customers. You have to work all of these figures out but the calculations are quite basic. These figures will enable you to calculate the minimum prices that C Co will want to sell its components for; there are two separate prices as, when you work the figures through, it
becomes clear that, if C Co sold purely to the external market, it would still have some
spare capacity to sell to the Gearbox division. So, the opportunity cost for some of the
sales is zero, but not for the other portion of them.
4. Having actively read through the scenario, you are now ready to begin writing your
answer. You should work through in a logical order. Consider the transfer from both C
Co’s perspective (the minimum transfer price), then Gearbox division’s perspective
(the maximum transfer price), although it doesn’t matter which one you deal with first.
Head up your paragraphs so that your answer does not simply become a sea of words.
Also, by heading up each one separately, it helps you to remain focused on fully
discussing that perspective first. Finally, consider the overall position, which in this
case is to suggest a sensible range of transfer prices for the sale. There is no single
definitive answer but, as is often the case, a range of prices that would be acceptable.
The suggested solution is shown below.
Always remember that you should only show calculations that actually have some
relevance to the answer. In this exam, many candidates actually worked out figures
that were of no relevance to anything. Such calculations did not score marks.
REPRODUCTION OF ANSWER
In total, therefore, C Co will want to charge at least $6,224,000 for its sales to the Gearbox
division.
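The published answer is not reproduced in full above, but the $6,224,000 figure can be derived from the scenario. The sketch below assumes that the $15,560,000 shown in the results table is C Co’s total sales, so that its external sales are $8,010,000; that assumption aside, every other figure comes from the question.

# Figures from the question (in $'000); external sales are derived on the stated assumption
internal_sales = 7_550            # C Co's current sales to the Gearbox division
total_sales = 15_560              # assumed to be C Co's total sales
external_sales = total_sales - internal_sales                     # 8,010
external_demand = external_sales / 0.60                           # C Co satisfies 60% of external demand: 13,350
unsatisfied_external_demand = external_demand - external_sales    # 5,340

# Sales C Co could divert to the external market: minimum price is the external price
# (C Co currently charges the Gearbox division the same price as external customers)
priced_at_external_price = min(internal_sales, unsatisfied_external_demand)   # 5,340

# For the remainder there is no external buyer, so the minimum price is marginal cost
remainder = internal_sales - priced_at_external_price                          # 2,210
marginal_cost_of_remainder = remainder * 0.40   # variable costs are 40% of internal revenue: 884

minimum_total_transfer_price = priced_at_external_price + marginal_cost_of_remainder
print(minimum_total_transfer_price)             # 6,224, ie $6,224,000

The maximum the Gearbox division would be prepared to pay is 5% below C Co’s current price, ie 0.95 x $7,550,000 = $7,172,500, which gives the sensible range within which the transfer price should be negotiated.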
SUMMARY
The level of detail given in this article reflects the level of knowledge required at F5 as
regards transfer pricing questions of this nature. It’s important to understand why transfer
pricing both does and doesn’t matter and it is important to be able to work out a
reasonable transfer price/range of transfer prices.
The thing to remember is that transfer pricing is actually mostly about common sense. You
don’t really need to learn any of the specific principles if you understand what it is trying to
achieve: the trading of divisions with each other for the benefit of the company as a whole.
If the scenario in a question was different, you may have to consider how transfer prices
should be set to optimise the profits of the group overall. Here, it was not an issue as
group policy was that the two divisions had to trade with each other, so whether this was
actually the best thing for the company was not called into question. In some questions,
however, it could be, so bear in mind that this would be a slightly different requirement.
Always read the requirement carefully to see exactly what you are being asked to do.
You should note that the Paper F5 syllabus examines 'environmental management
accounting’ rather than ‘environmental accounting’. Environmental accounting is a broader
term that encompasses the provision of environment-related information both externally
and internally. It focuses on reports required for shareholders and other stakeholders, as
well of the provision of management information. Environmental management accounting,
on the other hand, is a subset of environmental accounting. It focuses on information
required for decision making within the organisation, although much of the information it
generates could also be used for external reporting.
The aim of this article is to give a general introduction on the area of environmental
management accounting, followed by a discussion of the first of the two requirements
listed above.
Many of you reading this article still won’t be entirely clear on what environmental
management accounting actually is. You will not be alone! There is no single textbook
definition for it, although there are many long-winded, jargon ridden ones available. Before
we get into the unavoidable jargon, the easiest way to approach it in the first place is to
step back and ask ourselves what management accounting itself is. Management accounts
give us an analysis of the performance of a business and are ideally prepared on a timely
basis so that we get up-to-date management information. They break down each of our
different business segments (in a larger business) in a high level of detail. This information
is then used to assess the business’s historic performance and, moving forward, how it can be improved in the future.
Environmental management accounting is simply a specialised part of the management
accounts that focuses on things such as the cost of energy and water and the disposal of
waste and effluent. It is important to note at this point that the focus of environmental
management accounting is not solely on financial costs. It includes consideration of
matters such as the costs vs benefits of buying from suppliers who are more
environmentally aware, or the effect on the public image of the company from failure to
comply with environmental regulations.
Once the costs have been identified and information accumulated on how many customers
are using the gym, it may actually be established that some customers are using more than
one towel on a single visit to the gym. The gym could drive forward change by informing
customers that they need to pay for a second towel if they need one. Given that this
approach will be seen as ‘environmentally-friendly’, most customers would not argue with
its introduction. Nor would most of them want to pay for the cost of a second towel. The
costs to be saved by the company from this new policy would include both the energy
savings from having to run fewer washing machines all the time and the staff costs of
those people collecting the towels and operating the machines. Presumably, since the
towels are being washed less frequently, they will need to be replaced by new ones less
often as well.
In addition to these savings to the company, however, are the all-important savings to the
environment since less power and cotton (or whatever materials the towels are made
from) is now being used, and the scarce resources of our planet are therefore being
conserved. Lastly, the gym is also seen as an environmentally friendly organisation and
this, in turn, may attract more customers and increase revenues. Just a little bit of
management accounting (and common sense!) can achieve all these things. While I always
like to minimise the use of jargon, in order to be fully versed on what environmental
management accounting is really seen by the profession as encompassing today, it is
necessary to consider a couple of the most widely accepted definitions of it.
The UNDSD made what became a widely accepted distinction between two types of information: physical information and monetary information. Hence, they broadly defined EMA to be the identification, collection, analysis and use of two types of information for internal decision making: physical information on the use and flows of energy, water and materials (including wastes), and monetary information on environment-related costs, earnings and savings.
To summarise then, for the purposes of clarifying the coverage of the Paper F5 syllabus,
my belief is that EMA is internally not externally focused and the Paper F5 syllabus should,
therefore, focus on information for internal decision making only. It should not be
concerned with how environmental information is reported to stakeholders, although it
could include consideration of how such information could be reported internally. For
example, Hansen and Mendoza (1999) stated that environmental costs are incurred
because of poor quality controls. Therefore, they advocate the use of a periodical
environmental cost report that is produced in the format of a cost of quality report, with
each category of cost being expressed as a percentage of sales revenues or operating
costs so that comparisons can be made between different periods and/or organisations.
The categories of costs would be as follows: environmental prevention costs, environmental detection costs, environmental internal failure costs and environmental external failure costs.
But the management of environmental costs can be a difficult process. This is because
first, just as EMA is difficult to define, so too are the actual costs involved. Second, having
defined them, some of the costs are difficult to separate out and identify. Third, the costs then need to be controlled, but this can only be done if they have been correctly identified in
the first place. Each of these issues is dealt with in turn below.
The UNDSD, on the other hand, described environmental costs as comprising:
costs incurred to protect the environment, eg measures taken to prevent pollution and
costs of wasted material, capital and labour, ie inefficiencies in the production
process.
These definitions do not contradict each other; they just look at the costs from slightly
different angles. As a Paper F5 student, you should be aware that definitions of
environmental costs vary greatly, with some being very narrow and some being far wider.
In 2003, the UNDSD identified four management accounting techniques for the
identification and allocation of environmental costs: input/output analysis, flow cost accounting, activity-based costing and lifecycle costing. These are referred to later under
‘different methods of accounting for environmental costs’.
I will therefore use some basic examples of easy-to-understand environmental costs when
considering how an organisation may go about controlling such costs. Let us consider an
organisation whose main environmental costs are as follows:
Waste
There are lots of environmental costs associated with waste. For example, the costs of
unused raw materials and disposal; taxes for landfill; fines for compliance failures such as
pollution. It is possible to identify how much material is wasted in production by using the
‘mass balance’ approach, whereby the weight of materials bought is compared to the
product yield. From this process, potential cost savings may be identified. In addition to
these monetary costs to the organisation, waste has environmental costs in terms of lost
land resources (because waste has been buried) and the generation of greenhouse gases
in the form of methane.
Water
You have probably never thought about it but businesses actually pay for water twice –
first, to buy it and second, to dispose of it. If savings are to be made in terms of reduced
water bills, it is important for organisations to identify where water is used and how
consumption can be decreased.
Energy
Often, energy costs can be reduced significantly at very little cost. Environmental
management accounts may help to identify inefficiencies and wasteful practices and,
therefore, opportunities for cost savings.
Transport and travel
Again, environmental management accounting can often help to identify savings in terms
of business travel and transport of goods and materials. At a simple level, a business can
invest in more fuel-efficient vehicles, for example.
Consumables and raw materials
These costs are usually easy to identify and discussions with senior managers may help to
identify where savings can be made. For example, toner cartridges for printers could be
refilled rather than replaced.
This should produce a saving both in terms of the financial cost for the organisation and a
waste saving for the environment (toner cartridges are difficult to dispose of and less
waste is created this way).
ACTIVITY-BASED COSTING
ABC allocates internal costs to cost centres and cost drivers on the basis of the activities
that give rise to the costs. In an environmental accounting context, it distinguishes
between environment-related costs, which can be attributed to joint cost centres, and
environment-driven costs, which tend to be hidden in general overheads.
LIFECYCLE COSTING
Within the context of environmental accounting, lifecycle costing is a technique which
requires the full environmental consequences, and, therefore, costs, arising from
production of a product to be taken into account across its whole lifecycle, literally ‘from
cradle to grave’.
SUMMARY
I hope you now have a clearer idea about exactly what environmental management
accounting is and why it’s important. While I have tried to give some simple, practical
examples and explanations, a certain amount of jargon is unavoidable in this subject area.
Enjoy your further reading.
One danger of decentralisation is that managers may use their decision-making freedom to
make decisions that are not in the best interests of the overall company (so called
dysfunctional decisions). To redress this problem, senior managers generally introduce
systems of performance measurement to ensure – among other things – that decisions
made by junior managers are in the best interests of the company as a whole. Example 1
details different degrees of decentralisation and typical financial performance measures
employed.
EXAMPLE 1
Structure              Manager has authority over                Typical financial performance measures
Cost centre            Decisions over costs                      Standard costing variances
Profit centre*         Decisions over costs and revenues         Controllable profit
Investment centre*     Decisions over costs, revenues and
                       capital investment                        Return on investment, residual income
* These two structures are often referred to as divisions – divisionalisation refers to the
delegation of profit-making responsibility.
Whichever structure is used, a good divisional performance measure should:
provide incentive to the divisional manager to make decisions which are in the best interests of the overall company (goal congruence)
only include factors for which the manager (division) can be held accountable
recognise the long-term as well as the short-term objectives of the organisation.
Cost centres
Standard costing variance analysis is commonly used in the measurement of cost centre
performance. It gives a detailed explanation of why costs may have departed from
standard. Although commonly used, it is not without its problems. It focuses almost
entirely on short-term cost minimisation which may be at odds with other objectives, for
example, quality or delivery time. Also, it is important to be clear about who is responsible
for which variance – is the production manager or the purchasing manager (or both)
responsible for raw material price variances? There is also the problem with setting
standards in the first place – variances can only be as good as the standards on which they
are based.
Profit centres
Controllable profit statements are commonly used in profit centres. A proforma statement
is given in Example 2.
Sales revenue: external                         XXX
Sales revenue: internal                         XXX
Total divisional revenue                        XXX
Controllable divisional variable costs         (XXX)
Controllable divisional fixed costs            (XXX)
Controllable divisional profit                  XXX
Traceable divisional fixed costs               (XXX)
Traceable divisional profit                     XXX
The major issue with such statements is the difficulty in deciding what is controllable or
traceable. When assessing the performance of a manager we should only consider costs
and revenues under the control of that manager, and hence judge the manager on
controllable profit. In assessing the success of the division, our focus should be on costs
and revenues that are traceable to the division and hence judge the division on traceable
profit. For example, depreciation on divisional machinery would not be included as a
controllable cost in a profit centre. This is because the manager has no control over
investment in fixed assets. It would, however, be included as a traceable fixed cost in
assessing the performance of the division.
Investment centres
In an investment centre, managers have the responsibilities of a profit centre plus
responsibility for capital investment. Two measures of divisional performance are
commonly used:
1. Return on investment (ROI) % = controllable (traceable) profit/controllable (traceable)
investment.
2. Residual income = controllable (traceable) profit – an imputed interest charge on
controllable (traceable) investment.
Note: Imputed interest is calculated by multiplying the controllable (traceable) investment
by the cost of capital.
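A small helper makes the two measures and their relationship explicit; the figures used in the usage comments are those of Division X in Example 3 below.

def roi(profit, investment):
    # Return on investment as a percentage of controllable (traceable) investment
    return 100.0 * profit / investment

def residual_income(profit, investment, cost_of_capital):
    # Profit less an imputed interest charge on controllable (traceable) investment
    return profit - investment * cost_of_capital

# Example 3 figures: profit $2.2m, net assets $10m, cost of capital 10%
print(roi(2.2, 10.0))                        # 22.0 (%)
print(residual_income(2.2, 10.0, 0.10))      # 1.2 ($m)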
EXAMPLE 3
Division X is a division of XYZ plc. Its net assets are currently $10m and it earns a profit of
$2.2m per annum. Division X's cost of capital is 10% per annum. The division is considering
two proposals.
Proposal 1 involves investing a further $1m in fixed assets to earn an annual profit of
$0.15m.
Proposal 2 involves the disposal of assets at their net book value of $2.3m. This would
lead to a reduction in profits of $0.3m.
Proceeds from the disposal of assets would be credited to head office not Division X.
Required: Calculate the current ROI and residual income for Division X and show how they
would change under each of the two proposals.
Current situation
Return on investment
ROI = $2.2m/$10m = 22%
Residual income
Profit                                     $2.2m
Imputed interest charge ($10m x 10%)      ($1.0m)
Residual income                            $1.2m
Comment: ROI exceeds the cost of capital and residual income is positive. The division is
performing well.
Proposal 1
Return on investment
ROI = $2.35m/$11m = 21.4%
Residual income
Profit                                     $2.35m
Imputed interest charge ($11m x 10%)      ($1.10m)
Residual income                            $1.25m
Comment: The new investment earns a return of 15% ($0.15m/$1m), which is above the cost of capital, so it is in the company’s interest to accept it. However, ROI falls from 22% to 21.4%, which could lead the divisional manager to reject the proposal – a dysfunctional decision. Residual income rises from $1.2m to $1.25m, so this performance measure should lead to a goal congruent decision.
Proposal 2
Return on investment
ROI = $1.9m/$7.7m = 24.7%
Residual income
Profit                                     $1.90m
Imputed interest charge ($7.7m x 10%)     ($0.77m)
Residual income                            $1.13m
Comment: In simple terms the disposal is not acceptable to the company. The existing
assets have a rate of return of 13.0% ($0.3m/$2.3m) which is greater than the cost of
capital and hence should not be disposed of. However, divisional ROI rises and this could
lead to the divisional manager accepting Proposal 2. This would be a dysfunctional
decision. Residual income decreases if Proposal 2 is adopted and once again this
performance measure should lead to goal congruent decisions.
Return on investment is a relative measure and hence suffers accordingly. For example,
assume you could borrow unlimited amounts of money from the bank at a cost of 10% per
annum. Would you rather borrow $100 and invest it at a 25% rate of return or borrow $1m
and invest it at a rate of return of 15%?
Although the smaller investment has the higher percentage rate of return, it would only
give you an absolute net return (residual income) of $15 per annum after borrowing costs.
The bigger investment would give a net return of $50,000. Residual income, being an
absolute measure, would lead you to select the project that maximises your wealth.
Residual income also ties in with net present value, theoretically the best way to make
investment decisions. The present value of a project's residual income equals the project's
net present value. In the long run, companies that maximise residual income will also
maximise net present value and in turn shareholder wealth. Residual income does,
however, experience problems in comparing managerial performance in divisions of
different sizes. The manager of the larger division will generally show a higher residual
income because of the size of the division rather than superior managerial performance.
EXAMPLE 4
PQR plc is considering opening a new division to manage a new investment project.
Forecast cashflows of the new project are as follows:
Year                              0        1        2        3        4        5
Forecast net cash flow ($m)     (5.0)     1.4      1.4      1.4      1.4      1.4
PQR's cost of capital is 10% per annum. Straight line depreciation is used.
Required: Calculate the project's net present value and its projected ROI and residual
income over its five-year life.
NPV
Year                              0        1        2        3        4        5
Forecast net cash flow ($m)     (5.0)     1.4      1.4      1.4      1.4      1.4
Present value factor at 10%      1.00     0.91     0.83     0.75     0.68     0.62
Present value ($m)              (5.0)     1.27     1.16     1.05     0.95     0.87
NPV = $0.30m
ROI
Year                                           1        2        3        4        5
Opening investment at net book value ($m)     5.0      4.0      3.0      2.0      1.0
Forecast net cash flow ($m)                   1.4      1.4      1.4      1.4      1.4
Straight line depreciation ($m)              (1.0)    (1.0)    (1.0)    (1.0)    (1.0)
Profit ($m)                                   0.4      0.4      0.4      0.4      0.4
ROI (profit/opening investment)                8%      10%      13.3%    20%      40%
Residual income
Year                                           1        2        3        4        5
Profit (as above) ($m)                        0.4      0.4      0.4      0.4      0.4
Imputed interest (opening investment x 10%)  (0.5)    (0.4)    (0.3)    (0.2)    (0.1)
Residual income ($m)                         (0.1)     0.0      0.1      0.2      0.3
Comment: This example demonstrates two points. Firstly, it illustrates the potential
conflict between NPV and the two divisional performance measures. This project has a
positive NPV and should increase shareholder wealth. However, the poor ROI and residual
income figures in the first year could lead managers to reject the project. Secondly, it
shows the tendency for both ROI and residual income to improve over time. Despite
constant annual cashflows, both measures improve over time as the net book value of
assets falls. This could encourage managers to retain outdated assets.
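A short sketch reproduces the Example 4 workings and shows why both measures drift upwards as the asset’s net book value falls. Using exact discount factors the NPV comes out at about $0.31m; the table above, which uses two-decimal factors, shows $0.30m.

# Example 4 figures: $5m invested, $1.4m cash flow per year for 5 years, 10% cost of capital
investment, annual_cash_flow, life, cost_of_capital = 5.0, 1.4, 5, 0.10
depreciation = investment / life                          # straight line: $1m per year

npv = -investment + sum(annual_cash_flow / (1 + cost_of_capital) ** t for t in range(1, life + 1))
print(round(npv, 2))                                      # about 0.31 ($m)

for year in range(1, life + 1):
    opening_nbv = investment - depreciation * (year - 1)  # 5.0, 4.0, 3.0, 2.0, 1.0
    profit = annual_cash_flow - depreciation              # 0.4 each year
    roi = 100.0 * profit / opening_nbv                    # 8%, 10%, 13.3%, 20%, 40%
    ri = profit - cost_of_capital * opening_nbv           # (0.1), 0.0, 0.1, 0.2, 0.3
    print(year, round(roi, 1), round(ri, 1))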
In recent years, the trend in performance measurement has been towards a broader view
of performance, covering both financial and non-financial indicators. The most well-known
of these approaches is the balanced scorecard proposed by Kaplan and Norton. This
approach attempts to overcome the following weaknesses of traditional performance
measures:
Single factor measures such as ROI and residual income are unlikely to give a full picture
of divisional performance.
Perspective               Question
Financial                 How do we look to shareholders?
Customer                  How do customers see us?
Internal business         What must we excel at?
Learning and growth       Can we continue to improve and create value?
The term 'balanced' is used because managerial performance is assessed under all four
headings. Each organisation has to decide which performance measures to use under each
heading. Areas to measure should relate to an organisation's critical success factors.
Critical success factors (CSFs) are performance requirements which are fundamental to an
organisation's success (for example innovation in a consumer electronics company) and
can usually be identified from an organisation's mission statement, objectives and
strategy. Key performance indicators (KPIs) are measurements of achievement of the
chosen critical success factors. Key performance indicators should be:
specific (ie measure profitability rather than 'financial performance', a term which
could mean different things to different people)
measurable (ie be capable of having a measure placed upon it, for example, number of
customer complaints rather than the 'level of customer satisfaction')
relevant, in that they measure achievement of a critical success factor
Example 5 demonstrates a balanced scorecard approach to performance measurement in a
fictitious private sector college training ACCA students.
EXAMPLE 5
Perspective             Critical success factor     Key performance indicators
Financial               Cashflow                    Actual v budget; receivable days
Learning and growth     Innovation                  % of sales from products less than 1 year old
                        Information technology      Number of online enrolments
(The rows for the remaining perspectives are not reproduced here.)
Allowing for trade-offs between KPIs can also be problematic. How should the organisation
judge the manager who has improved in every area apart from, say, financial performance?
One solution to this problem is to require managers to improve in all areas, and not allow
trade-offs between the different measures.
COST-VOLUME-PROFIT ANALYSIS
Cost-volume-profit analysis looks primarily at the effects of differing levels of activity on
the financial results of a business
In any business, or, indeed, in life in general, hindsight is a beautiful thing. If only we could
look into a crystal ball and find out exactly how many customers were going to buy our
product, we would be able to make perfect business decisions and maximise profits.
Take a restaurant, for example. If the owners knew exactly how many customers would
come in each evening and the number and type of meals that they would order, they could
ensure that staffing levels were exactly accurate and no waste occurred in the kitchen.
The reality is, of course, that decisions such as staffing and food purchases have to be
made on the basis of estimates, with these estimates being based on past experience.
While management accounting information can’t really help much with the crystal ball, it
can be of use in providing the answers to questions about the consequences of different
courses of action. One of the most important decisions that needs to be made before any
business even starts is ‘how much do we need to sell in order to break-even?’ By ‘break-
even’ we mean simply covering all our costs without making a profit.
This type of analysis is known as ‘cost-volume-profit analysis’ (CVP analysis) and the
purpose of this article is to cover some of the straightforward calculations and graphs
required for this part of the Paper F5 syllabus, while also considering the assumptions
which underlie any such analysis.
Company A can, therefore, say with some degree of certainty that the contribution per unit (sales
price less variable costs) is $20. Company A may also have fixed costs of $200,000 per
annum, which again, are fairly easy to predict. However, when we ask the question: ‘Will
the company make a profit in that year?’, the answer is ‘We don’t know’. We don’t know
because we don’t know the sales volume for the year. However, we can work out how
many sales the business needs to make in order to make a profit and this is where CVP
analysis begins.
Note: total fixed costs are used rather than unit fixed costs since unit fixed costs will vary
depending on the level of output.
It would, therefore, be inappropriate to use a unit fixed cost since this would vary
depending on output. Sales price and variable costs, on the other hand, are assumed to
remain constant for all levels of output in the short-run, and, therefore, unit costs are
appropriate.
Continuing with our equation, we now set P to zero in order to find out how many items we need to sell in order to make no profit, ie to break even:
(50Q) – (30Q) – 200,000 = 0
20Q – 200,000 = 0
20Q = 200,000
Q = 10,000 units.
The equation has given us our answer. If Company A sells fewer than 10,000 units, it will make a loss; if it sells exactly 10,000 units, it will break even, and if it sells more than 10,000 units, it will make a profit.
The contribution margin method uses a little bit of algebra to rewrite our equation above, concentrating on the use of the ‘contribution margin’: break-even volume = fixed costs ÷ contribution per unit = $200,000 ÷ $20 = 10,000 units.
Hence, it is the difference between the variable cost line and the total cost line that
represents fixed costs. The advantage of this is that it emphasises contribution as it is
represented by the gap between the total revenue and the variable cost lines. This is
shown for Company A in Figure 2.
Finally, a profit–volume graph could be drawn, which emphasises the impact of volume
changes on profit (Figure 3). This is key to the Paper F5 syllabus and is discussed in more
detail later in this article.
Example 1
Company A wants to achieve a target profit of $300,000. The sales volume necessary in
order to achieve this profit can be ascertained using any of the three methods outlined
above. If the equation method is used, the profit of $300,000 is put into the equation rather
than the profit of $0:
(50Q) – (30Q) – 200,000 = 300,000
20Q – 200,000 = 300,000
20Q = 500,000
Q = 25,000 units.
Alternatively, the contribution method can be used: (fixed costs + required profit) ÷ contribution per unit = ($200,000 + $300,000) ÷ $20 = 25,000 units.
Finally, the answer can be read from the graph, although this method becomes clumsier
than the previous two. The profit will be $300,000 where the gap between the total
revenue and total cost line is $300,000, since the gap represents profit (after the break-
even point) or loss (before the break-even point.)
Margin of safety
The margin of safety indicates by how much sales can decrease before a loss occurs, ie it
is the excess of budgeted revenues over break-even revenues. Using Company A as an
example, let’s assume that budgeted sales are 20,000 units. The margin of safety can be
found, in units, as follows:
Budgeted sales – break-even sales = 20,000 – 10,000 = 10,000 units.
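The basic CVP relationships for Company A can be captured in a few lines; all of the figures are those used above (selling price $50, variable cost $30, fixed costs $200,000, budgeted sales of 20,000 units). The margin of safety is also shown as a percentage of budgeted sales, which is how it is often expressed.

selling_price, variable_cost, fixed_costs = 50.0, 30.0, 200_000.0
budgeted_sales_units = 20_000

contribution_per_unit = selling_price - variable_cost                          # $20

break_even_units = fixed_costs / contribution_per_unit                         # 10,000 units
units_for_target_profit = (fixed_costs + 300_000) / contribution_per_unit      # 25,000 units
margin_of_safety_units = budgeted_sales_units - break_even_units               # 10,000 units
margin_of_safety_pct = 100.0 * margin_of_safety_units / budgeted_sales_units   # 50%

print(break_even_units, units_for_target_profit, margin_of_safety_units, margin_of_safety_pct)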
This weighted average C/S ratio can then be used to find CVP information such as break-
even point, margin of safety etc.
Example 2
As well as producing product x described above, Company A also begins producing product
y. The following information is available for both products:
(The table of data for products x and y is not reproduced here.)
The weighted average C/S ratio can once again be calculated by dividing the total expected contribution by the total expected sales.
The C/S ratio is useful in its own right as it tells us what percentage each $ of sales
revenue contributes towards fixed costs; it is also invaluable in helping us to quickly
calculate the break-even point in $ sales revenue, or the sales revenue required to
generate a target profit. The break-even point can now be calculated this way for Company
A:
Of course, such calculations provide only estimated information because they assume that
products x and y are sold in a constant mix of 2x to 1y. In reality, this constant mix is
unlikely to exist and, at times, more y may be sold than x. Such changes in the mix
throughout a period, even if the overall mix for the period is 2:1, will lead to the actual
break-even point being different than anticipated. This point is touched upon again later in
this article.
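Because the data for product y is not reproduced above, the sketch below uses assumed figures for y purely to show the mechanics of the weighted average C/S ratio and the multi-product break-even revenue; product x’s figures are those used for Company A earlier.

# Product x figures are those used for Company A above; product y figures are assumed
fixed_costs = 200_000.0
products = {
    # name: (selling price, variable cost per unit, budgeted units)
    'x': (50.0, 30.0, 20_000),
    'y': (60.0, 45.0, 10_000),   # assumed for illustration (a 2:1 mix with x)
}

total_contribution = sum((p - v) * q for p, v, q in products.values())
total_revenue = sum(p * q for p, v, q in products.values())

weighted_average_cs_ratio = total_contribution / total_revenue
break_even_revenue = fixed_costs / weighted_average_cs_ratio

print(round(weighted_average_cs_ratio, 3))   # contribution earned per $ of sales revenue
print(round(break_even_revenue))             # sales revenue needed to cover fixed costs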
Contribution to sales ratio is often useful in single product situations, and essential in
multi-product situations, to ascertain how much each $ sold actually contributes towards
the fixed costs.
(Table 3, showing cumulative sales revenue and cumulative profit for products x and y used to plot the multi-product profit–volume graph, is not reproduced here.)
In order to draw a multi-product/volume graph it is necessary to work out the C/S ratio of
each product being sold.
See Table 3.
The graph can then be drawn (Figure 3), showing cumulative sales on the x axis and
cumulative profit/loss on the y axis. It can be observed from the graph that, when the
company sells its most profitable product first (x) it breaks even earlier than when it sells
products in a constant mix. The break-even point is the point where each line cuts the x
axis.
In the public sector, the budgeting process can be even more difficult, since the objectives
of the organisation are more difficult to define in a quantifiable way than the objectives of
a private company. For example, a private company's objectives may be to maximise
profit. The meeting of this objective can then be set out in the budget by aiming for a
percentage increase in sales and perhaps the cutting of various costs. If, on the other
hand, you are budgeting for a public sector organisation such as a hospital, then the
objectives may be largely qualitative, such as ensuring that all outpatients are given an
appointment within eight weeks of being referred to the hospital. This is difficult to define
in a quantifiable way, and how it is actually achieved is even more difficult to define.
This leads onto the next reason why budgeting is particularly difficult in the public sector.
Just as objectives are difficult to define quantifiably, so too are the organisation's outputs.
In a private company the output can be measured in terms of sales revenue, for example.
There is a direct relationship between the expenditure that needs to be put in and the level of output that is achieved. In a hospital, on the other hand, it is difficult to define
a quantifiable relationship between inputs and outputs. What is easier to compare is the
relationship between how much cash is available for a particular area and how much cash
is actually needed. Therefore, budgeting naturally focuses on inputs alone, rather than the
relationship between inputs and outputs.
The purpose of this article is to critically evaluate the two main methods for preparing
budgets - the incremental approach and the zero-based approach. Both of these have been
used in both public sector and private sector organisations, with varying degrees of
success.
INCREMENTAL BUDGETING
Incremental budgeting is the traditional budgeting method whereby the budget is prepared
by taking the current period's budget or actual performance as a base, with incremental
amounts then being added for the new budget period. These incremental amounts will
include adjustments for things such as inflation, or planned increases in sales prices and
costs. It is a common misapprehension among students that one of the biggest disadvantages
of incremental budgeting is that it doesn't allow for inflation. Of course it does; by
definition, an 'increment' is an increase of some kind. The current year's budget or actual
performance is a starting point only.
Example
A school will have a sizeable amount in its budget for staff salaries. Let's say that in one
particular year, staff salaries were $1.5m. When the budget is being prepared for the next
year, the headteacher thinks that he will need to employ two new members of staff to
teach languages, who will be paid a salary of $30,000 each (before any pay rises) and also,
that he will need to give all staff members a pay increase of 5%. Therefore, assuming that
the two new staff will receive the increased pay levels, his budget for staff will be $1.638m
[($1.5m +$30k + $30k) x 1.05]
It immediately becomes apparent when using this method in an example like this that,
while being quick and easy, no detailed examination of the salaries already included in the
existing $1.5m has been carried out. This $1.5m has been taken as a given starting point
without questioning it. This brings us onto the reasons why incremental budgeting is not
always seen as a good thing and why, in the 1960s, alternative methods of budgeting
developed. Since I thoroughly believe that Paper F5 students should always go into the
exam with their metaphorical F5 toolbox in their hand, pulling tools out of the box as and
when they need them in order to answer questions, I am going to list the benefits and
drawbacks of both budgeting methods in an easy-to-learn format that should take up less
room in the 'box'. The problem I often find with Paper F5 students is that they think they
can go into the exam without any need for such a toolbox, and while they may be able to
get through some of the numerical questions simply from remembering techniques that
they have learnt in the past, when it comes to written questions, they simply do not have
the depth of knowledge required to answer them properly.
All of these questions are largely answered by breaking the budgeting process down into three distinct stages: activities are identified and described in decision packages; the decision packages are then evaluated and ranked in order of importance; and, finally, the available resources are allocated according to that ranking.
Benefits of ZBB
The benefits of ZBB are substantial. They would have to be, otherwise no organisation
would ever go to the lengths detailed above in order to implement it. These benefits
are set out below:
Since ZBB does not assume that last year's allocation of resources is necessarily
appropriate for the current year, all of the activities of the organisation are re-
evaluated annually from a zero base. Most importantly therefore, inefficient and
obsolete activities are removed, and wasteful spending is curbed. This has got to be
the biggest benefit of zero-based budgeting compared to incremental budgeting and
was the main reason why it was developed in the first place.
By its nature, it encourages a bottom-up approach to budgeting in order for ZBB to be
used in practice. This should encourage motivation of employees.
It challenges the status quo and encourages a questioning attitude among managers.
It responds to changes in the business environment from one year to the next.
Overall, it should result in a more efficient allocation of resources.
Drawbacks of ZBB
Departmental managers may not have the necessary skills to construct decision
packages. They will need training for this and training takes time and money.
In a large organisation, the number of activities will be so large that the amount of
paperwork generated from ZBB will be unmanageable.
Ranking the packages can be difficult, since many activities cannot be compared on
the basis of purely quantitative measures. Qualitative factors need to be incorporated
but this is difficult. Top level management may not have the time or knowledge to rank
what could be thousands of packages. This problem can be somewhat alleviated by
having a hierarchical ranking process, whereby each level of managers rank the
packages of the managers who report to them.
The process of identifying decision packages and determining their purpose, costs and
benefits is massively time consuming and costly. One solution to this problem is to
use incremental budgeting every year and then use ZBB every three to five years, or
when major change occurs. This means that an organisation can benefit from some of
the advantages of ZBB without an annual time and cost implication. Another option is
to use ZBB for some departments but not for others. Certain costs are essential rather
than discretionary and it could be argued that it is pointless to carry out ZBB in
relation to these. For example, heating and lighting costs in a school or hospital are
expenses that will have to be paid, irrespective of the budget amount allocated to
them. Incremental budgeting would seem to be more suitable for costs like these, as
with building repair costs.
Since decisions are made at budget time, managers may feel unable to react to
changes that occur during the year. This could have a detrimental effect on the
business if it fails to react to emerging opportunities and threats.
The organisation's management information systems might be unable to provide the
necessary information.
It could be argued that ZBB is far more suitable for public sector than for private
sector organisations. This is because, firstly, it is far easier to put activities into
decision packages in organisations which undertake set definable activities. Local
government, for example, has set activities including the provision of housing,
schools and local transport. Secondly, it is far more suited to costs that are
discretionary in nature or for support activities. Such costs can be found mostly in not-for-profit organisations or the public sector, or in the service departments of
commercial operations.
CONCLUSION
Since ZBB requires all costs to be justified, it would seem inappropriate to use it for the
entire budgeting process in a commercial organisation. Why take so much time and
resources justifying costs that must be incurred in order to meet basic production needs?
It makes no sense to use such a long-winded process for costs where no discretion can be
exercised anyway. Incremental budgeting is, by comparison, quick and easy to do and
easily understood. However, the use of incremental budgeting indisputably gives rise to
inefficiency, inertia and budgetary slack.
In conclusion, neither budgeting method provides the perfect tool for planning, coordination
and control. However, each method offers something positive to recommend it and one
cannot help but think that the optimal solution lies somewhere between the two.
On the other hand, it may be that changes to the production process have been made, or
that increased quality controls have been introduced, resulting in more items being
rejected. Whatever the cause, it can only be investigated after separate material usage
variances have been calculated for each type of material used and then allocated to a
responsibility centre.
Assuming that the quality of C produced is exactly the same in both instances, the
optimum mix of materials A and B can be decided by looking at the cost of materials A and
B relative to the yield of C.
Therefore, the optimum mix that minimises the cost of the inputs compared to the value of
the outputs is mix 2: 8/20 material A and 12/20 material B. The standard cost per unit of C
is (8 x $20)/19 + (12 x $25)/19 = $24.21. However, if the cost of materials A and B changes
or the selling price for C changes, production managers may deviate from the standard
mix. This would, in these circumstances, be a deliberate act and would result in a
materials mix variance arising. It may be, on the other hand, that the materials mix
changes simply because managers fail to adhere to the standard mix, for whatever reason.
Let us assume now that the standard mix has been set (mix 2) and production of C
commences. 1,850kg of C is produced, using a total of 900kg of material A and 1,100kg of
material B (2,000kg in total). The actual costs of materials A and B were at the standard
costs of $20 and $25 per kg respectively. How do we calculate the materials mix variance?
The variance is worked out by first calculating what the standard cost of our 1,850kg
worth of C would have been if the standard mix had been adhered to, and comparing that
figure to the standard cost of our actual production, using our actual quantities. My
preferred approach has always been to present this information in a table as shown
in Table 1 below. The materials mix variance will be $46,000 – $45,500 = $500 favourable.
Remember: it is essential that, for every variance you calculate, you state whether it is
favourable or adverse. These can be denoted by a clear ‘A’ or ‘F’ but avoid showing an
adverse variance by simply using brackets. This leads to mistakes.
The formula for this is shown below, but if you were to use it, the variance for each type of
material must be calculated separately.
(Actual quantity in standard mix proportions – actual quantity used) x standard cost
As a student, I was never a person to blindly learn formulae and rely on these to get me
through. I truly believe that the key to variance analysis is to understand what is actually
happening. If you understand what the materials mix variance is trying to show, you will
work out how to calculate it. However, for those of you who do prefer to use formulae, the
workings would be as follows:
Material A: (800kg – 900kg) x $20 = $2,000 adverse
Material B: (1,200kg – 1,100kg) x $25 = $2,500 favourable
Total materials mix variance = $500 favourable
Why haven’t I considered the fact that although our materials mix variance is $500
favourable, our changed materials mix may have produced less of C than the standard mix?
Because this, of course, is where the materials yield variance comes into play.
The materials mix variance focuses on inputs, irrespective of outputs. The materials yield
variance, on the other hand, focuses on outputs, taking into account inputs.
Table 1: Materials mix variance
Material A: actual input in standard mix proportions 800kg (8/20 x 2,000kg) x $20 = $16,000;
actual input 900kg x $20 = $18,000; difference $2,000 A
Material B: actual input in standard mix proportions 1,200kg (12/20 x 2,000kg) x $25 = $30,000;
actual input 1,100kg x $25 = $27,500; difference $2,500 F
Total: $46,000 compared with $45,500, giving a materials mix variance of $500 F
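For readers who prefer to see the arithmetic spelled out, the short Python sketch below reproduces the Table 1 calculation. The figures and the standard mix proportions come from the example above; everything else (variable names, the layout of the loop) is purely illustrative.

```python
# Materials mix variance: actual input restated in standard mix proportions,
# compared with actual input in its actual proportions, both at standard cost.
standard_mix = {"A": 8 / 20, "B": 12 / 20}   # standard proportions of total input
standard_cost = {"A": 20, "B": 25}           # $ per kg
actual_usage = {"A": 900, "B": 1_100}        # kg actually used

total_input = sum(actual_usage.values())     # 2,000 kg in total

mix_variance = 0.0
for material in standard_mix:
    qty_in_std_mix = standard_mix[material] * total_input               # 800kg A, 1,200kg B
    difference = (qty_in_std_mix - actual_usage[material]) * standard_cost[material]
    print(f"{material}: {difference:+,.0f}")  # A: -2,000 (adverse), B: +2,500 (favourable)
    mix_variance += difference

print(f"Total mix variance: {mix_variance:+,.0f}")   # +500, ie $500 favourable
```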
Using my preferred method of a table, our calculations would look like Table 2. Note that
actual production of 1,850kg of C requires a total input of 1,947kg of materials A and B
(1,850 x 100/95).
Table 2: Value difference between actual and expected yield at standard cost of C
Material A: standard quantity for the actual yield 780kg (1,947 x 8/20) x $20 = $15,600;
actual quantity used 900kg x $20 = $18,000; difference $2,400 A
Material B: standard quantity for the actual yield 1,168kg (1,947 x 12/20) x $25 = $29,200;
actual quantity used 1,100kg x $25 = $27,500; difference $1,700 F
Again, if you like to learn the formula, this is shown below, although it would have to be
applied separately to each type of material:
(Actual yield – standard yield from actual input of material) x standard cost per unit of
output
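A similar sketch for the yield figures in Table 2 is shown below. It restates the 1,850kg of actual output as a standard input requirement (at the standard yield of 19kg of C from every 20kg of input) and compares that, material by material, with the actual quantities used. The figures come from the example; the code itself is only an illustration.

```python
standard_mix = {"A": 8 / 20, "B": 12 / 20}   # standard input proportions
standard_cost = {"A": 20, "B": 25}           # $ per kg of input
actual_usage = {"A": 900, "B": 1_100}        # kg actually used
actual_output = 1_850                        # kg of C actually produced
standard_yield = 19 / 20                     # 20kg of input should yield 19kg of C

# Standard input needed for the actual output: 1,850 x 100/95, roughly 1,947kg
required_input = actual_output / standard_yield

for material, proportion in standard_mix.items():
    standard_qty = proportion * required_input           # approx. 779kg of A, 1,168kg of B
    difference = (standard_qty - actual_usage[material]) * standard_cost[material]
    # Negative means adverse, positive means favourable; small differences from
    # Table 2 arise because the table rounds the standard quantities to whole kg.
    print(f"{material}: {difference:+,.0f}")
```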
Similarly, poorer quality materials may be more difficult to work with; this may lead to an
adverse labour efficiency variance as the workforce takes longer than expected to
complete the work. This, in turn, could lead to higher overhead costs, and so on.
Fortunately, consequences such as these will occur in the same period as the mix variance
and are therefore more likely to be identified and the problem resolved. Never
underestimate the extent to which a perceived ‘improvement’ in one area (eg a favourable
materials mix variance) can lead to a real deterioration in another area (eg decreased
yield, poorer quality, higher labour costs, lower sales volumes, and ultimately lower
profitability). Always make sure you mention such interdependencies when discussing your
variances in exam questions. The number crunching is relatively simple once you
understand the principles; the higher skills lie in the discussion that surrounds the
numbers.
Typically, conventional costing attempts to work out the cost of producing an item
incorporating the costs of resources that are currently used or consumed. Therefore, for
each unit made the classical variable costs of material, direct labour and variable
overheads are included (the total of these is the marginal cost of production), together
with a share of the fixed production costs. The fixed production costs can be included
using a conventional overhead absorption rate or they can be accounted for using activity-
based costing (ABC). ABC is more complex but almost certainly more accurate. However,
whether conventional overhead treatment or ABC is used the overheads incorporated are
usually based on the budgeted overheads for the current period.
Once the total absorption cost of units has been calculated, a mark-up (or gross profit
percentage) is used to determine the selling price and the profit per unit. The mark-up is
chosen so that if the budgeted sales are achieved, the organisation should make a profit.
1. The product’s price is based on its cost, but no-one might want to buy at that price.
The product might incorporate features which customers do not value and therefore
do not want to pay for, and competitors’ products might be cheaper, or at least offer
better value for money. This flaw is addressed by target costing.
2. The costs incorporated are the current costs only. They are the marginal costs plus a
share of the fixed costs for the current accounting period. There may be other
important costs which are not part of these categories, but without which the goods
could not have been made. Examples include the research and development costs and
any close down costs incurred at the end of the product’s life. Why have these costs
been excluded, particularly when selling prices have to be high enough to ensure that
the product makes a profit. To make a profit, total revenue must exceed total costs in
the long term. This flaw is addressed by lifecycle costing.
TARGET COSTING
Target costing is very much a marketing approach to costing. The Chartered Institute of
Marketing defines marketing as:
‘The management process responsible for identifying, anticipating and satisfying customer
requirements profitably.’
In marketing, customers rule, and marketing departments attempt to find answers to the
following questions:
Are customers homogeneous or can we identify different segments within the market?
What features does each market segment want in the product?
What price are customers willing to pay?
To what competitor products or services are customers comparing ours?
How will we advertise and distribute our products? (There are costs associated with
those activities too.)
Marketing says that there is no point in management, engineers and accountants sitting in
darkened rooms dreaming up products, putting them into production, adding on, say 50%
for mark-up then hoping those products sell. At best this is corporate arrogance; at worst it
is corporate suicide.
Note that marketing is not a passive approach, and management cannot simply rely on
customers volunteering their ideas. Management should anticipate customer requirements,
perhaps by developing prototypes and using other market research techniques.
Of course, there will probably be a range of products and prices, but the company cannot
dictate to the market, customers or competitors. There are powerful constraints on the
product and its price and the company has to make the required product, sell it at an
acceptable and competitive price and, at the same time, make a profit. If the profit is going
to be adequate, the costs have to be sufficiently low. Therefore, instead of starting with
the cost and working to the selling price by adding on the expected margin, target costing
will start with the selling price of a particular product and work back to the cost by
removing the profit element. This means that the company has to find ways of not
exceeding that cost.
For example, if a company normally expects a mark-up on cost of 50% and estimates that a
new product will sell successfully at a price of $12, then the maximum cost of production
should be $8:
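The back-calculation is simple enough to be expressed in a couple of lines. The sketch below uses the $12 selling price and 50% mark-up from the example; it is illustrative only.

```python
selling_price = 12.00    # price the market will accept
markup_on_cost = 0.50    # required mark-up on cost

# Target costing works backwards from the price to the allowable cost
target_cost = selling_price / (1 + markup_on_cost)
print(f"Maximum allowable cost: ${target_cost:.2f}")   # $8.00
```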
This is a powerful discipline imposed on the company. One of its main results is an
increased emphasis on value engineering, which distinguishes between two types of value:
use value (the ability of the product or service to do what it sets out to do – its
function) and
esteem value (the status that ownership or use confers).
The aim of value engineering is to maximise use and esteem values while reducing costs.
For example, if you are selling perfume, the design of its packaging is important. The
perfume could be held in a plain glass (or plastic) bottle, and although that would not
damage the use value of the product, it would damage the esteem value. The company
would be unwise to try to reduce costs by economising too much on packaging. Similarly,
if a company is trying to reduce the costs of manufacturing a car, there might be many
components that could be satisfactorily replaced by cheaper or simpler ones without
damaging either use or esteem values. However, there will be some components that are
vital to use value (perhaps elements of the suspension system) and others which endow
the product with esteem value (the quality of the paint and the upholstery).
LIFECYCLE COSTING
As mentioned above, target costing places great emphasis on controlling costs by good
product design and production planning, but those up-front activities also cause costs.
There might be other costs incurred after a product is sold such as warranty costs and
plant decommissioning. When seeking to make a profit on a product it is essential that the
total revenue arising from the product exceeds total costs, whether these costs are
incurred before, during or after the product is produced. This is the concept of life cycle
costing, and it is important to realise that target costs can be driven down by attacking
any of the costs that relate to any part of a product’s life. The cost phases of a product run
from design and development, through manufacture, sale and after-sales support (such as
warranty work), to final decommissioning or disposal.
All costs should be taken into account when working out the cost of a unit and its
profitability.
Attention to all costs will help to reduce the cost per unit and will help an organisation
achieve its target cost.
Many costs will be linked. For example, more attention to design can reduce
manufacturing and warranty costs. More attention to training can reduce machine
maintenance costs. More attention to waste disposal during manufacturing can
reduce end-of-life costs.
Costs are committed and incurred at very different times. A committed cost is a cost
that will be incurred in the future because of decisions that have already been made.
Costs are incurred only when a resource is used.
Typically, the following pattern of costs committed and costs incurred is observed:
The diagram shows that by the end of the design phase approximately 80% of costs are
committed. For example, the design will largely dictate material, labour and machine
costs. The company can try to haggle with suppliers over the cost of components but if, for
example, the design specifies 10 units of a certain component, negotiating with suppliers
is likely to have only a small overall effect on costs. A bigger cost decrease would be
obtained if the design had specified only eight units of the component. The design phase
locks the company in to most future costs and it is this phase which gives the company its
greatest opportunities to reduce those costs.
Conventional costing records costs only as they are incurred, but recording those costs is
different to controlling those costs and performance management depends on cost control,
not cost measurement.
Required
(a) What is the target cost of the product?
(b) What is the original lifecycle cost per unit and is the product worth making on that
basis?
(c) If the additional amount were spent on design, what is the maximum manufacturing
cost per unit that could be tolerated if the company is to earn its required mark-up?
Solution
The target cost of the product can be calculated as follows:
(a) Cost + Mark-up = Selling price
100% 40% 140%
$15 $6 $21
(b) The original life cycle cost per unit = ($50,000 + (10,000 x $10) + $20,000)/10,000 = $17
This cost/unit is above the target cost per unit, so the product is not worth making.
(c) Maximum total cost per unit = $15. Some of this will be caused by the design and end of
life costs:
Therefore, the maximum manufacturing cost per unit would have to fall from $10 to
($15 – $8.50) = $6.50.
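The solution can be checked with a few lines of arithmetic. The sketch below uses the figures quoted in the solution ($21 selling price, 40% mark-up, $50,000 design costs, $10 manufacturing cost per unit, $20,000 end-of-life costs, 10,000 units); the $15,000 of additional design expenditure is not stated explicitly and is inferred from the $8.50 per unit figure, so treat it as an assumption.

```python
units = 10_000
selling_price = 21.00
markup_on_cost = 0.40

design_costs = 50_000
manufacturing_cost_per_unit = 10.00
end_of_life_costs = 20_000
additional_design_spend = 15_000   # assumed: implied by the $8.50/unit figure in part (c)

# (a) Target cost: strip the mark-up out of the selling price
target_cost = selling_price / (1 + markup_on_cost)                        # $15.00

# (b) Original lifecycle cost per unit: all costs over the whole life, divided by units
lifecycle_cost = (design_costs + units * manufacturing_cost_per_unit
                  + end_of_life_costs) / units                            # $17.00, above target

# (c) With the extra design spend, how much room is left for manufacturing cost?
non_manufacturing_per_unit = (design_costs + additional_design_spend
                              + end_of_life_costs) / units                # $8.50
max_manufacturing_cost = target_cost - non_manufacturing_per_unit         # $6.50

print(target_cost, lifecycle_cost, max_manufacturing_cost)
```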
ACTIVITY-BASED COSTING
Ken Garrett demystifies Activity-based costing and provides some tips leading up to the
all-important exams
Conventional costing distinguishes between variable and fixed costs. Typically, it is
assumed that variable costs vary with the number of units of output (and that these costs
are proportional to the output level) whereas fixed costs do not vary with output. This is
often an over-simplification of how costs actually behave. For example, variable costs per
unit often increase at high levels of production where overtime premiums might have to be
paid or when material becomes scarce. Fixed costs are usually fixed only over certain
ranges of activity, often stepping up as additional manufacturing resources are employed
to allow high volumes to be produced.
Variable costs per unit can at least be measured, and the sum of the variable costs per
unit is the marginal cost per unit. These are the extra costs caused when one more unit is
produced. However, there has always been a problem dealing with fixed production costs
such as factory rent, heating, supervision and so on. Making a unit does not cause more
fixed costs, yet production cannot take place without these costs being incurred. To say
that the cost of producing a unit consists of marginal costs only will understate the true
cost of production and this can lead to problems. For example, if the selling price is based
on a mark-up on cost, then the company needs to make sure that all production costs are
covered by the selling price. Additionally, focusing exclusively on marginal costs may
cause companies to overlook important savings that might result from better controlled
fixed costs.
The conventional approach to dealing with fixed overhead production costs is to assume
that the various cost types can be lumped together and a single overhead absorption rate
derived. The absorption rate is usually presented in terms of overhead cost per labour
hour, or cost per machine hour. This approach is likely to be an over-simplification, but it
has the merit of being relatively quick and easy.
EXAMPLE 1
See Table 1 below.
The budgeted labour hours must be 112,000 hours. This is derived from the budgeted
outputs of 20,000 ordinary units which each take five hours (100,000 hours) to produce,
and 2,000 deluxe units which each take six hours (12,000 hours).
Therefore, the fixed overhead absorption rate per labour hour is $224,000/112,000 =
$2/hour.
The costing of the two products can be continued by adding in fixed overhead costs to
obtain the total absorption cost for each of the products.
The conventional approach outlined above is satisfactory if the following conditions apply:
1. Fixed costs are relatively immaterial compared to material and labour costs. This is
the case in manufacturing environments which do not rely on sophisticated and
expensive facilities and machinery.
2. Most fixed costs accrue with time.
3. There are long production runs of identical products with little customisation.
Table 1, Example 1
Budget                         Ordinary units            Deluxe units
Units produced                     20,000                    2,000
Costs per unit:                       $                         $
Material                             10                        12
Labour                               60                        72
                              (5 hours at $12/hour)    (6 hours at $12/hour)
Variable overhead                     5                         6
                              (5 hours at $1/hour)     (6 hours at $1/hour)
Marginal costs                       75                        90
Table 2, Example 1
Budget                         Ordinary units            Deluxe units
Marginal costs                       75                        90
Fixed overheads                      10                        12
                              (5 hours at $2/hour)     (6 hours at $2/hour)
Total absorption cost/unit           85                       102
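The conventional figures in Tables 1 and 2 can be reproduced with a short script. The sketch below simply derives the plant-wide rate of $2 per labour hour and adds the absorbed overhead to the marginal cost of each product; the data are those of Example 1 and the layout is illustrative.

```python
fixed_overheads = 224_000

products = {
    # marginal cost per unit, labour hours per unit, budgeted units
    "ordinary": {"marginal_cost": 75, "hours": 5, "units": 20_000},
    "deluxe":   {"marginal_cost": 90, "hours": 6, "units": 2_000},
}

# Single plant-wide absorption rate based on budgeted labour hours
total_hours = sum(p["hours"] * p["units"] for p in products.values())   # 112,000 hours
rate_per_hour = fixed_overheads / total_hours                           # $2 per hour

for name, p in products.items():
    total_cost = p["marginal_cost"] + p["hours"] * rate_per_hour
    print(name, total_cost)    # ordinary: 85, deluxe: 102
```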
Instead of offering customers the ability to specify products, many companies offer an
extensive range of products, hoping that one member of the range will match the
requirements of a particular market segment. In Example 1, the company offers two
products: ordinary and deluxe. The company knows that demand for the deluxe range will
be low, but hopes that the price premium it can charge will still allow it to make a good
profit, even on a low volume item. However, the deluxe product could consume resources
which are not properly reflected by the time it takes to make those units.
These developments in manufacturing and marketing mean that the conventional way of
treating fixed overheads might not be good enough. Companies need to know the causes of
overheads, and need to realise that many of their ‘fixed costs’ might not be fixed at all.
They need to try to assign costs to products or services on the basis of the resources they
consume.
EXAMPLE 2
An analysis of the fixed overheads of $224,000 shows that they consist of:
Set-up costs 90,000
Costs driven by the number of items of material handled 92,000
Other overheads 42,000
Total 224,000
Ordinary units are produced in long production runs, with each batch consisting of 2,000
units.
Deluxe units are produced in short production runs, with each batch consisting of 100
units.
What we want to do is to get a more accurate estimate of what each unit costs to produce,
and to do this we have to examine what activities are necessary to produce each unit,
because activities usually have a cost attached. This is the basis of activity-based costing
(ABC). The old approach of simply pretending that fixed costs are incurred because of the
passage of time, and that they can therefore be accounted for on the basis of labour (or
machine) time spent on each unit, is no longer good enough. Diverse, flexible
manufacturing demands a more accurate approach to costing.
EXAMPLE 3
Applying these steps to the fixed cost breakdown shown in Example 2 results in the
following analysis:
4. Each ordinary unit takes 20 items of material; each deluxe unit takes 30 items of material.
5. Each ordinary unit will cost $0.2 x 20 = $4/unit; each deluxe unit will cost $0.2 x 30 = $6/unit.
Other fixed overheads will have to be absorbed on a labour hour rate because there is no
information provided that would allow a better approach: $42,000/112,000 hours = $0.375
per labour hour.
The ABC approach to costing therefore results in the figures shown in Table 3 below.
Check: total costs accounted for if all goes according to budget = 20,000 x 82.375 + 128.25
x 2,000 = $1,904,000, as before.
Table 3, Example 3
Budget                                       Ordinary units    Deluxe units
                                                   $                $
Marginal costs (as before)                       75.00            90.00
Fixed overheads:
Set-up costs                                      1.50            30.00
Costs driven by items of material
(20/30 items at $0.20 per item)                   4.00             6.00
Other (5/6 hours at $0.375 per hour)              1.875            2.25
Total absorption cost/unit                       82.375          128.25
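The ABC figures in Table 3 can be reproduced by assigning each overhead pool to its cost driver and charging products according to their use of that driver. In the sketch below the pool totals (set-up costs of $90,000, material-driven costs of $92,000 and other overheads of $42,000) are inferred from the per-unit figures and the $224,000 total, so treat them as assumptions; the rest of the data comes from the examples above.

```python
products = {
    "ordinary": {"marginal_cost": 75, "hours": 5, "units": 20_000,
                 "batch_size": 2_000, "items_per_unit": 20},
    "deluxe":   {"marginal_cost": 90, "hours": 6, "units": 2_000,
                 "batch_size": 100, "items_per_unit": 30},
}

setup_pool, material_pool, other_pool = 90_000, 92_000, 42_000   # assumed breakdown

# Driver volumes
total_batches = sum(p["units"] / p["batch_size"] for p in products.values())    # 30 batches
total_items = sum(p["units"] * p["items_per_unit"] for p in products.values())  # 460,000 items
total_hours = sum(p["units"] * p["hours"] for p in products.values())           # 112,000 hours

cost_per_setup = setup_pool / total_batches    # $3,000 per batch
cost_per_item = material_pool / total_items    # $0.20 per item
cost_per_hour = other_pool / total_hours       # $0.375 per labour hour

for name, p in products.items():
    unit_cost = (p["marginal_cost"]
                 + cost_per_setup / p["batch_size"]      # set-up cost spread over the batch
                 + cost_per_item * p["items_per_unit"]
                 + cost_per_hour * p["hours"])
    print(name, round(unit_cost, 3))    # ordinary: 82.375, deluxe: 128.25
```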
You will see that the ABC approach substantially increases the cost of making a deluxe
unit. This is primarily because the deluxe units are made in small batches. Each batch
causes an expensive set-up, but that cost is then spread over all the units produced in that
batch – whether few (deluxe) or many (ordinary). It can only be right that the effort and
cost incurred in producing small batches is reflected in the cost per unit produced. There
would, for example, be little point in producing deluxe units at all if their higher selling
price did not justify the higher costs incurred.
In addition to estimating more accurately the true cost of production, ABC will also give a
better indication of where cost savings can be made. Remember, the title of Paper F5
is Performance Management, implying that accountants should be proactive in improving
performance rather than passively measuring costs. For example, it’s clear that a
substantial part of the cost of producing deluxe units is set-up costs (almost 25% of the
deluxe units’ total costs).
Working on the principle that large cost savings are likely to be found in large cost
elements, management’s attention will start to focus on how this cost could be reduced.
For example, is there any reason why deluxe units have to be produced in batches of only
100? A batch size of 200 units would dramatically reduce those set-up costs.
The traditional approach to fixed overhead absorption has the merit of being simple to
calculate and apply. However, simplicity does not justify the production and use of
information that might be wrong or misleading.
ABC undoubtedly requires an organisation to spend time and effort investigating more fully
what causes it to incur costs, and then to use that detailed information for costing
purposes. But understanding the drivers of costs must be an essential part of good
performance management.
TRANSFER PRICING
Transfer prices are almost inevitably needed whenever a business is divided into more
than one department or division
In accounting, many amounts can be legitimately calculated in a number of different ways
and can be correctly represented by a number of different values. For example, both
marginal and total absorption cost can simultaneously give the correct cost of production,
but which version of cost you should use depends on what you are trying to do.
Similarly, the basis on which fixed overheads are apportioned and absorbed into
production can radically change perceived profitability. The danger is that decisions are
often based on accounting figures, and if the figures themselves are somewhat arbitrary,
so too will be the decisions based on them. You should, therefore, always be careful when
using accounting information, not just because information could have been deliberately
manipulated and presented in a way which misleads, but also because the information
depends on the assumptions and the methodology used to create it. Transfer pricing
provides excellent examples of the coexistence of alternative legitimate views, and
illustrates how the use of inappropriate figures can create misconceptions and can lead to
wrong decisions.
Example 1
Take the following scenario shown in Table 1, in which Division A makes components for a
cost of $30, and these are transferred to Division B for $50. Division B buys the
components in at $50, incurs own costs of $20, and then sells to outside customers for
$90.
As things stand, each division makes a profit of $20/unit, and it should be easy to see that
the group will make a profit of $40/unit. You can calculate this either by simply adding the
two divisional profits together ($20 + $20 = $40) or subtracting both own costs from final
revenue ($90 – $30 – $20 = $40).
You will appreciate that for every $1 increase in the transfer price, Division A will make $1
more profit, and Division B will make $1 less. Mathematically, the group will make the
same profit, but these changing profits can result in each division making different
decisions, and as a result of those decisions, group profits might be affected.
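The profit-shifting effect is easy to see in a few lines of code. The sketch below uses the Example 1 figures (own costs of $30 and $20 and a final selling price of $90) and recomputes the divisional and group profits for a handful of transfer prices; names and layout are illustrative only.

```python
def divisional_profits(transfer_price, cost_a=30, cost_b=20, final_price=90):
    """Profit per unit for Division A, Division B and the group as a whole."""
    profit_a = transfer_price - cost_a
    profit_b = final_price - transfer_price - cost_b
    return profit_a, profit_b, profit_a + profit_b

for tp in (40, 50, 60):
    print(tp, divisional_profits(tp))
# Group profit stays at $40 per unit whatever the transfer price;
# only its split between the two divisions changes.
```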
Consider the knock-on effects that different transfer prices and different profits might have
on the divisions:
Example 2
See Table 2. The following rules on transfer prices are necessary to get both parties to
trade with one another:
For the transfer-out division, the transfer price must be greater than (or equal to) the
marginal cost of production. This allows the transfer-out division to make a contribution (or
at least not make a negative one). In Example 2, the transfer price must be no lower than
$18. A transfer price of $19, for example, would not be as popular with Division A as would
a transfer price of $50, but at least it offers the prospect of contribution, eventual break-
even and profit.
For the transfer-in division, the transfer in price plus its own marginal costs must be no
greater than the marginal revenue earned from outside sales. This allows that division to
make a contribution (or at least not make a negative one). In Example 2, the transfer price
must be no higher than $80 as:
$80 (transfer-in price) + $10 (own variable cost) = $90 (marginal revenue)
Usually, this rule is restated to say that the transfer price should be no greater than the
net marginal revenue of the receiving division, where the net marginal revenue is marginal
revenue less own marginal costs. Here, net marginal revenues = $80 = $90 – $10.
So, a transfer price of $50 (transfer price ≥ $18, ≤ $80), as set above, will work insofar as
both parties will find it worth trading at that price.
As well as permitting interdivisional trade to happen at all, this rule will also give the
correct economic decision because if the final selling price is too low for the group to
make a positive contribution, no operative transfer price is available.
So, in Example 2, if the final selling price were to fall to $25, the group could not make a
contribution because $25 is less than the group’s total variable costs of $18 + $10. The
transfer price that would make both divisions trade must be no less than $18 (for Division
A) but no greater than $15 (net marginal revenue for Division B = $25 – $10), so clearly no
workable transfer price is available.
If, however, the final selling price were to fall to $29, the group could make a $1
contribution per unit. A viable transfer price has to be at least $18 (for Division A) and no
greater than $19 (net marginal revenue for Division B = $29 – $10). A transfer price of
$18.50, say, would work fine.
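The two rules can be combined into a simple check for whether a workable transfer price exists at all. The sketch below uses the Example 2 figures (an $18 marginal cost in Division A and $10 of own variable cost in Division B); it is a rough illustration rather than a prescription.

```python
def transfer_price_range(marginal_cost_a, own_cost_b, final_price):
    """Return (lower, upper) bounds for a workable transfer price, or None if there is none."""
    lower = marginal_cost_a              # Division A must at least cover its marginal cost
    upper = final_price - own_cost_b     # Division B's net marginal revenue
    return (lower, upper) if lower <= upper else None

print(transfer_price_range(18, 10, 90))   # (18, 80) -> e.g. $50 works
print(transfer_price_range(18, 10, 29))   # (18, 19) -> e.g. $18.50 works
print(transfer_price_range(18, 10, 25))   # None     -> no workable transfer price
```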
Therefore, all that head office needs to do is to impose a transfer price within the
appropriate range, confident that both divisions will choose to act in a way that maximises
group profit. Head office therefore gives each division the impression of making
autonomous decisions, but in reality each division has been manipulated into making the
choices head office wants.
Note, however, that although we have established the range of transfer prices that would
work correctly in terms of economic decision making, there is still plenty of scope for
argument, distortion and dissatisfaction. Example 2 suggested a transfer price between
$18 and $80, but exactly where the transfer price is set in that range vastly alters the
perceived profitability and performance of each sub-unit. The higher the transfer price, the
better Division A looks and the worse Division B looks (and vice versa).
In addition, a transfer price range as derived in Example 1 and 2 will often be dynamic. It
will keep changing as both variable production costs and final selling prices change, and
this can be difficult to manage. In practice, management would often prefer to have a
simpler transfer price rule and a more stable transfer price – but this simplicity runs the
risk of poorer decisions being made.
1 Variable cost
A transfer price set equal to the variable cost of the transferring division produces very
good economic decisions. If the transfer price is $18, Division B’s marginal costs would be
$28 (each unit costs $18 to buy in then incurs another $10 of variable cost). The group’s
marginal costs are also $28, so there will be goal congruence between Division B’s wish to
maximise its profits and the group maximising its profits. If marginal revenue exceeds
marginal costs for Division B, it will also do so for the group.
Although good economic decisions are likely to result, a transfer price equal to marginal
cost has certain drawbacks:
Division A will make a loss as its fixed costs cannot be covered. This is demotivating.
There is little incentive for Division A to be efficient if all marginal costs are covered by the
transfer price. Inefficiencies in Division A will be passed up to Division B. Therefore, if
marginal cost is going to be used as a transfer price, at least make it standard marginal
cost, so that efficiencies and inefficiencies stay within the divisions responsible for them.
The difficulty with full cost, full cost plus, variable cost plus, and market price is that they
all result in fixed costs and profits being perceived as marginal costs as goods are
transferred to Division B. Division B therefore has the wrong data to enable it to make good
economic decisions for the group – even if it wanted to. In fact, once you get away from a
transfer price equal to the variable cost in the transferring division, there is always the risk
of dysfunctional decisions being made unless an upper limit – equal to the net marginal
revenue in the receiving division – is also imposed.
VARIATIONS ON VARIABLE COST
There are two approaches to transfer pricing which try to preserve the economic
information inherent in variable costs while permitting the transferring division to make
profits, and allowing better performance valuation. However, both methods are somewhat
complicated.
Variable cost plus lump sum. In this approach, transfers are made at variable cost. Then,
periodically, a transfer is made between the two divisions (Credit Division A, Debit Division
B) to account for fixed costs and profit. It is argued that Division B has the correct
cumulative variable cost data to make good decisions, yet the lump sum transfers allow
the divisions ultimately to be treated fairly with respect to performance measurement. The
size of the periodic transfer would be linked to the quantity or value of goods transferred.
Dual pricing. In this approach, Division A transfers out at cost plus a mark up (perhaps
market price), and Division B transfers in at variable cost. Therefore, Division A can make a
motivating profit, while Division B has good economic data about cumulative group
variable costs. Obviously, the divisional current accounts won’t agree, and some period-
end adjustments will be needed to reconcile those and to eliminate fictitious
interdivisional profits.
Basically, the transfer price must be as good as the outside selling price to get Division B
to transfer inside the group.
CONCLUSION
You might have thought that transfer prices were matters of little importance: debits in
one division, matching credits in another, but with no overall effect on group profitability.
Mathematically this might be the case, but only at the most elementary level. Transfer
prices are vitally important when motivation, decision making, performance measurement,
and investment decisions are taken into account – and these are the factors which so
often separate successful from unsuccessful businesses.
NOT-FOR-PROFIT ORGANISATIONS
Relevant to Papers F1, F5, F7, F8, P2, P3 and P5
Several papers in the ACCA Qualification may feature questions on not-for-profit
organisations. At the Fundamentals level, these include Papers F1, F5, F7 and F8. At the
Professional level they include Papers P2, P3 and P5. Although many of the principles of
management and organisation apply to most business models, not-for-profit organisations
have numerous features that distinguish them from the profit maximising organisations
often assumed in conventional economic theory.
This article explains some of these features. The first part of the article broadly describes
the generic characteristics of not-for-profit organisations.
The second part of the article takes a specific and deeper look at charities, which are one
of the more important types of not-for-profit organisations.
CORPORATE FORM
Not-for-profit organisations can be established as
incorporated or unincorporated bodies. The common
business forms include the following:
in the public sector, they may be departments or agents of government
some public sector bodies are established as private companies limited by guarantee,
including the Financial Services Authority (the UK financial services regulator)
in the private sector they may be established as cooperatives, industrial or provident
societies (a specific type of mutual organisation, owned by its members), by trust, as
limited companies or simply as clubs or associations.
A cooperative is a body owned by its members, and usually governed on the basis of ‘one
member, one vote’. A trust is an entity specifically constituted to achieve certain
objectives. The trustees are appointed by the founders to manage the funds and ensure
compliance with the objectives of the trust. Many private foundations (charities that do not
solicit funds from the general public) are set up as trusts.
Not-for-profit organisations are invariably set up with a purpose or set of purposes in mind,
and the organisation will be expected to pursue such objectives beyond the lifetime of the
founders. On establishment, the founders will decide on the type of organisation and put in
place a constitution that will reflect their goals. The constitutional base of the
organisation will be dictated by its legal form.
As with any type of organisation, the objectives of not-for-profit organisations are laid
down by the founders and their successors in management.
These purposes are most often dictated by the underlying founding principles.
Within these broad objectives, however, the focus of activity may change quite markedly.
For example, during the 1990s the British Know-How Fund, which was established by the
UK government to provide development aid, switched its focus away from the emerging
central European nations in favour of African nations.
MANAGEMENT
The management structure of not-for-profit organisations resembles that of profit
maximisers, though the terms used to describe certain bodies and officers may differ
somewhat.
While limited companies have a board of directors comprising executive and non-executive
directors, many not-for-profit organisations are managed by a Council or Board of
Management whose role is to ensure adherence to the founding objectives. In recent times
there has been some convergence between how companies and not-for-profit organisations
are managed, including increasing reliance on non-executive officers (notably in respect of
the scrutiny or oversight role) and the employment of ‘career’ executives to run the
business on a daily basis.
CHARITABLE ACTIVITIES
In the UK, charities are regulated by the Charities Act 2006, which sets out in very
broad terms what may be considered to be charitable activities, many of which would
also be regarded as charitable in most other jurisdictions. These
include:
the prevention or relief of poverty
the advancement of education
the advancement of religion
the advancement of health or the saving of lives
the advancement of citizenship or community development
the advancement of the arts, culture, heritage or science
the advancement of amateur sport
the advancement of human
rights, conflict resolution or reconciliation or the promotion of religious or racial
harmony or equality and diversity
the advancement of environmental protection or improvement
the relief of those in need, by reason of youth, age, ill-health, disability, financial
hardship or other disadvantage
the advancement of animal welfare
the promotion of the efficiency of the armed forces of the Crown or of the police, fire
and rescue services or ambulance services
other purposes currently recognised as charitable and any new charitable purposes which
are similar to another charitable purpose.
The activities of charities in England and Wales are regulated by the Charity Commission,
itself a not-for-profit organisation, located in Liverpool. The precise definition of what
constitutes charitable activities differs, of course, from country to country. However, most
of the activities listed above would be considered as charitable, as they would seldom be
associated with commercial organisations.
CORPORATE FORM
Charities differ widely in respect of their size, objectives and activities. For example,
Oxfam is a federal international organisation comprising 13 different bodies across all
continents, while many thousands of charities are local organisations managed and staffed
entirely by volunteers. Unsurprisingly, most of the constituent organisations within Oxfam
operate as limited companies, while local charities would find this form inappropriate and
prefer to be established as associations.
A charity is not forbidden from engaging in commercial activities provided that these
activities fully serve the objectives of the charity. For example, charities such as the
British Heart Foundation, the British Red Cross, and Age Concern all raise funds by
operating chains of retail shops. These shops are profitable businesses, but if a company is
formed to operate the shops, the company would be expected to formally covenant its
entire annual profits to the charity.
Charities with high value non-current assets, such as real estate, usually vest the
ownership of such assets to independent guardian trustees, whose role is to ensure that
the assets are deployed in a manner that reflects the objectives of the charity.
The guardian trustees are empowered to lease land, subject to the provisions of the lease
satisfying requirements laid down by the Charity Commission.
The governing constitution of a charity is normally set down in its rules, which expand on
the purposes of the business. Quite often, the constitution dictates what the organisation
cannot do, as well as what it can do. Charities plan and control their activities with
reference to measures of effectiveness, economy and efficiency. They often publish their
performance outcomes in order to convince the giving public that the good causes that
they support ultimately benefit from charitable activities.
MANAGEMENT
Most charities are managed by a Council, made up entirely of volunteers. These are
broadly equivalent to non-executive directors in limited companies. It is the responsibility
of the Council to chart the medium to long-term strategy of the charity and to ensure that
objectives are met.
Objectives may change over time due to changes in the external environment in which the
charity operates. Barnardos is a children’s charity that was originally founded as Doctor
Barnardo’s Homes, to provide for orphans who could not rely on family support. The
development of welfare services after World War II and the increasing willingness of
families to adopt and foster children resulted in less reliance on the provision of residential
homes for children but greater reliance on other support services. As a result, the
Barnardos charity had to change the way in which it looked at maximising the welfare of
orphaned children.
Local charities are dependent on the support of a more limited population and therefore
have to consider whether their supporters will continue to provide the finance necessary
to operate continuously. For example, a local charity supporting disabled sports could be
profoundly affected by the development of facilities funded by central or local government.
Every charity is confronted by distinctive strategic and operational risks. For example,
many charities staff their shops with the help of unpaid retired people, but
there is some debate as to whether future generations of retired people will be as willing
to do this for nothing. As many charities have to contain operating expenses in order to
ensure that their objectives can be met, it is often difficult or impossible for them to
employ full-time or part-time paid staff to replace volunteer workers. Risks also arise from
the social environment, particularly in times of recession, when members of the public may
be less disposed to give to benefit others as their discretionary household income is
reduced. There is some evidence of ‘charity fatigue’ in the UK. This arises when the public
feel pressurised by so many different competing charities that they feel ill disposed to give
anything to anyone at all.
Risk can take myriad forms – ranging from the specific risks faced by individual companies
(such as financial risk, or the risk of a strike among the workforce), through the current
risks faced by particular industry sectors (such as banking, car manufacturing, or
construction), to more general economic risks resulting from interest rate or currency
fluctuations, and, ultimately, the looming risk of recession. Risk often has negative
connotations, in terms of potential loss, but the potential for greater than expected returns
also often exists.
Clearly, risk is almost always a major variable in real-world corporate decision-making, and
managers ignore its vagaries at their peril. Similarly, trainee accountants require an ability
to identify the presence of risk and incorporate appropriate adjustments into the problem-
solving and decision-making scenarios encountered in the exam hall. While it is unlikely
that the precise probabilities and perfect information which feature in exam questions can
be transferred to real-world scenarios, a knowledge of the relevance and applicability of
such concepts is necessary.
In this first article, the concepts of risk and uncertainty will be introduced together with
the use of probabilities in calculating both expected values and measures of dispersion. In
addition, the attitude to risk of the decision-maker will be examined by considering various
decision-making criteria, and the usefulness of decision trees will also be discussed. In the
second article, more advanced aspects of risk assessment will be addressed, namely the
value of additional information when making decisions, further probability concepts, the
use of data tables, and the concept of value-at-risk.
The basic definition of risk is that the final outcome of a decision, such as an investment,
may differ from that which was expected when the decision was taken. We tend to
distinguish between risk and uncertainty in terms of the availability of probabilities. Risk is
when the probabilities of the possible outcomes are known (such as when tossing a coin or
throwing a dice); uncertainty is where the randomness of outcomes cannot be expressed
in terms of specific probabilities. However, it has been suggested that in the real world, it
is generally not possible to allocate probabilities to potential outcomes, and therefore the
concept of risk is largely redundant. In the artificial scenarios of exam questions, potential
outcomes and probabilities will generally be provided, therefore a knowledge of the basic
concepts of probability and their use will be expected.
PROBABILITY
The term ‘probability’ refers to the likelihood or chance that a certain event will occur,
with potential values ranging from 0 (the event will not occur) to 1 (the event will definitely
occur). For example, the probability of a tail occurring when tossing a coin is 0.5, and the
probability when rolling a dice that it will show a four is 1/6 (0.166). The total of all the
probabilities from all the possible outcomes must equal 1, ie some outcome must occur.
A real world example could be that of a company forecasting potential future sales from
the introduction of a new product in year one (Table 1).
Table 1: Probability of new product sales
Sales revenue    $500,000    $700,000    $1,000,000    $1,250,000    $1,500,000
Probability         0.1         0.2          0.4           0.2           0.1
From Table 1, it is clear that the most likely outcome is that the new product generates
sales of $1,000,000, as that value has the highest probability.
In contrast, with a conditional event, the outcomes of two or more events are related, ie
the outcome of the second event depends on the outcome of the first event. For example,
in Table 1, the company is forecasting sales for the first year of the new product. If,
subsequently, the company attempted to predict the sales revenue for the second year,
then it is likely that the predictions made will depend on the outcome for year one. If the
outcome for year one was sales of $1,500,000, then the predictions for year two are likely
to be more optimistic than if the sales in year one were $500,000.
The availability of information regarding the probabilities of potential outcomes allows the
calculation of both an expected value for the outcome, and a measure of the variability (or
dispersion) of the potential outcomes around the expected value (most typically standard
deviation). This provides us with a measure of risk which can be used to assess the likely
outcome.
Expected value
= ($500,000)(0.1) + ($700,000)(0.2)
+ ($1,000,000)(0.4) + ($1,250,000)(0.2)
+ ($1,500,000)(0.1)
= $50,000 + $140,000 + $400,000
+ $250,000 + $150,000
= $990,000
In this example, the expected value is very close to the most likely outcome, but this is not
necessarily always the case. Moreover, it is likely that the expected value does not
correspond to any of the individual potential outcomes. For example, the average score
from throwing a dice is (1 + 2 + 3 + 4 + 5 + 6) / 6 or 3.5, and the average family (in the UK)
supposedly has 2.4 children. A further point regarding the use of expected values is that
the probabilities are based upon the event occurring repeatedly, whereas, in reality, most
events only occur once.
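The expected value calculation is easy to script, and doing so makes the point that the result is a probability-weighted average rather than an outcome that will actually occur. The sketch below uses the Table 1 figures and is purely illustrative.

```python
# Possible year-one sales and their probabilities (from Table 1)
outcomes = [500_000, 700_000, 1_000_000, 1_250_000, 1_500_000]
probabilities = [0.1, 0.2, 0.4, 0.2, 0.1]

assert abs(sum(probabilities) - 1) < 1e-9   # the probabilities must sum to 1

expected_value = sum(x * p for x, p in zip(outcomes, probabilities))
print(expected_value)   # 990,000: close to, but not equal to, any single outcome
```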
In addition to the expected value, it is also informative to have an idea of the risk or
dispersion of the potential actual outcomes around the expected value. The most common
measure of dispersion is standard deviation (the square root of the variance), which can be
illustrated by the example given in Table 2, concerning the potential returns from two
investments.
INTERPRETATION
The principle of interpretation can be applied to other areas of the syllabus. In Question 3
of the December 2007 exam, candidates were required to interpret sales performance.
Again, it is recommended that you refer to Question 3. Broadly, in this question, the market
was shrinking and the company was struggling a little as a result. It had reduced sales
prices and fought off an 11% fall in the market, losing only 2% of its budgeted sales. This is
a good performance, taking the falling market into consideration.
I would expect candidates to be able to interpret the variances and reach the above
conclusions. So, if you are given the following information:
You should be able to hypothesise as to what has happened, using the information
given in the question and your understanding of the data. Adverse sales price variance
must mean that sales prices have fallen. This could be the result of competitive
pressure. Adverse sales volume variance means that the business hasn’t achieved its
budget, which is likely to disappoint management. However, the favourable market
share variance is encouraging. This shows that business has been won from the
competition, and that the business has also performed well in the areas that it can
control.
The adverse market size variance shows a difficult trading environment, which is
probably outside the control of the business. Performance should be assessed by
taking into account the environment in which a business operates and separating the
controllable from the uncontrollable. Note the link between adverse market size and
adverse sales price. In the shrinking market of paper diaries (the product in the
question), it is likely that the sales prices will fall as sellers scramble to retain as
much share as possible.
The first step in any linear programming problem is to produce the equations for
constraints and the contribution function, which should not be difficult at this level.
In our example, the materials constraint will be 3X + 5Y ≤ 15,000, and the labour constraint
will be 4X + 4Y ≤ 16,000. You should not forget the non-negativity constraint, if needed, of
X,Y ≥ 0.
Plotting the resulting graph (Figure 1, the optimal production plan) will show that by
pushing out the contribution function, the optimal solution will be at point B – the
intersection of materials and labour constraints.
The optimal point is X = 2,500 and Y = 1,500, which generates $135,000 in contribution.
Check this for yourself (see Working 1). The ability to solve simultaneous equations is
assumed in this article.
The point of this calculation is to provide management with a target production plan in
order to maximise contribution and therefore profit. However, things can change and, in
particular, constraints can relax or tighten. Management needs to know the financial
implications of such changes. For example, if new materials are offered, how much should
be paid for them? And how much should be bought? These dynamics are important.
Suppose the shadow price of materials is $5 per kg (this is verifiable by calculation – see
Working 2). The important point is, what does this mean? If management is offered more
materials it should be prepared to pay no more than $5 per kg over the normal price.
Paying less than $13 ($5 + $8) per kg to obtain more materials will make the firm better off
financially. Paying more than $13 per kg would render it worse off in terms of contribution
gained. Management needs to understand this.
There may, of course, be a good reason to buy ‘expensive’ extra materials (those costing
more than $13 per kg). It might enable the business to satisfy the demands of an important
customer who might, in turn, buy more products later. The firm might have to meet a
contractual obligation, and so paying ‘too much’ for more materials might be justifiable if it
will prevent a penalty on the contract. The cost of this is rarely included in shadow price
calculations. Equally, it might be that ‘cheap’ material, priced at under $13 per kg, is not
attractive. Quality is a factor, as is reliability of supply. Accountants should recognise that
‘price’ is not everything.
WORKINGS
Working 1:
The optimal point is at point B, which is at the intersection of:
3X + 5Y = 15,000 and
4X + 4Y = 16,000
Multiplying the first equation by four and the second by three we get:
12X + 20Y = 60,000
12X + 12Y = 48,000
Subtracting the second equation from the first gives 8Y = 12,000, so Y = 1,500.
Substituting Y = 1,500 in any of the above equations will give us the X value:
3X + 5 (1,500) = 15,000
3X = 7,500
X = 2,500
Working 2:
If one more kilogram of material becomes available, the materials constraint becomes
3X + 5Y = 15,001, while the labour constraint remains 4X + 4Y = 16,000. Solving these
equations gives X = 2,499.5 and Y = 1,500.5.
The new level of contribution is: (2,499.5 x 30) + (1,500.5 x 40) = $135,005
The increase in contribution from the original optimal is the shadow price:
135,005 – 135,000 = $5 per kg.
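For completeness, the whole problem can also be handed to a linear programming solver. The sketch below uses scipy's linprog, assumes contributions of $30 per unit of X and $40 per unit of Y (as implied by Working 2), and estimates the shadow price of materials exactly as Working 2 does, by re-solving with one extra kilogram of material.

```python
from scipy.optimize import linprog

c = [-30, -40]     # maximise 30X + 40Y; linprog minimises, so negate the contributions
A = [[3, 5],       # materials: 3X + 5Y <= kg of material available
     [4, 4]]       # labour:    4X + 4Y <= 16,000

def max_contribution(materials_kg):
    res = linprog(c, A_ub=A, b_ub=[materials_kg, 16_000], bounds=[(0, None), (0, None)])
    return -res.fun, res.x

base, plan = max_contribution(15_000)
more, _ = max_contribution(15_001)

print(plan)          # approximately X = 2,500, Y = 1,500
print(base)          # 135,000
print(more - base)   # 5.0 -> shadow price of materials is $5 per kg
```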