CHAPTER 1-5
DEALING WITH RISKS
People sometimes enjoy the experience of uncertainty. Some
are even willing to pay for the thrill of gambling in a casino.
More commonly, though, uncertainty is a burden that
people are willing to pay insurance premiums to avoid.
There are several reasons why a person may prefer a sure
outcome to a set of uncertain outcomes. You may dislike the
shivery feeling of worry about what will happen. Or you might
imagine yourself feeling regret or unpleasant surprise or
disappointment if a particular outcome should come about which
you could avoid by insuring against it.
In contrast to these subjective matters which differ from
person to person, there is an important objective fact that
influences choice in risky situations: The market will generally
pay you to bear uncertainty in the form of variability of
results. That is, the more variable the returns from an
investment, the greater the expected payoff. For example, riskier
securities such as stocks pay (on average) a higher return than
do less-risky securities such as government bonds. Looking in
the other direction, insurance companies will reduce the possible
variability in your stream of income by selling you guarantees
that if a catastrophe should occur to you, you will be
recompensed; you pay them a premium to assume risk for you.
Insurance protects you against financial loss, but it does not
enable you to avoid feelings of disappointment, surprise, or regret.
Sometimes it might be possible to arrange with another individual
to shoulder bad feelings for you, however. For example, you
might arrange in advance for someone else to take responsibility
if a decision should have an undesirable outcome, or for someone
else to take the phone call and deal with the consequences if a
catastrophe should occur. But your feelings are mainly yours to
be borne alone.
The strength of your desire to avoid risk is likely to
depend upon your economic and life circumstances, and upon the
size and nature of the risky outcomes. For example, your desire
to purchase life insurance is likely to be different when you
have young children from when your children are grown. For
another example, you may feel no need to insure against
the first $500 of loss in an auto accident because that loss
would not disrupt your life. But to avoid larger possible losses
you are willing to pay an insurance premium which costs more than
the expected value of the loss, because such a loss could disrupt
your life badly. For a third example, you are more likely to
quit your lawyer job and join a partner in hanging out a shingle
after you hear that an unknown Aunt Tillie died and left you a
hefty bundle.
Now that we have in hand the mechanism of expected value for
analysing uncertain choices without consideration of risk, as
described in Chapter 1-4, we are ready to allow for risk when we
do not feel neutral about uncertainty, but instead prefer to
avoid it.
A preference for certainty over uncertainty may be thought
of in various ways: a) You prefer to have a thousand-dollar bill
rather than a 50-50 chance of $2000 or nothing. And you would be
willing to accept a smaller sum for certain than the "expected
value" ($1000) of the alternative whose outcomes are uncertain.
More generally, you are not indifferent between payoffs with
different probabilities but the same expected value. That is,
you prefer a .5 chance of $10 to a .05 chance of $100. b) You
would not pay twice as much for a given probability of winning
twice as much. That is, you might be indifferent between $9 and
a 50-50 chance of $20, but you would prefer $900 to a 50-50
chance of $2000. And c) An idea which often has been intertwined
with the above ideas about uncertainty concerns the different
meanings of sequential increments of the same amount of money.
That is, the second thousand dollars does not seem to give as
much good feeling as does the first thousand dollars; twice as
much money often is not "worth" twice as much to you. This is
the famous idea of "diminishing marginal utility". But though
this idea seems intimately related to the above ideas about
uncertainty, and though it has often been considered
interchangeable with them in theory, the relationship is not
obvious.

In practical terms, we want to know which choice we
should make in a particular risky situation, such as purchasing
insurance, or opening a law office, or choosing among investment
opportunities in three countries that differ greatly in political
stability. There are two steps to a sound decision: 1)
Understand the nature of the risk, and how it fits into the rest
of your life [your business]. 2) Use appropriate devices for
allowing for the cost to you of assuming the risk, or the benefit
of avoiding it.
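The expected-value arithmetic behind comparisons (a) and (b) can be sketched in a few lines, using the same figures as above:

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# (a) A sure $1000 versus a 50-50 chance of $2000 or nothing:
assert expected_value([(1.0, 1000)]) == 1000
assert expected_value([(0.5, 2000), (0.5, 0)]) == 1000  # same expected value

# Yet a risk-averse person prefers the sure thing, and prefers a
# .5 chance of $10 to a .05 chance of $100, though both are worth $5:
assert round(expected_value([(0.5, 10)]), 9) == 5.0
assert round(expected_value([(0.05, 100)]), 9) == 5.0
```

Expected value alone cannot distinguish among these alternatives; that is exactly why the devices discussed below are needed.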
In the simple two-choice yes-or-no examples mentioned above,
you should first try to gain a clear idea of the basic notions of
probability and expected value, next consider the choice in light
of your entire economic and non-economic life circumstances (and
especially your present state of wealth), and then choose
according to your enlightened preferences. The last step sounds
vague, but we will look at some techniques to make it less vague.
Sometimes it helps, for example, to consider a hypothetical
set of other choices and ask yourself what you would do if you
faced them. You can then look for consistent patterns in your
preferences that will help you make your choice. You can also
ask yourself such questions as: How much would it take to make
me twice as happy as an extra $1000? (See Simon, 1975 [?] or XXX
for details on these techniques.)
Illusions, paradoxes, and apparent self-contradictions
abound in the risky choices people make even when the choices are
relatively simple. Often people's responses depend upon how the
issue is posed -- for example, whether the same amount is seen as
a loss of what you have, or a non-gain, both of which are
objectively identical but subjectively very different. Hence
risky choices have fascinated economists, statisticians, and
psychologists during the last decade or two. But these
peculiarities need not detain us here. It comes down to the fact
that your willingness to accept $900 (or $800 or even $700)
rather than a 50-50 chance of $2000 is of the same nature as your
choice to use public transportation and save money rather than
buy a car.
The $900 (or whatever) that you will accept in exchange for
the 50-50 chance of $2000 or zero is called the "certainty
equivalent" of the uncertain opportunity. It corresponds to the
risk-adjustment portion of the discount factor discussed in
Chapter 2. The extent to which the certainty equivalent is less
than the expected value of the uncertain opportunity -- that is,
the difference between $900 and $1000 in this example -- is a
measure of the extent to which you are risk averse. Scholars in
finance and economics have done a great deal of advanced
theoretical thinking about risk aversion, but as yet no one has
developed convenient ways of applying this work to everyday life
for individuals. So we continue to bumble through these
decisions, often making them differently than we would if we were
to spend the effort necessary to think them through in a
satisfactory fashion.
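The certainty equivalent and the measure of risk aversion can be made concrete with a small sketch. The square-root utility function here is only an illustrative assumption; your own attitude toward risk would imply a different curve, and a person whose certainty equivalent is $900, as in the example above, is far less risk averse than this one:

```python
import math

def certainty_equivalent(outcomes, u=math.sqrt, u_inverse=lambda y: y * y):
    """The sure sum taken in place of a gamble of (probability, payoff)
    pairs, under an assumed utility function u (square root here,
    purely for illustration)."""
    expected_utility = sum(p * u(x) for p, x in outcomes)
    return u_inverse(expected_utility)

gamble = [(0.5, 2000), (0.5, 0)]
ce = certainty_equivalent(gamble)          # about 500 under sqrt utility
expected = sum(p * x for p, x in gamble)   # 1000
risk_premium = expected - ce               # about 500: how risk averse you are
```

The risk premium, the gap between the expected value and the certainty equivalent, is the price this hypothetical person implicitly pays to be rid of the variability.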
The technical details can be left aside here. I'll simply
mention the competing principles used in these risk analyses, each
associated with a different criterion goal for optimization:
Maximization of "utility"
The utility principle is the oldest and the most widely used
device to allow for risk, especially among finance specialists,
perhaps because it lends itself better to mathematical analyses
than do the others; it aims to maximize your "utility" -- that
is, the supposed satisfaction that you might achieve from the
resulting sums of money. It systematically takes into account
that twice as much money will not give you twice as much
satisfaction, the appropriate adjustment depending upon your
wealth and the extent of your dislike for risk. All this has
little or nothing to do with the concept of utility that Jeremy
Bentham proposed in the 18th Century, and from which the term
originally comes. When you apply this principle, you reduce
somewhat the expected value of the alternative you choose in
order to reduce the variability among possible outcomes.
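A minimal sketch of the utility principle, assuming a logarithmic utility of total wealth (a standard textbook stand-in, not a claim about anyone's actual tastes), shows how the choice can flip with your wealth:

```python
import math

def expected_log_utility(wealth, outcomes):
    """Expected utility of final wealth over (probability, payoff)
    pairs, assuming u(w) = ln(w) -- an illustrative choice."""
    return sum(p * math.log(wealth + x) for p, x in outcomes)

sure_900 = [(1.0, 900)]
gamble = [(0.5, 2000), (0.5, 0)]

for wealth in (1_000, 100_000):
    eu_sure = expected_log_utility(wealth, sure_900)
    eu_gamble = expected_log_utility(wealth, gamble)
    pick = "sure $900" if eu_sure > eu_gamble else "the gamble"
    print(f"wealth ${wealth}: choose {pick}")
# wealth $1000: choose sure $900
# wealth $100000: choose the gamble
```

With little wealth the gamble risks too large a fraction of what you have; with ample wealth the same gamble is a minor fluctuation, so its higher expected value wins.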
Minimization of regret or disappointment or unpleasant surprise.
These related principles aim at reducing the chance that you
will end up feeling badly about the outcome. For example, if
someone first tells you that you have won a lottery and then two
minutes later tells you that it was an error, you are likely to
feel worse than if you had never heard either message.
(Similarly, hearing from a doctor that you do not have a disease
that you thought you might have had is likely to send you out in
a particularly pleasant mood.) You therefore choose in a fashion
that reduces somewhat the expected value of the alternative
you choose in order to reduce the chance that the outcome will be
one about which you will feel regret, disappointment, or
unpleasant surprise.
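One formal version of this idea, minimizing the worst possible regret, can be sketched as follows; the payoff figures are invented for illustration:

```python
# payoffs[choice][state of the world], in dollars (invented figures)
payoffs = {
    "insure":       {"fire": -100,  "no fire": -100},  # premium paid either way
    "don't insure": {"fire": -5000, "no fire": 0},
}
states = ["fire", "no fire"]

# Regret = what the best choice for that state would have paid,
# minus what your actual choice paid.
best = {s: max(payoffs[c][s] for c in payoffs) for s in states}
worst_regret = {c: max(best[s] - payoffs[c][s] for s in states)
                for c in payoffs}

choice = min(worst_regret, key=worst_regret.get)
print(choice)   # "insure": its worst regret is $100, versus $4900
```

Here insuring caps your possible regret at the $100 premium, while going uninsured leaves you open to $4,900 worth of "if only I had insured."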
The mini-max principle.
This principle applies to some situations which are head-to-
head games with one or more other players, in which you expect
them to actively try to out-fox you, and where your loss is their
gain. This is unlike most situations in life (and in business),
in which the relationship you are in with the relevant groups of
individuals (such as impersonal customers) or with nature (for
example, when you are drilling an oil well) is not game-like
because your "opponent" is not actively trying to out-fox you.
The mini-max principle is a very complicated mathematical
strategy for obtaining a combination of relatively large gains
while taking the chance of relatively small losses, by attempting
to avoid the worst situation that your opponent might force you
into. This principle may be appropriate for some very
specialized games, and perhaps occasionally in war. But to my
knowledge it has never been found appropriate in everyday life,
despite all the inflated claims made for it. It is a classic
example of the adepts of a fancy mathematical technique
succeeding in a massive snowjob on people who do not understand
the mathematics but are so insecure about their ignorance that
they take it on faith that there must be something of value
inside the mysterious mathematical black box -- something worth
paying a high fee for. Unfortunately, cases like this are not
rare in the world of "scholarship".
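For completeness, the computational core of the principle is simple even if the surrounding theory is not. A pure-strategy sketch for a small zero-sum game, with an invented payoff matrix (the full theory also allows randomized strategies, which this sketch omits):

```python
# Your payoffs: each row is one of your strategies, each column one
# of the opponent's responses (invented numbers).
payoffs = [
    [3, -2, 1],
    [1,  0, 2],
    [4, -3, 5],
]

# Mini-max play: assume the opponent forces your worst case in each
# row, then pick the row whose worst case is least bad.
row_worst = [min(row) for row in payoffs]
value = max(row_worst)                  # 0: you can guarantee at least this
best_row = row_worst.index(value)       # strategy 1, the middle row
print(best_row, value)
```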
The psychological bases for these principles link up with
the discussion of feelings in Chapter 00. For example, the pain
of a contemplated negative outcome which might yield regret or
disappointment can be understood with the same mechanism of
negative self-comparisons used to understand sadness and
depression.1
The choice among these principles for dealing with risk
depends upon your taste. But if your goal is to maximize profit
in business, you will not make any allowance for risk other than
warranted by market (rather than personal) considerations.
Lending or investing money is a common situation in which
risk is a crucial issue. When a bank lends working capital to an
individual or a firm, the interest rate depends upon how risky
the bank deems the loan -- that is, the bank's estimate of the
chance that the borrower will default. Similarly, the bonds of
firms that are unstable or have little collateral must pay higher
interest rates than do bonds of firms that are more solid. The
stocks of firms whose prospects are very uncertain sell at a
lower price relative to the firm's earnings than do the stocks of
firms whose earnings are stable and seemingly-assured from year
to year. And the rate of return to preferred stocks is on
average higher than the rate of return to bonds, because in case
of a bankruptcy the preferred stocks would lose their value
before the bonds would. (Common stocks are riskiest of all in
this respect.) This structure is equivalent, from the point of
view of the person supplying the funds, to a lower discount
factor for a more risky situation. (Chapter 1-2 discussed the
mechanics of using the discount factor and introduced its
interpretation.)
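The arithmetic linking default risk to the interest rate can be sketched under a simplifying assumption: the lender wants the same expected return as on a riskless loan, and a default loses the entire amount:

```python
def required_rate(riskless_rate, default_prob):
    """Interest rate at which a risky loan's expected return matches
    the riskless rate, assuming total loss on default:
        (1 + i) * (1 - p) = 1 + r   =>   i = (1 + r) / (1 - p) - 1
    """
    return (1 + riskless_rate) / (1 - default_prob) - 1

print(round(required_rate(0.05, 0.00), 4))  # 0.05: no default risk
print(round(required_rate(0.05, 0.02), 4))  # 0.0714: 2% default chance
```

In practice the lender demands still more than this break-even rate, for the reason this chapter stresses: bearing the variability itself commands a price.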
You can go beyond making a single risky choice by arranging
your "portfolio" of risktaking activities so as to reduce the
total risk. For example, instead of investing all your wealth in
a single stock, you can diversify among a set of stocks. A life
insurance company greatly reduces its risk by selling a great
many insurance policies rather than just one. The prospect that
any single individual will die this year is very risky, but the
rate of death among (say) a thousand people of the same age is
predictable in advance with high accuracy, being affected only by
the small chance of a major catastrophe. And the insurance
company forestalls some of that remaining risk with a clause that
protects the company against war losses, which would greatly
increase overall risk.
Almost any diversification reduces the overall risk even
while keeping the returns the same. But it is not possible to
eliminate all risk with diversification. In the past few decades
the study of finance has worked out a variety of devices for
"portfolio analysis" to take maximum advantage of
diversification. The most important element in a diversification
program, however, is to remember to do it. More about all this
in Chapter 8-0 on portfolio investment.
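The statistical heart of diversification can be sketched with the textbook result that the variability of an equal-weighted average of n independent, identical risks shrinks in proportion to the square root of n; the 20 percent figure is an invented stand-in:

```python
import math

single_stock_sd = 0.20   # assumed std. dev. of one stock's annual return

for n in (1, 10, 100, 1000):
    # Equal weights, independent and identical returns (idealized).
    portfolio_sd = single_stock_sd / math.sqrt(n)
    print(f"{n:5d} stocks: std. dev. {portfolio_sd:.4f}")
```

Real stocks move together to some degree, which is why diversification cannot eliminate all risk: the common "market" component of the variability remains no matter how many stocks you hold.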
[More about utility analysis: to apply it you construct a utility
function, which you can do by asking how much money it would take
to make you twice as happy, or how happy twice as much money would
make you. Maximin and regret functions can be constructed
similarly.] When there is a sequence of decisions so that you
need a decision tree, you must put risk-adjusted quantities into
the circles and boxes. The certainty equivalent serves this
purpose: instead of each uncertain set of outcomes, you
substitute the assured amount that you would accept in its place.
This explanation is too brief for you to fully understand it, but
you can get the details when you need them from a standard
text.[fn to ame]
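The fold-back procedure with certainty equivalents can be sketched for a tiny tree; the square-root utility is again only an illustrative assumption:

```python
import math

def certainty_equivalent(branches, u=math.sqrt, u_inv=lambda y: y * y):
    """Sure sum substituted for a chance node's (probability, value)
    pairs, under an assumed square-root utility (illustrative only)."""
    return u_inv(sum(p * u(v) for p, v in branches))

def fold_back(node):
    """Evaluate a small decision tree, working from the tips backward.
    A node is a number (terminal payoff); ("decision", [subtrees]), a
    box, where you take the best branch; or ("chance", [(p, subtree),
    ...]), a circle, replaced by its certainty equivalent."""
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "decision":
        return max(fold_back(b) for b in branches)
    return certainty_equivalent([(p, fold_back(b)) for p, b in branches])

# A sure $600 versus a 50-50 chance of $2000 or nothing:
tree = ("decision", [600, ("chance", [(0.5, 2000), (0.5, 0)])])
print(fold_back(tree))   # 600: the gamble's certainty equivalent is about 500
```

Replacing each circle by a certainty equivalent is what lets the tree be evaluated with ordinary maximization at the boxes, exactly as in the risk-neutral case.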
FOOTNOTE
1 See Bell in Bell et al. for an interesting formal
analysis which explains people's risk behavior in this fashion,
behavior which otherwise seems inexplicable.