
Maths on the Back of an Envelope: Clever ways to (roughly) calculate anything


Climate change is perhaps the most important of these. Across the world, there are scientists attempting to model the impact that rising temperatures will have on sea levels, climate, harvests and animal populations. There is an overwhelming consensus that (unless human behaviour changes) global temperatures will rise, but the mathematical models produce a wide range of possible outcomes depending on how you set the assumptions. Despite overall warming, winters in some countries might become colder. Harvests may increase or decrease. The overall impact could be relatively benign or catastrophic. We can guess, we can use our judgement, but we can’t be certain.

In 1952, the science-fiction author Ray Bradbury wrote a short story called ‘A Sound of Thunder’ in which a time-traveller transported back to the time of the dinosaurs accidentally kills a tiny butterfly, and this apparently innocuous incident has knock-on effects that turn out to have changed the modern world they return to. A couple of decades later, the mathematician Edward Lorenz is thought to have been referencing this story when he coined the phrase ‘the butterfly effect’ as a way to describe the unpredictable and potentially massive impact that small changes in the starting situation can have on what follows.

These butterfly effects are everywhere, and they make confident long-term predictions of any kind of climate change (including political and economic climate) extremely difficult.

MAD COWS AND MAD FORECASTS

In 1995, Stephen Churchill, a 19-year-old from Wiltshire, became the first person to die from Variant Creutzfeldt–Jakob disease (or vCJD). This horrific illness, a rapidly progressing degeneration of the brain, was related to BSE, more commonly known as ‘Mad Cow Disease’, and caused by eating contaminated beef.

As more victims of vCJD emerged over the following months, health scientists began to make forecasts about how big this epidemic would become. At a minimum, they reckoned there would be at least 100 victims. But, at worst, they predicted as many as 500,000 might die – a number of truly nightmare proportions.[8]

Nearly 25 years on, we are now able to see how the forecasters did. The good news is that their prediction was right – the number of victims was indeed between 100 and 500,000. But this is hardly surprising, given how far apart the goalposts were.

The actual number believed to have died from vCJD is about 250, towards the very bottom end of the forecasts, and about 2,000 times smaller than the upper bound of the prediction.

But why was the predicted range so massive? The reason is that, when the disease was first identified, scientists could make a reasonable guess as to how many people might have eaten contaminated burgers, but they had no idea what proportion of the public was vulnerable to the damaged proteins (known as prions). Nor did they know how long the incubation period was. The worst-case scenario was that the disease would ultimately affect everyone exposed to it – and that we hadn’t seen the full effect because it might be 10 years before the first symptoms appeared. The reality turned out to be that most people were resistant, even if they were carrying the damaged prion.

It’s an interesting case study in how statistical forecasts are only as good as their weakest input. You might know certain details precisely (such as the number of cows diagnosed with BSE), but if the rate of infection could be anywhere between 0.01% and 100%, your predictions will be no more accurate than that factor of 10,000.
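To see how that single unknown swamps everything else, here is a minimal sketch in Python. The exposed-population figure and the rate bounds are illustrative assumptions of mine, not the numbers the forecasters actually used.

```python
# Illustrative only: a huge uncertainty in one input swamps precision everywhere else.
# The exposed population and the rate bounds are assumptions for this sketch, not
# the figures used in the real vCJD forecasts.

exposed = 500_000                    # people assumed to have eaten contaminated beef
rate_low, rate_high = 0.0001, 1.0    # infection rate anywhere from 0.01% to 100%

low_estimate = exposed * rate_low    # 50
high_estimate = exposed * rate_high  # 500,000

print(f"Forecast range: {low_estimate:,.0f} to {high_estimate:,.0f} victims")
print(f"Spread factor: {high_estimate / low_estimate:,.0f}")   # 10,000
```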

At least nobody (that I’m aware of) attempted to predict a number of victims to more than one significant figure. Even a prediction of ‘370,000’ would have implied a degree of accuracy that was wholly unjustified by the data.

DOES THIS NUMBER MAKE SENSE?

One of the most important skills that back-of-envelope maths can give you is the ability to answer the question: ‘Does this number make sense?’ In this case, the back of the envelope and the calculator can operate in harmony: the calculator does the donkey work in producing a numerical answer, and the back of the envelope is used to check that the number makes logical sense, and wasn’t the result of, say, a slip of the finger and pressing the wrong button.
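As a toy illustration (the numbers here are invented, not taken from any real case): suppose a calculator shows 15,929.4 for 38.2 × 41.7. Rounding each input to one significant figure is enough to show that something has gone wrong.

```python
# A toy plausibility check: the inputs and the 'calculator answer' are invented.
a, b = 38.2, 41.7
calculator_answer = 15_929.4      # suppose a slip of the finger produced this

rough = 40 * 40                   # each input rounded to one significant figure
print(rough)                          # 1600
print(calculator_answer / rough)      # roughly 10, so a digit has slipped somewhere
```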

We are inundated with numbers all the time; in particular, financial calculations, offers, and statistics that are being used to influence our opinions or decisions. The assumption is that we will take these figures at face value, and to a large extent we have to. A politician arguing the case for closing a hospital isn’t going to pause while a journalist works through the numbers, though I would be pleased if more journalists were prepared to do this.

Often it is only after the event that the spurious nature of a statistic emerges.

In 2010, the Conservative Party were in opposition, and wanted to highlight social inequalities that had been created by the policies of the Labour government then in power. In a report called ‘Labour’s Two Nations’, they claimed that in Britain’s most deprived areas ‘54% of girls are likely to fall pregnant before the age of 18’. Perhaps this figure was allowed to slip through because the Conservative policy makers wanted it to be true: if half of the girls on these housing estates really were getting pregnant before leaving school, it painted what they felt was a shocking picture of social breakdown in inner-city Britain.

The truth turned out to be far less dramatic. Somebody had stuck the decimal point in the wrong place. Elsewhere in the report, the correct statistic was quoted, that 54.32 out of every 1,000 women aged 15 to 17 in the 10 most deprived areas had fallen pregnant. Fifty-four out of 1,000 is 5.4%, not 54%. Perhaps it was the spurious precision of the ‘54.32’ figure that had confused the report writers.
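The check takes one line of arithmetic, sketched here in Python purely for illustration:

```python
# The arithmetic behind the correction (a sketch, not anything from the report itself).
rate_per_thousand = 54.32
percentage = rate_per_thousand / 1000 * 100   # 5.432
print(f"{percentage:.1f}%")                   # 5.4%, not 54%
```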

Other questionable numbers require a little more thought. The National Survey of Sexual Attitudes and Lifestyles has been published every 10 years since 1990. It gives an overview of sexual behaviour across Britain.

One statistic that often draws attention when the report is published is the number of sexual partners that the average man and woman has had in their lifetime.

The figures in the first three reports were as follows:

The figures appear quite revealing, with a surge in the number of partners during the 1990s, while the early 2000s saw a slight decline for men and an increase for women.

But there is something odd about these numbers. When sexual activity happens between two opposite-sex people, the overall ‘tally’ for all men and for all women each increases by one. Some people will be far more promiscuous than others, but across the whole population it is an incontrovertible fact of life that the total number of male partners for women will be the same as the total number of female partners for men. In other words, since there are roughly equal numbers of men and women, the two averages ought to be the same.
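A toy simulation makes the counting argument concrete. All the numbers below are invented; the point is only that every partnership adds one to each side’s tally, so the two totals – and, with equal-sized groups, the two averages – must agree.

```python
import random

random.seed(1)
n_men = n_women = 1000

# Invent a random set of distinct man-woman partnerships, deliberately skewed so
# that a small group of women account for most of them.
partnerships = set()
while len(partnerships) < 5000:
    m = random.randint(0, n_men - 1)
    w = int(random.random() ** 3 * n_women)
    partnerships.add((m, w))

men_totals = [0] * n_men
women_totals = [0] * n_women
for m, w in partnerships:
    men_totals[m] += 1      # one more partner for this man
    women_totals[w] += 1    # one more partner for this woman

# The totals are necessarily identical, so the averages match too.
print(sum(men_totals), sum(women_totals))                      # 5000 5000
print(sum(men_totals) / n_men, sum(women_totals) / n_women)    # 5.0 5.0
```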

There are ways you can attempt to explain the difference. For example, perhaps the survey is not truly representative – maybe there is a large group of men who have sex with a small group of women that are not covered in the survey.

However, there is a more likely explanation, which is that somebody is lying. The researchers are relying on individuals’ honesty – and memory – to get these statistics, with no way of checking if the numbers are right.

What appears to be happening is that either men are exaggerating, or women are understating, their experience. Possibly both. Or it might just be that the experience was more memorable for the men than for the women. But whatever the explanation, we have some authentic-looking numbers here that under scrutiny don’t add up.

THE CASE FOR BACK-OF-ENVELOPE THINKING

I hope this opening section has demonstrated why, in many situations, quoting a number to more than one or two significant figures is misleading, and can even lull us into a false sense of certainty. Why? Because a number quoted to that precision implies that it is accurate; in other words, that the ‘true’ answer will be very close to that. Calculators and spreadsheets have taken much of the pain out of calculation, but they have also created the illusion that any numerical problem has an answer that can be quoted to several decimal places.

There are, of course, situations where it is important to know a number to more than three significant figures. Here are a few of them:

In financial accounts and reports. If a company has made a profit of £2,407,884, there will be some people for whom that £884 at the end is important.

When trying to detect small changes. Astronomers looking to see if a remote object in the sky has shifted in orbit might find useful information in the tenth significant figure, or even more.

Similarly in the high end of physics there are quantities linked to the atom that are known to at least 10 significant figures.

For precision measurements such as those involved in GPS, which identifies the location of your car or your destination, and where the fifth significant figure might mean the difference between pulling up outside your friend’s house and driving into a pond.

But take a look at the numbers quoted in the news – they might be in a government announcement, a sports report or a business forecast – and you’ll find remarkably few numbers where there is any value in knowing them to four or more significant figures.

And if we’re mainly dealing with numbers with so few significant figures, the calculations we need to make to find those numbers are going to be simpler. So simple, indeed, that we ought to be able to do most of them on the back of an envelope or even, with practice, in our heads.

2

TOOLS OF THE TRADE

THE ESSENTIAL TOOLS OF ESTIMATION

For most back-of-envelope calculations, the tools of the trade are quite basic.

The first vital tool is the ability to round numbers to one or more significant figures.

The next three tools are ones that require exact answers:

Basic arithmetic (which is built around mental addition, subtraction and a reasonable fluency with times tables up to 10).

The ability to work with percentages and fractions.

Calculating using powers of 10 (10, 100, 1,000 and so on) and hence being able to work out ‘orders of magnitude’; in other words, knowing if the answer is going to be in the hundreds, thousands or millions, for example.

And finally, it is handy to have at your fingertips a few key number facts, such as distances and populations, that crop up in many common calculations.
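As a minimal sketch of the first and fourth tools working together – my own example, not one from the text – here is how rounding to one significant figure and keeping track of powers of 10 gives an order-of-magnitude answer:

```python
# A sketch (my own example) of rounding to one significant figure and tracking
# powers of 10 to get an order-of-magnitude estimate of 61,423 x 384.
from math import floor, log10

def round_to_1sf(x):
    """Round a positive number to one significant figure."""
    magnitude = 10 ** floor(log10(abs(x)))
    return round(x / magnitude) * magnitude

a, b = 61_423, 384
estimate = round_to_1sf(a) * round_to_1sf(b)   # 60,000 * 400 = 24,000,000

print(f"Estimate: {estimate:,}")   # 24,000,000 -> the answer is in the tens of millions
print(f"Exact:    {a * b:,}")      # 23,586,432 -> same order of magnitude
```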

This section will arm you with a few tips that will help you with your back-of-envelope calculations later on – including a technique you may not have come across that I call Zequals, and how to use it.

ARE YOU AN ARITHMETICIAN?

In the opening section there was a quick arithmetic warm-up. It was a chance to find out to what extent you are an Arithmetician.

Arithmetician is not a word you hear very often.

In past centuries it was a much more familiar term. Here, for example, is a line from Shakespeare’s Othello: ‘Forsooth, a great arithmetician, one Michael Cassio, a Florentine.’ That line is spoken by Iago, the villain of the play, who is angry that he has been passed over for the job of lieutenant by a man called Cassio. It is an amusing coincidence that Shakespeare’s arithmetician Cassio has a name very similar to Casio, the UK’s leading brand of electronic calculator.

Iago scoffs that Cassio might be good with numbers, but he has no practical understanding of the real world. (This rather harsh stereotype of mathematical people as being abstract thinkers who are out of touch with reality is one that lives on today.)