Maths on the Back of an Envelope: Clever ways to (roughly) calculate anything

The margin, however, was tiny. Coad won by just 20 votes, with 16,333 to Borwick’s 16,313.

You might expect that if there is one number of which we can be certain, down to the very last digit, it is the number we get when we have counted something.

Yet the truth is that even something as basic as counting the number of votes is prone to error. The person doing the counting might inadvertently pick up two voting slips that are stuck together. Or when they are getting tired, they might make a slip and count 28, 29, 40, 41 … Or they might reject a voting slip that another counter would have accepted, because they reckon that marks have been made against more than one candidate.

As a rule of thumb, some election officials reckon that manual counts can only be relied on within a margin of about 1 in 5,000 (or 0.02%), so with a vote like the one in Kensington, the result of one count might vary by as many as 10 votes when you do a recount.⁶
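
As a quick back-of-envelope check of that rule of thumb, here is a throwaway sketch (the full-ballot total below is an assumption for illustration, not a figure from the text):

```python
# Rough uncertainty in a manual count, using the 1-in-5,000 rule of thumb.
error_rate = 1 / 5_000

# The two leading candidates alone account for these papers (figures quoted above).
leading_two = 16_333 + 16_313
print(leading_two * error_rate)    # about 6.5 votes of uncertainty

# The other candidates push the full pile higher; with a hypothetical total of
# 50,000 papers the uncertainty is of the order of 10 votes, as quoted above.
assumed_total = 50_000
print(assumed_total * error_rate)  # about 10 votes
```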

And while each recount will typically produce a slightly different result, there is no guarantee which of these counts is actually the correct figure – if there is a correct figure at all. (In the famously tight US Election of 2000, the result in Florida came down to a ruling on whether voting cards that hadn’t been fully punched through, and had a hanging ‘chad’, counted as legitimate votes or not.)

Re-counting typically stops when it becomes clear that the error in the count isn’t big enough to affect the result, so the tighter the result, the more recounts there will be. There have twice been UK General Election votes that have had seven recounts, both of them in the 1960s, when the final result was a majority below 10.

All this shows that when it is announced that a candidate such as Coad has received 16,333 votes, it should really be expressed as something vaguer: ‘Almost certainly in the range 16,328 to 16,338’ (or in shorthand, 16,333 ± 5).

If we can’t even trust something as easy to nail down as the number of votes made on physical slips of paper, what hope is there for accurately counting other things that are more fluid?

In 2018, the two Carolina states in the USA were hit by Hurricane Florence, a massive storm that deposited as much as 50 inches of rain in some places. Among the chaos, a vast number of homes lost power for several days. On 18 September, CNN gave this update:

511,000—this was the number of customers without power Monday morning—according to the US Energy Information Administration. Of those, 486,000 were in North Carolina, 15,000 in South Carolina and 15,000 in Virginia. By late Monday, however, the number [of customers without power] in North Carolina had dropped to 342,884.

For most of that short report, numbers were being quoted in thousands. But suddenly, at the end, we were told that the number without power had dropped to 342,884. Even if that number were true, it could only have been true for a period of a few seconds when the figures were collated, because the number of customers without power was changing constantly.

And even the 486,000 figure that was quoted for North Carolina on the Monday morning was a little suspicious – here we had a number being quoted to three significant figures, while the two other states were being quoted as 15,000 – both of which looked suspiciously like they’d been rounded to the nearest 5,000. This is confirmed if you add up the numbers: 15,000 + 15,000 + 486,000 = 516,000, which is 5,000 higher than the total of 511,000 quoted at the start of the story.

So when quoting these figures, there is a choice. They should either be given as a range (‘somewhere between 300,000 and 350,000’), or they should be brutally rounded to just a single significant figure with the qualifying word ‘roughly’ attached (so, ‘roughly 500,000’). This makes it clear that these are not definitive numbers that could be reproduced if there were a recount.
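
A minimal sketch of that single-significant-figure rounding (the helper function below is my own illustration, not something from the book):

```python
import math

def round_to_one_sig_fig(n: int) -> int:
    """Round a count to a single significant figure, e.g. 342,884 -> 300,000."""
    if n == 0:
        return 0
    magnitude = 10 ** int(math.floor(math.log10(abs(n))))
    return int(round(n / magnitude)) * magnitude

print(round_to_one_sig_fig(342_884))  # 300000, i.e. 'roughly 300,000'
print(round_to_one_sig_fig(511_000))  # 500000, i.e. 'roughly 500,000'
```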

And, indeed, there are times when even saying ‘roughly’ isn’t enough.

Every month, the Office for National Statistics publishes the latest UK unemployment figures. Of course this is always newsworthy – a move up or down in unemployment is a good indicator of how the economy is doing, and everyone can relate to it. In September 2018, the Office announced that UK unemployment had fallen by 55,000 from the previous month to 1,360,000.

The problem, however, is that the figures published aren’t very reliable – and the ONS know this. When they announced those unemployment figures in 2018, they also added the detail that they had 95% confidence that this figure was correct to within 69,000. In other words, unemployment had fallen by 55,000 plus or minus 69,000. This means unemployment might actually have gone down by as many as 124,000, or it might have gone up by as many as 14,000. And, of course, if the latter turned out to be the correct figure, it would have been a completely different news story.
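
Spelled out as a sketch, using the figures quoted above:

```python
change = -55_000   # the reported fall in unemployment
margin = 69_000    # the ONS's 95% confidence margin

low, high = change - margin, change + margin
print(low, high)   # -124000 and 14000: even the direction of the change is uncertain
```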

When the margin of error is larger than the figure you are quoting, there’s barely any justification for quoting the statistic at all, let alone to more than one significant figure. The best they can say is: ‘Unemployment probably fell slightly last month, perhaps by about 50,000.’

It’s another example of how a rounded, less precise figure often gives a fairer impression of the true situation than a precise figure would.

SENSITIVITY

We’ve already seen that statistics should really carry an indication of the margin of error we should attach to them.

An understanding of the margins of error is even more important when it comes to making predictions and forecasts.

Many of the numbers quoted in the news are predictions: house prices next year, tomorrow’s rainfall, the Chancellor’s forecast of economic growth, the number of people who will be travelling by train … all of these are numbers that have come from somebody feeding figures into a spreadsheet (or something more advanced) that represents the situation mathematically, in what is usually known as a mathematical model of the future.

In any model like this, there will be ‘inputs’ (such as prices, number of customers) and ‘outputs’ that are the things you want to predict (profits, for example).

But sometimes a small change in one input variable can have a surprisingly large effect on the number that comes out at the far end.

The link between the price of something and the profit it makes is a good example of this.

Imagine that last year you ran a face-painting stall for three hours at a fair. You paid £50 for the hire of the stall, but the cost of materials was almost zero. You charged £5 to paint a face, and you could paint a face in 15 minutes, so you did 12 faces in your three hours, and made:

£60 income – £50 costs = £10 profit.

There was a long queue last year and you were unable to meet the demand, so this year you increase your charge from £5 to £6. That’s an increase of 20%. Your revenue this year is £6 × 12 = £72, and your profit climbs to:

£72 income – £50 costs = £22 profit.

So, a 20% increase in price means that your profit has more than doubled. In other words, your profit is extremely sensitive to the price. Small percentage increases in the price lead to much larger percentage increases in the profit.
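
That sensitivity is easy to check with a few lines of code, a throwaway sketch using the numbers from the example (and assuming the number of faces painted stays fixed at 12):

```python
def profit(price_per_face, faces=12, stall_hire=50):
    """Profit from the face-painting stall: revenue minus the fixed hire cost."""
    return price_per_face * faces - stall_hire

last_year, this_year = profit(5), profit(6)
print(last_year, this_year)                  # 10 and 22 (pounds)

print((6 - 5) / 5)                           # 0.2  -> a 20% rise in price ...
print((this_year - last_year) / last_year)   # 1.2  -> ... gives a 120% rise in profit
```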

It’s a simplistic example, but it shows that increasing one thing by 10% doesn’t mean that everything else increases by 10% as a result.⁷

EXPONENTIAL GROWTH

There are some situations when a small change in the value assigned to one of the ‘inputs’ has an effect that grows dramatically as time elapses.

Take chickenpox, for example. It’s an unpleasant disease but rarely a dangerous one so long as you get it when you are young. Most children catch chickenpox at some point unless they have been vaccinated against it, because it is highly infectious. A child infected with chickenpox might typically pass it on to 10 other children during the contagious phase, and those newly infected children might themselves infect 10 more children, meaning there are now 100 cases. If those hundred infected children pass it on to 10 children each, within weeks the original child has infected 1,000 others.

In their early stages, infections spread ‘exponentially’. There is some sophisticated maths that is used to model this, but to illustrate the point let’s pretend that in its early stages, chickenpox just spreads in discrete batches of 10 infections passed on at the end of each week. In other words:

N = 10^T,

where N is the number of people infected and T is the number of infection periods (weeks) so far.

After one week: N = 10^1 = 10.

After two weeks: N = 10^2 = 100.

After three weeks: N = 10^3 = 1,000,

and so on.

What if we increase the rate of infection by 20%, so that each child now infects 12 others instead of 10, and the formula becomes N = 12^T? (Such an increase might happen if children are in bigger classes in school or have more playdates, for example.)

After one week, the number of children infected is 12 rather than 10, just a 20% increase. However, after three weeks, N = 12^3 = 1,728, which is heading towards double what it was with a rate of 10 at this stage. And this margin continues to grow as time goes on.
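
To see how that gap widens week by week, here is the same toy model in a few lines (a sketch of the simple weekly-batch model above, not of real epidemiology):

```python
# Compare infection rates of 10 and 12 over the first five weeks.
for week in range(1, 6):
    slow, fast = 10 ** week, 12 ** week
    print(week, slow, fast, round(fast / slow, 2))

# Week 1: 10 vs 12 (1.2x). Week 3: 1,000 vs 1,728 (about 1.73x).
# By week 5 the ratio has grown to about 2.5x, and it keeps on growing.
```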

CLIMATE CHANGE AND COMPLEXITY

Sometimes the relationship between the numbers you feed into a model and the forecasts that come out is not so direct. There are many situations where the factors involved are inter-connected and extremely complex.