The Tyranny of Numbers: Why Counting Can’t Make Us Happy

David Boyle, 2018

Anyon had arrived in the USA in 1886 to look after the firm of Barrow, Wade, Guthrie and Co – set up three years before by an English accountant who realized that there was a wide-open gap in the accountancy market in New York City. Unfortunately, on the day he arrived, he was threatened with violence by the very large chief assistant, who had been secretly trying to take the enterprise over. The case ended up in the Supreme Court. Anyon survived the ordeal, and 30 years later he was giving advice to young people starting out in what was still a new profession. ‘The well trained and experienced accountant of today … is not a man of figures,’ he explained again.

But Anyon’s successors ignored his advice, and for a very familiar reason. The public, the politicians and their business clients wanted control. They wanted pseudo-scientific precision, and were deeply disturbed to discover that accountancy was not the exact science they thought it was. Every few years, there was the traditional revelation of a major fraud or gigantic crash, and a shocked public could not accept that accounts might ever be drawn up in different ways. How could two accountants come to different conclusions? How could some companies keep a very different secret set of accounts?

This issue was brought up by Pacioli himself, who said that even in his day some people kept two sets of books: one for customers and one for suppliers. In the First World War, Lloyd George once remarked that the War Office kept three sets of casualty figures: one to delude the cabinet, one to delude the public and one to delude itself. Anyone reading the public accounts of some companies will realize this practice has life in it yet. But as the centuries have passed, it has become more and more of an issue, and accountants have been on the front line of solving the resulting confusions. The Western world is now awash with consultants and accountants who will accept a large fee to come into your organization, measure the way you work, test your assumptions and your profits, or lack of them, measure the mood of your employees and customers, and report what they find to the public.

The National Audit Office and the Audit Commission arrived in the world in the early 1980s and set to with a vengeance. The British Standards Institution organized a quality standard, then known as BS5750, which gave auditors something to measure achievement against. Environmental quality standards followed, and then the whole range now available across the world – US, European and global standards, with auditors behind each one, measuring, measuring, measuring. By 1992, environmental consultancy alone was worth $200 billion. Counting things is a lucrative business, which is one of the reasons the private-sector auditing firms, like Arthur Andersen, PricewaterhouseCoopers and KPMG, entered such a boom period in the 1980s. By 1987, they were creaming off as many as one in ten university graduates. It is one of the paradoxes of the modern world that the failure of auditors is expected to be solved by employing more auditors. And the trouble with auditors of any kind (accountants or academics) is that they are applying numerical rules to very complex situations. They wear suits and ties and have been examined to within an inch of their lives on their understanding of the professional rules. But their knowledge of life outside the mental laboratory may not be very complete. Sometimes it is extremely sketchy. And when Western consultants arrive in developing countries with their clipboards, like so many Accidental Tourists, they can do a great deal of damage.

Just how much damage can be done by faulty figures has been revealed in an extraordinary exposé by the development economist Robert Chambers. He described the number-crunchers as innocents abroad, deluded – sometimes deliberately – by local elders in distant villages. As a result of what may have been a single elderly, insect-damaged cob, consultants convinced themselves during the 1970s that African farmers were losing up to 40 per cent of their harvest every year. The real figure was around 10 per cent, yet by the early 1980s American aid planners were diverting up to $19 million a year into building vast grain silos across Africa to tackle a problem that didn’t exist.

Then there were the UN Food and Agriculture Organization’s notorious questionnaires of the early 1980s, which completely ignored mixed farms in developing countries. They asked only about the main crop; anything else was too complicated. As a result, production rates in developing countries seemed so low that multinationals believed they needed genetically manipulated seeds to help cut famine. But then, as Emerson said, people only see what they want to see. That’s the trouble with questionnaires.

‘Professional methods and values set a trap,’ says Chambers in his book Whose Reality Counts?:

Status, promotion and power come less from direct contact with the confusing complexity of people, families, communities, livelihoods and farming systems, and more from isolation which permits safe and sophisticated analysis of statistics … The methods of modern science then serve to simplify and reframe reality in standard categories, applied from a distance … Those who manipulate these units are empowered and the subjects of analysis disempowered: counting promotes the counter and demotes the counted.

Auditors deal in universal norms, methods of counting, targets, standards – especially in disciplines like psychology and economics that try to improve their standing by measuring. This is how economics transformed itself into econometrics, psychology transformed itself into behavioural science, and both gained status – but all too often lost their grip on reality. Sociologists tackled their perceived lack of ‘scientific’ respectability by organizing bigger and bigger questionnaires to confirm what people knew in their heart of hearts anyway. Even anthropologists, who need a strong dose of interpretation provided by the wisdom, understanding and imagination of a researcher on the ground, began to lose themselves in matrices and figures. Scientists have to simplify in order to separate out the aspect of truth they want to study – and it’s the same with any other discipline that uses figures.

Often it’s only the figures that matter, even when everybody knows they are a little dodgy. One paper on this phenomenon by the economist Gerry Gill takes its title – ‘OK the data’s lousy but it’s all we’ve got’ – from an unnamed American economics professor explaining his findings at an academic conference. Which is fine, of course, unless the data is wrong – because people’s lives may depend on it. ‘Yet professionals, especially economists and consultants tight for time, have a strongly felt need for statistics,’ says Chambers. ‘At worst, they grub around and grab what numbers they can, feed them into their computers, and print out not just numbers but more and more elegant graphs, bar-charts, pie diagrams and three-dimensional wonders of the graphic myth with which to adorn their reports and justify their plans and proposals.’

Chambers found that there were twenty-two different erosion studies in one catchment area in Sri Lanka, but the figures on how much erosion was going on varied by as much as 8,000-fold. The lowest had been collected by a research institute wanting to show how safe their land management was. The highest came from a Third World development agency showing how much soil erosion was damaging the environment. The scary part is that all the figures were probably correct, but the one thing they failed to provide was objective information. For that you need interpretation, quality, imagination.

‘In power and influence, counting counts,’ he wrote. ‘Quantification brings credibility. But figures and tables can deceive, and numbers construct their own realities. What can be measured and manipulated statistically is then not only seen as real; it comes to be seen as the only or the whole reality.’ Then he ended up with a neat little verse that summed it all up:

Economists have come to feel
What can’t be measured, isn’t real.
The truth is always an amount
Count numbers, only numbers count.

But the distinctions really get blurred when politicians start using numbers. Waiting lists up 40,000, Labour’s £1000 tax bombshell, fertility down to 1.7, 22 Tory tax rises – elections are increasingly a clash between competing statistics. It’s the same all over the world. Figures have a kind of spurious objectivity, and politicians wield them like weapons, swinging them about their heads as they ride into battle. They want to show they have a grasp of the details, and there is something apparently hard-nosed about quoting figures. It sounds tough and unanswerable.

But most of the time, the figures also sound meaningless. The public don’t take them in, and they simply serve as a kind of aggressive decoration to the argument. But, as politicians and pressure groups know very well, a shocking figure can every so often grasp the public’s imagination. In the UK, the best-known election policy for the 1992 general election – repeated in the 1997 election – was the Liberal Democrat pledge to put 1p on income tax for education. It sounded clear and costed, but it was the perfect example of a number being used for symbolic effect. It implied real commitment and risk, even though the 1p itself meant almost nothing. ‘If relying on numbers didn’t work,’ said Andrew Dilnot of the Institute for Fiscal Studies in a recent BBC programme, ‘then in the end a whole range of successful number-free politicians would appear.’

They haven’t appeared yet. The problem for politicians is that they have to use figures to raise public consciousness, but find that the public doesn’t trust them – and the resulting cacophony of figures tends to drown out the few that are important. The disputes of political debate have to be measurable, but they get hung up on measurements that only vaguely relate to the real world.

Take rising prices. You can’t see them or smell them, so you need some kind of index to give you a handle on what is a real phenomenon. You can’t hold them still while you get out your ruler, yet the ersatz inflation figures have assumed a tremendous political importance. We think inflation is an objective measure of rising prices, when actually it is a measurement based on an arbitrary basket of goods which has changed from generation to generation. In the 1940s, it included the current price of wireless sets, bicycles and custard powder. In the 1950s, rabbits and candles were dropped in favour of brown bread and washing machines. The 1970s added yoghurt and duvets, the 1980s added oven-ready meals and videotapes, and the 1990s microwave ovens and camcorders. It’s a fascinating measure of our changing society, but it isn’t an objective way of measuring rising prices over a long period of time.
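
To make the point concrete, here is a minimal sketch – in Python, with invented goods, prices and quantities – of how a fixed-basket price index of this kind works. Nothing in it corresponds to any official index; it simply shows that the same set of price changes yields a different ‘inflation’ figure depending on which basket you happen to choose.

```python
# Minimal sketch of a fixed-basket (Laspeyres-style) price index.
# All goods, prices and basket quantities below are invented for illustration only.

def price_index(basket, base_prices, current_prices):
    """Return the index (base period = 100) for a fixed basket of goods."""
    base_cost = sum(qty * base_prices[item] for item, qty in basket.items())
    current_cost = sum(qty * current_prices[item] for item, qty in basket.items())
    return 100 * current_cost / base_cost

base_prices = {"bread": 1.00, "bicycle": 80.0, "yoghurt": 0.50, "camcorder": 300.0}
new_prices  = {"bread": 1.10, "bicycle": 90.0, "yoghurt": 0.45, "camcorder": 250.0}

basket_old = {"bread": 20, "bicycle": 0.1}                      # no yoghurt, no camcorders
basket_new = {"bread": 10, "yoghurt": 30, "camcorder": 0.05}    # a later generation's basket

print(price_index(basket_old, base_prices, new_prices))  # one 'inflation' figure (~110.7)
print(price_index(basket_new, base_prices, new_prices))  # a different one (~92.5), same prices
```

Same price changes, two different answers: the number depends entirely on the basket, which is the author’s point.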

II

‘Oh the sad condition of mankind,’ moaned the great Belgian pioneer of statistics, Adolphe Quetelet. ‘We can say in advance how many individuals will sully their hands with the blood of their neighbours, how many of them will commit forgeries, and how many will turn poisoners with almost the same precision as we can predict the number of births and deaths. Society contains within it the germ of all the crimes that will be committed.’

It’s a frightening thought, just as it was frightening for Quetelet’s contemporaries to hear him say it in the 1830s. But he and his contemporaries had been astonished by how regular the suicide statistics were. Year after year, you seemed to be able to predict how many there would be. There were occasional bumper years – 1846, 1929 and other periods of economic crash – but generally speaking the pattern held. People didn’t seem to be able to help themselves. In any given population, much the same number would take it into their heads to murder each year, just as much the same number would decide to get married. Statistics were powerful.
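
The regularity that so struck Quetelet is easy to reproduce. The toy simulation below uses an invented population size and an invented per-person probability – nothing to do with any real statistics – to show how events that are utterly unpredictable for any individual still add up to an almost constant yearly total.

```python
# Toy simulation of Quetelet's observation: individually unpredictable events
# produce a strikingly stable yearly total. Population size and probability
# are hypothetical, chosen purely for illustration.
import random

random.seed(1)
population = 1_000_000
p_event = 0.0001          # hypothetical chance that any one person is affected in a year

for year in range(5):
    total = sum(random.random() < p_event for _ in range(population))
    print(f"year {year + 1}: {total} cases")

# The totals cluster tightly around 100, even though nothing whatever can be
# said in advance about which individuals they will be.
```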

Quetelet was among the most influential of the statisticians trying to solve the confusion of politics by ushering in a nice clean, unambiguous world, urging that we count things like the weather, the flowering of plants and suicide rates in exactly the same way. ‘Statistics should be the dryest of all reading,’ Bentham’s young disciple William Farr wrote to Florence Nightingale, explaining that they could predict with some certainty that, of the children he had registered as having been born in 1841, 9,000 would still be alive in 1921.

To help the process along, Quetelet invented the dryest of all people – the monstrous intellectual creation, l’homme moyen or Average Man. Mr A. Man is seriously boring: he has exactly average physical attributes, an average life, an average propensity to commit crime, and an average rather unwieldy number of children – which used to be expressed as the cliché 2.4. But Average Man only exists in the statistical laboratory, measured at constant room temperature by professional men with clipboards and white coats. The whole business of relying on numbers too much goes horribly wrong simply because Mr Average is the Man Who Never Was – counted by people who know a very great deal about their profession or science but precious little about what they are counting. The Man Who Never Was measured by the Men Who Don’t Exist. It’s the first and most important paradox of the whole business of counting:

Counting paradox 1: You can count people, but you can’t count individuals

Average Man belongs to the Industrial Revolution and the Age of the Masses, but we just don’t believe most of that Marxist stuff any more. It belongs in the twentieth-century world of mass production, where people were transformed into cogs in giant machines, as pioneered by the great American industrialist Henry Ford – the man who offered his customers ‘any colour you like as long as it’s black’. Mass production and Average Man had no space for individuality. Figures reduce people’s complexity, but the truth is complicated.

Now, of course, you can almost have your car tailor-made. You can mass-produce jeans using robots to designs which perfectly match the peculiarities of individual bodies on the other side of the world. The days have gone when clothes issued by the military didn’t fit, when you struggled to keep up with the speed of the production line, with your tasks individually timed for Average Person by the time and motion experts. And we can see more clearly how difficult it is to categorize these widely different individuals who make up the human race. But in the hands of a bureaucratic state, people who don’t conform to the norm get hounded and imprisoned. Or, these days, social workers visit them and remove their children.

And after all that, when you get to know Mr Average, you find he has a bizarre taste in underwear, he has extraordinary dreams about flying through galaxies, and a hidden collection of Abba records. He wasn’t average at all. Counting him in with other people ignores the real picture.
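
The arithmetic behind the joke is worth spelling out. A quick illustration with made-up family sizes shows how an average of 2.4 (or here 2.5) children can be a perfectly accurate summary of a group while describing no family in it.

```python
# Made-up family sizes: the mean is a useful summary of the group,
# but it may describe no individual family at all.
children_per_family = [0, 1, 2, 2, 3, 3, 4, 5]

mean = sum(children_per_family) / len(children_per_family)
print(f"average children per family: {mean:.1f}")   # 2.5
print(mean in children_per_family)                   # False: no family has 2.5 children
```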

Counting paradox 2: If you count the wrong thing, you go backwards

Because it is so hard to measure what is really important, governments and institutions pin down something else. They have to. But the consequences of pinning down the wrong thing are severe: all your resources will be focussed on achieving something you didn’t mean to.

Take school league tables, for example. When the Thatcher government latched onto the idea of forcing schools to compete with each other by measuring the progress of children at three comparable moments of their lives, they were intending to raise standards. They have probably done so, in a narrow way. The trouble was that schools concentrated on the test results in order to improve their position in the tables – a position that was anyway pretty meaningless. That meant excluding pupils who might drag down the results, and concentrating on the D-grade pupils – the only ones who could make a difference in the exam league tables – to the detriment of the others. It meant concentrating on subject areas the school could compete in, never mind whether they were the subjects the children needed. And worst of all, it meant squeezing the curriculum to produce children who can read and write but are, according to the National Association of Head Teachers’ general secretary David Hart, ‘unfit philistines’.

Then there was the business of using hospital waiting lists as a way of measuring the success of the health service. Tony Blair’s new government made an ‘interim promise’ – which then hung around its neck like an albatross – to reduce waiting lists by 100,000. They did push the lists down, though painfully slowly. But the result was the emergence of new, secret waiting lists for people just to get in to see the hospital consultant, before they were even allowed near a real waiting list. Quick, simple operations were also speeded up to get the numbers down, at the expense of the difficult ones. And when the hospital league tables of deaths came out in 1999, consultants warned that they would make administrators shy of taking on difficult, complicated cases.

Governments and pressure groups latch onto the wrong solutions and then busily measure progress towards them. They thought that shifting to diesel fuel for cars would clean up polluted air and measured progress towards achieving it. Result: air full of carcinogenic particulate matter. They thought more homework for primary school children was the solution to underachievement and measured progress towards it. Result: miserable dysfunctional kids.

It has all the makings of a fairy tale. If you choose the wrong measure, you sometimes get the opposite of what you wanted. And any measure has to be a generalization that can’t do justice to the individuals that are included.

Counting paradox 3: Numbers replace trust, but make measuring even more untrustworthy

When farmers and merchants didn’t trust each other to provide the right amount of wheat, they used the standard local barrel stuck to the wall of the town hall, which would measure the agreed local bushel. When we don’t trust our corporations, politicians or professionals now, we send in the auditors – and we break down people’s jobs into measurable units so that we can see what they are doing and check it. If doctors hide behind their professional masks, then we measure the number of deaths per number of patients, their treatment record and their success rate, and we hold them accountable. When politicians look out of control, we measure their voting records and their popularity ratings – just as the TV commentators break down a sporting performance into opportunities, misses, aces, broken services and much else besides.

It wasn’t always like that. Previous generations realized that we lose some information every time we do this – information the numbers can’t provide. They realized, like James Anyon, that we could never measure what a doctor does so well that we could do it for them. Professionals still have experience that slips through the measurement, so we still have to accept their word to get to the truth. The British establishment used to be quite happy to accept the word of the professionals if they were ‘trustworthy gentlemen of good character’. But from the outside, that trust looked like a cosy nepotistic conspiracy. And probably it was.

It was this kind of political problem which led to the growth of cost-benefit analysis. This was originally used by French officials to work out what tolls to charge for new bridges or railways, but it was taken up with a vengeance in twentieth-century America as a way of deciding which flood control measures to build. After their Flood Control Act of 1936, there would be no more federal money for expensive flood control measures unless the benefits outweighed the costs. Only then would the public be able to see clearly that there was no favouritism for some farmers rather than others. It was all going to be clear, objective, nonpolitical and based on counting.
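
In spirit, the test the 1936 Act imposed is simple enough to sketch: estimate each year’s benefits, discount them back to the present, and ask whether the total exceeds the cost. The sketch below uses entirely hypothetical figures and a hypothetical discount rate – the hard part, as the engineers discovered, is producing the estimates in the first place.

```python
# Minimal cost-benefit sketch in the spirit of the 1936 test: a scheme passes
# only if discounted benefits exceed costs. All figures are hypothetical; the
# real difficulty lies in producing them (property values, grain saved, even
# the grasshoppers eaten by seagulls).
def present_value(yearly_benefits, discount_rate):
    """Discount a stream of yearly benefits back to the present."""
    return sum(b / (1 + discount_rate) ** t for t, b in enumerate(yearly_benefits, start=1))

construction_cost = 1_000_000
estimated_benefits = [120_000] * 20     # someone's estimate of flood damage avoided, per year
benefits_pv = present_value(estimated_benefits, discount_rate=0.05)

print(f"present value of benefits: {benefits_pv:,.0f}")
print("passes cost-benefit test" if benefits_pv > construction_cost else "fails cost-benefit test")
```

Every input – the cost, the yearly benefit, the discount rate, the time horizon – is an estimate, which is exactly where the arguments in the next paragraphs begin.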

Even so, the professionals clung to their mystery as long as they could, just as doctors fought the idea of scientific instruments that would make the measurements public and might lead people to question their diagnoses – the stethoscope was acceptable only because the doctor alone could hear what it revealed. Even the US Army Corps of Engineers – in charge of the flood control analysis – tried to keep the mystery alive. ‘It is calculated according to rather a complex formula,’ a Corps official told a Senate committee in 1954. ‘I won’t worry you with the details of that formula.’ It couldn’t last. The more they were faced with angry questioning, the more public their calculations had to become.

But how far do you go? Do you, as they did for some flood control schemes, work out how many seagulls would live in the new reservoir, how many grasshoppers those seagulls would eat, and what the grain the grasshoppers would otherwise have eaten was worth? Do you work out what these values might be in future years? Do you value property when no two estate agents will name the same price? ‘I would not say it was a guess,’ one of their officials told the US Senate about property values. ‘It is an estimate.’ And after all that, it was economists who persuaded the US Secretary of the Interior, Bruce Babbitt, to start demolishing the dams whose benefits their predecessors had so laboriously calculated.

So here’s the paradox. Numbers are democratic. We use them to peer into the mysterious worlds of professionals, to take back some kind of control. They are the tools of opposition to arrogant rulers. Yet in another sense they are not democratic at all. Politicians like to pretend that numbers take the decisions out of their hands. ‘Listen to the scientists,’ they say about BSE or genetically-modified food. ‘It’s not us taking the decisions, it’s the facts.’ We are submitting these delicate problems to the men in white coats, who will apply general rules to individual peculiarities. It is, in other words, a shift from one kind of professional to another, in the name of democracy – from teachers and doctors to accountants, auditors and number-crunchers. And they have their own secretive rituals that exclude outsiders like a computer instruction manual.

But when it comes to auditing the rest of us in our ordinary jobs, auditing undermines as much trust as it creates – because people have to defend themselves against the auditors. Their lives – usually their working lives – are at stake, and their managers will wonder later why the figures they spent so much to collect are so bizarrely inaccurate. And as we all trust the companies and institutions less, we trust the auditors less too.

Counting paradox 4: When numbers fail, we get more numbers

Because counting and measuring are seen as the antidote to distrust, any auditing failure must need more auditing. That’s what society demanded the moment the Bank of Credit and Commerce International collapsed and Robert Maxwell fell off his yacht into the Atlantic near the Canary Islands. Nobody ever blames the system – they just blame the auditors. Had they been too friendly with the fraudsters? Had they taken their eyes off the ball? Send in the auditors to audit the auditors.

If the targets fail, you get more targets. Take the example of a large manufacturer that centralizes its customer care in one Europe-wide call centre. After a while, it finds that customers are not getting the kind of care they were used to before. What does the company do, given that it can’t measure what it really needs to – the humanity and helpfulness of its service to customers? It sets more targets: speed of answering the telephone, number of calls per operator per day. It measures its achievements against these targets and wonders why customers don’t get any happier. ‘People do what you count, not necessarily what counts,’ said the business psychologist John Seddon.

