The Evolution of Everything: How Small Changes Transform Our World

Year written: 2019

Then in the Industrial (R)evolution an almost inexhaustible supply of energy was harnessed in the form of coal. Coal miners, unlike peasant farmers, produced vastly more energy than they consumed. And the more they dug out, the better they got at it. With the first steam engines, the barrier between heat and work was breached, so that coal’s energy could now amplify the work of people. Suddenly, just as the eukaryotic (r)evolution vastly increased the amount of energy per gene, so the Industrial (R)evolution vastly increased the amount of energy per worker. And that surplus energy, so the energy economist John Constable argues, is what built (and still builds) the houses, machines, software and gadgets – the capital – with which we enrich our lives. Surplus energy is indispensable to modern society, and is the symptom of wealth. An American consumes about ten times as much energy as a Nigerian, which is the same as saying he is ten times richer. ‘With coal almost any feat is possible or easy,’ wrote William Stanley Jevons; ‘without it we are thrown back into the laborious poverty of early times.’ Both the evolution of surplus energy generation by eukaryotes, and the evolution of surplus energy by industrialisation, were emergent, unplanned phenomena.

But I digress. Back to genomes. A genome is a digital computer program of immense complexity. The slightest mistake would alter the pattern, dose or sequence of expression of its 20,000 genes (in human beings), or affect the interaction of its hundreds of thousands of control sequences that switch genes on and off, and result in disastrous deformity or a collapse into illness. In most of us, for an incredible eight or nine decades, the computer program runs smoothly with barely a glitch.

Consider what must happen every second in your body to keep the show on the road. You have maybe ten trillion cells, not counting the bacteria that make up a large part of your body. Each of those cells is at any one time transcribing several thousand genes, a procedure that involves several hundred proteins coming together in a specific way and catalysing tens of chemical reactions for each of millions of base pairs. Each of those transcripts generates a protein molecule, thousands of amino acids long, which it does by entering a ribosome, a machine with tens of moving parts, capable of catalysing a flurry of chemical reactions. The proteins themselves then fan out within and without cells to speed reactions, transport goods, transmit signals and prop up structures. Millions of trillions of these immensely complicated events are occurring every second in your body to keep you alive, very few of which go wrong. It’s like the world economy in miniature, only even more complex.

It is hard to shake the illusion that for such a computer to run such a program, there must be a programmer. Geneticists in the early days of the Human Genome Project would talk of ‘master genes’ that commanded subordinate sequences. Yet no such master gene exists, let alone an intelligent programmer. The entire thing not only emerged piece by piece through evolution, but runs in a democratic fashion too. Each gene plays its little role; no gene comprehends the whole plan. Yet from this multitude of precise interactions results a spontaneous design of unmatched complexity and order. There was never a better illustration of the validity of the Enlightenment dream – that order can emerge where nobody is in charge. The genome, now sequenced, stands as emphatic evidence that there can be order and complexity without any management.

On whose behalf?

Let’s assume for the sake of argument that I have persuaded you that evolution is not directed from above, but is a self-organising process that produces what Daniel Dennett calls ‘free-floating rationales’ for things. That is to say, for example, a baby cuckoo pushes the eggs of its host from the nest in order that it can monopolise its foster parents’ efforts to feed it, but nowhere has that rationale ever existed as a thought either in the mind of the cuckoo or in the mind of a cuckoo’s designer. It exists now in your mind and mine, but only after the fact. Bodies and behaviours teem with apparently purposeful function that was never foreseen or planned. You will surely agree that this model can apply within the human genome, too; your blood-clotting genes are there to make blood-clotting proteins, the better to clot blood at a wound; but that functional design does not imply an intelligent designer who foresaw the need for blood clotting.

I’m now going to tell you that you have not gone far enough. God is not the only skyhook. Even the most atheistic scientist, confronted with facts about the genome, is tempted into command-and-control thinking. Here’s one, right away: the idea that genes are recipes patiently waiting to be used by the cook that is the body. The collective needs of the whole organism are what the genes are there to serve, and they are willing slaves. You find this assumption behind almost any description of genetics – including ones by me – yet it is misleading. For it is just as truthful to turn the image upside down. The body is the plaything and battleground of genes at least as much as it is their purpose. Whenever somebody asks what a certain gene is for, they automatically assume that the question relates to the needs of the body: what is it for, in terms of the body’s needs? But there are plenty of times when the answer to that question is ‘The gene itself.’

The scientist who first saw this is Richard Dawkins. Long before he became well known for his atheism, Dawkins was famous for the ideas set out in his book The Selfish Gene. ‘We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes,’ he wrote. ‘This is a truth that still fills me with astonishment.’ He was saying that the only way to understand organisms was to see them as mortal and temporary vehicles used to perpetuate effectively immortal digital sequences written in DNA. A male deer risks its life in a battle with another stag, or a female deer exhausts her reserves of calcium producing milk for her young, not to help its own body’s survival but to pass the genes to the next generation. Far from preaching selfish behaviour, therefore, this theory explains why we are often altruistic: it is the selfishness of the genes that enables individuals to be selfless. A bee suicidally stinging an animal that threatens the hive is dying for its country (or hive) so that its genes may survive – only in this case the genes are passed on indirectly, through the stinger’s mother, the queen. It makes more sense to see the body as serving the needs of the genes than vice versa. Bottom–up.

One paragraph of Dawkins’s book, little noticed at the time, deserves special attention. It has proved to be the founding text of an extremely important theory. He wrote:

Sex is not the only apparent paradox that becomes less puzzling the moment we learn to think in selfish gene terms. For instance, it appears that the amount of DNA in organisms is more than is strictly necessary for building them: a large fraction of the DNA is never translated into protein. From the point of view of the individual this seems paradoxical. If the ‘purpose’ of DNA is to supervise the building of bodies it is surprising to find a large quantity of DNA which does no such thing. Biologists are racking their brains trying to think what useful task this apparently surplus DNA is doing. From the point of view of the selfish genes themselves, there is no paradox. The true ‘purpose’ of DNA is to survive, no more and no less. The simplest way to explain the surplus DNA is to suppose that it is a parasite, or at best a harmless but useless passenger, hitching a ride in the survival machines created by the other DNA.

One of the people who read that paragraph and began thinking about it was Leslie Orgel, a chemist at the Salk Institute in California. He mentioned it to Francis Crick, who mentioned it in an article about the new and surprising discovery of ‘split genes’ – the fact that most animal and plant genes contain long sequences of DNA called ‘introns’ that are discarded after transcription. Crick and Orgel then wrote a paper expanding on Dawkins’s selfish DNA explanation for all the extra DNA. So, at the same time, did the Canadian molecular biologists Ford Doolittle and Carmen Sapienza. ‘Sequences whose only “function” is self-preservation will inevitably arise and be maintained,’ wrote the latter. The two papers were published simultaneously in 1980.

It turns out that Dawkins was right. What would his theory predict? That the spare DNA would have features that made it good at getting itself duplicated and re-inserted into chromosomes. Bingo. The commonest gene in the human genome is the recipe for reverse transcriptase, an enzyme that the human body has little or no need for, and whose main function is usually to help the spread of retroviruses. Yet there are more copies and half-copies of this gene than of all other human genes combined. Why? Because reverse transcriptase is a key part of any DNA sequence that can copy itself and distribute the copies around the genome. It’s a sign of a digital parasite. Most of the copies are inert these days, and some are even put to good use, helping to regulate real genes or bind proteins. But they are there because they are good at being there.
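A toy simulation makes the point concrete: an element that copies itself needs no function for the body in order to come to dominate a genome. The sketch below is illustrative only; the copy probability, counts and generation number are invented, not real genomic parameters.

```python
import random

# Toy model of "selfish DNA": a genome is a list of elements. Ordinary
# genes never copy themselves; transposons duplicate with some
# probability each generation. Nothing here is selected "for the body" --
# the transposons spread simply because copying is what they do.

random.seed(42)

genome = ["gene"] * 95 + ["transposon"] * 5   # start: 5% selfish DNA
COPY_PROB = 0.10                              # chance a transposon copies itself

for generation in range(100):
    new_copies = [e for e in genome
                  if e == "transposon" and random.random() < COPY_PROB]
    genome.extend(new_copies)                 # copies re-insert into the genome

share = genome.count("transposon") / len(genome)
print(f"after 100 generations, transposons are {share:.0%} of the genome")
```

The genes never shrink in number; they are simply swamped, which is roughly the situation the LINE and SINE percentages above describe.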

The skyhook here is a sort of cousin of Locke’s ‘mind-first’ thinking: the assumption that the human good is the only good pursued within our bodies. The alternative view, penetratingly articulated by Dawkins, takes the perspective of the gene itself: how DNA would behave if it could. Close to half of the human genome consists of so-called transposable elements designed to use reverse transcriptase. Some of the commonest are known by names like LINEs (17 per cent of the genome), SINEs (11 per cent) and LTR retrotransposons (8 per cent). Actual genes, by contrast, fill just 2 per cent of the genome. These transposons are sequences that are good at getting themselves copied, and there is no longer a smidgen of doubt that they are (mostly inert) digital parasites. They are not there for the needs of the body at all.

Junk is not the same as garbage

There is a close homology with computer viruses, which did not yet exist when Dawkins suggested the genetic version of the concept of digital parasitism. Some of the transposons, the SINEs, appear to be parasites of parasites, because they use the apparatus of longer, more complete sequences to get themselves disseminated. For all the heroic attempts to see their function in terms of providing variability that might one day lead to a brave new mutation, the truth is that their more immediate and frequent effect is occasionally to disrupt the reading of genes.

Of course, these selfish DNA sequences can thrive only because a small percentage of the genome does something much more constructive – builds a body that grows, learns and adapts sufficiently to its physical and social environment that it can eventually thrive, attract a mate and have babies. At which point the selfish DNA says, ‘Thank you very much, we’ll be making up half the sequence in the children too.’

It is currently impossible to explain the huge proportion of the human genome devoted to these transposons except by reference to the selfish DNA theory. There’s just no other theory that comes close to fitting the facts. Yet it is routinely rejected, vilified and ‘buried’ by commentators on the fringe of genetics. The phrase that really gets their goat is ‘junk DNA’. It’s almost impossible to read an article on the topic without coming across surprisingly passionate denunciations of the ‘discredited’ notion that some of the DNA in a genome is useless. ‘We have long felt that the current disrespectful (in a vernacular sense) terminology of junk DNA and pseudogenes,’ wrote Jürgen Brosius and Stephen Jay Gould in an early salvo in 1992, ‘has been masking the central evolutionary concept that features of no current utility may hold crucial evolutionary importance as recruitable sources of future change.’ Whenever I write about this topic, I am deluged with moralistic denunciations of the ‘arrogance’ of scientists for rejecting unknown functions of DNA sequences. To which I reply: functions for whom? The body or the sequences?

This moral tone to the disapproval of ‘so-called’ junk DNA is common. People seem to be genuinely offended by the phrase. They sound awfully like the defenders of faith confronted with evolution – it’s the bottom–up nature of the story that they dislike. Yet as I shall show, selfish DNA and junk DNA are both about as accurate as metaphors ever can be. And junk is not the same as garbage.

What’s the fuss about? In the 1960s, as I mentioned earlier, molecular biologists began to notice that there seemed to be far more DNA in a cell than was necessary to make all the proteins in the cell. Even with what turned out to be a vastly over-inflated estimate of the number of genes in the human genome – then thought to be more than 100,000, now known to be about 20,000 – genes and their control sequences could account for only a small percentage of the total weight of DNA present in a cell’s chromosomes, at least in mammals. It’s less than 3 per cent in people. Worse, there was emerging evidence that we human beings did not seem to have the heaviest genomes or the most DNA. Humble protozoa, onions and salamanders have far bigger genomes. Grasshoppers have three times as much; lungfish forty times as much. Known by the obscure name of the ‘c-value paradox’, this enigma exercised the minds of some of the most eminent scientists of the day. One of them, Susumu Ohno, coined the term ‘junk DNA’, arguing that much of the DNA might not be under selection – that is to say, might not be being continuously honed by evolution to fit a function of the body.

He was not saying it was garbage. As Sydney Brenner later made plain, people everywhere make the distinction between two kinds of rubbish: ‘garbage’ which has no use and must be disposed of lest it rot and stink, and ‘junk’, which has no immediate use but does no harm and is kept in the attic in case it might one day be put back to use. You put garbage in the rubbish bin; you keep junk in the attic or garage.

Yet the resistance to the idea of junk DNA mounted. As the number of human genes steadily shrank in the 1990s and 2000s, so the desperation to prove that the rest of the genome must have a use (for the organism) grew. The new simplicity of the human genome bothered those who liked to think of the human being as the most complex creature on the planet. Junk DNA was a concept that had to be challenged. The discovery of RNA-coding genes, and of multiple control sequences for adjusting the activity of genes, seemed to offer some straws of hope to grasp. When it became clear that on top of the 5 per cent of the genome that seemed to be specifically protected from change between human beings and related species, another 4 per cent showed some evidence of being under selection, the prestigious journal Science was moved to proclaim ‘no more junk DNA’. What about the other 91 per cent?

In 2012 the anti-junk campaign culminated in a raft of hefty papers from a huge consortium of scientists called ENCODE. These were greeted, as intended, with hype in the media announcing the Death of Junk DNA. By defining non-junk as any DNA that had something biochemical happen to it during normal life, they were able to assert that about 80 per cent of the genome was functional. (And this was in cancer cells, with abnormal patterns of DNA hyperactivity.) That still left 20 per cent with nothing going on. But there are huge problems with this wide definition of ‘function’, because many of the things that happened to the DNA did not imply that the DNA had an actual job to do for the body, merely that it was subject to housekeeping chemical processes. Realising they had gone too far, some of the ENCODE team began to use smaller numbers when interviewed afterwards. One claimed only 20 per cent was functional, before insisting none the less that the term ‘junk DNA’ should be ‘totally expunged from the lexicon’ – which, as Dan Graur of the University of Houston and his colleagues remarked in a splenetic riposte in early 2013, thus invented a new arithmetic according to which 20 per cent is greater than 80 per cent.

If this all seems a bit abstruse, perhaps an analogy will help. The function of the heart, we would surely agree, is to pump blood. That is what natural selection has honed it to do. The heart does other things, such as add to the weight of the body, produce sounds and prevent the pericardium from deflating. Yet to call those the functions of the heart is silly. Likewise, just because junk DNA is sometimes transcribed or altered, that does not mean it has function as far as the body is concerned. In effect, the ENCODE team was arguing that grasshoppers are three times as complex, onions five times and lungfish forty times as complex, as human beings. As the evolutionary biologist Ryan Gregory put it, anyone who thinks he or she can assign a function to every letter in the human genome should be asked why an onion needs a genome that is about five times larger than a person’s.

Who’s resorting to a skyhook here? Not Ohno or Dawkins or Gregory. They are saying the extra DNA just comes about, there not being sufficient selective incentive for the organism to clear out its genomic attic. (Admittedly, the idea of junk in your attic that duplicates itself if you do nothing about it is moderately alarming!) Bacteria, with large populations and brisk competition to grow faster than their rivals, generally do keep their genomes clear of junk. Large organisms do not. Yet there is clearly a yearning that many people have to prefer an explanation that sees the spare DNA as having a purpose for us, not for itself. As Graur puts it, the junk critics have fallen prey to ‘the genomic equivalent of the human propensity to see meaningful patterns in random data’.

Whenever I raised the topic of junk DNA in recent years I was astonished by the vehemence with which I was told by scientists and commentators that I was wrong, that its existence had been disproved. In vain did I point out that on top of the transposons, the genome was littered with ‘pseudogenes’ – rusting hulks of dead genes – not to mention that 96 per cent of the RNA transcribed from genes was discarded before proteins were made from the transcripts (the discards are ‘introns’). Even though some parts of introns and pseudogenes are used in control sequences, it was clear the bulk was just taking up space, its sequence free to change without consequence for the body. Nick Lane argues that even introns are descended from digital parasites, from the period when an archeal cell ingested a bacterium and turned it into the first mitochondrion, only to see its own DNA invaded by selfish DNA sequences from the ingested bacterium: the way introns are spliced out betrays their ancestry as self-splicing introns from bacteria.

Junk DNA reminds us that the genome is built by and for DNA sequences, not by and for the body. The body is an emergent phenomenon consequent upon the competitive survival of DNA sequences, and a means by which the genome perpetuates itself. And though the natural selection that results in evolutionary change is very far from random, the mutations themselves are random. It is a process of blind trial and error.

Red Queen races

Even in the heart of genetics labs there is a long tradition of resistance to the idea that mutation is purely random and comes with no intentionality, even if selection is not random. Theories of directed mutation come and go, and many highly reputable scientists embrace them, though the evidence remains elusive. The molecular biologist Gabby Dover, in his book Dear Mr Darwin, tried to explain the implausible fact that some centipedes have 173 body segments without relying exclusively on natural selection. His argument was basically that it was unlikely that a randomly generated 346-legged centipede survived and bred at the expense of one with slightly fewer legs. He thinks some other explanation is needed for how the centipede got its segments. He finds such an explanation in ‘molecular drive’, an idea that remains frustratingly vague in Dover’s book, but has a strong top–down tinge. In the years since Dover put forward the notion, molecular drive has sunk with little trace, following so many other theories of directed mutation into oblivion. And no wonder: if mutation is directed, then there would have to be a director, and we’re back to the problem of how the director came into existence: who directed the director? Whence came this knowledge of the future that endowed a gene with the capacity to plan a sensible mutation?

In medicine, an understanding of evolution at the genomic level is both the problem and the solution. Bacterial resistance to antibiotics, and chemotherapeutic drug resistance within tumours, are both pure Darwinian evolutionary processes: the emergence of survival mechanisms through selection. The use of antibiotics selects for rare mutations in genes in bacteria that enable them to resist the drugs. The emergence of antibiotic resistance is an evolutionary process, and it can only be combated by an evolutionary process. It is no good expecting somebody to invent the perfect antibiotic, and find some way of using it that does not elicit resistance. We are in an arms race with germs, whether we like it or not. The mantra should always be the Red Queen’s (from Lewis Carroll’s Through the Looking-Glass): ‘Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!’ The search for the next antibiotic must begin long before the last one is ineffective.

That, after all, is how the immune system works. It does not just produce the best antibodies it can find; it sets out to experiment and evolve in real time. Human beings cannot expect to rely upon evolving resistance to parasites quickly enough by the selective death of susceptible people, because our generation times are too long. We have to allow evolution within our bodies within days or hours. And this the immune system is designed to achieve. It contains a system for recombining different forms of proteins to increase their diversity and rapidly multiplying whichever antibody suddenly finds itself in action. Moreover, the genome includes a set of genes whose sole aim seems to be to maintain a huge diversity of forms: the major histocompatibility complex. The job of these 240 or so MHC genes is to present antigens from invading pathogens to the immune system so as to elicit an immune response. They are the most variable genes known, with one – HLA-B – coming in about 1,600 different versions in the human population. There is some evidence that many animals go to some lengths to maintain or enhance the variability further, by, for example, seeking out mates with different MHC genes (detected by smell).

If the battle against microbes is a never-ending, evolutionary arms race, then so is the battle against cancer. A cell that turns cancerous and starts to grow into a tumour, then spreads to other parts of the body, has to evolve by genetic selection as it does so. It has to acquire mutations that encourage it to grow and divide; mutations that ignore the instructions to stop growing or commit suicide; mutations that cause blood vessels to grow into the tumour to supply it with nutrients; and mutations that enable cells to break free and migrate. Few of these mutations will be present in the first cancerous cell, but tumours usually acquire another mutation – one that massively rearranges its genome, thus experimenting on a grand scale, as if unconsciously seeking to find a way by trial and error to acquire these needed mutations.

The whole process looks horribly purposeful, and malign. The tumour is ‘trying’ to grow, ‘trying’ to get a blood supply, ‘trying’ to spread. Yet, of course, the actual explanation is emergent: there is competition for resources and space among the many cells in a tumour, and the one cell that acquires the most helpful mutations will win. It is precisely analogous to evolution in a population of creatures. These days, the cancer cells often need another mutation to thrive: one that will outwit the chemotherapy or radiotherapy to which the cancer is subjected. Somewhere in the body, one of the cancer cells happens to acquire a mutation that defeats the drug. As the rest of the cancer dies away, the descendants of this rogue cell gradually begin to multiply, and the cancer returns. Heartbreakingly, this is what happens all too often in the treatment of cancer: initial success followed by eventual failure. It’s an evolutionary arms race.
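The relapse dynamic described above can be sketched as a toy selection model: a drug kills every sensitive cell, but each division carries a small chance of producing a resistant mutant, and whatever survives regrows. All parameters (mutation rate, generation counts) are invented for illustration.

```python
import random

# Toy sketch of the evolutionary arms race in a tumour: initial success
# followed by relapse, driven by selection rather than foresight.

random.seed(1)

MUTATION_RATE = 1e-3      # chance a daughter cell is drug-resistant (invented)

def grow(cells, generations):
    """Each cell divides once per generation; resistance arises at random."""
    for _ in range(generations):
        daughters = []
        for resistant in cells:
            for _ in range(2):
                daughters.append(resistant or random.random() < MUTATION_RATE)
        cells = daughters
    return cells

tumour = grow([False], generations=14)    # one founder cell -> ~16,000 cells
print("before drug:", len(tumour), "cells,", sum(tumour), "resistant")

survivors = [c for c in tumour if c]      # the drug kills every sensitive cell
relapse = grow(survivors, generations=10) # the resistant clone regrows
print("after drug and regrowth:", len(relapse), "resistant cells")
```

No cell "tries" anything; the rogue clone wins because the drug removed its competitors, which is the emergent explanation the paragraph above gives.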

The more we understand genomics, the more it confirms evolution.

5

The Evolution of Culture

And therefore to assume there was one person gave a name

To everything, and that all learned their first words from the same,

Is stuff and nonsense. Why should one human being from among

The rest be able to designate and name things with his tongue

And others not possess the power to do likewise? …

Lucretius, De Rerum Natura, Book 5, lines 1041–5

The development of an embryo into a body is perhaps the most beautiful of all demonstrations of spontaneous order. Our understanding of how it happens grows ever less instructional. As Richard Dawkins writes in his book The Greatest Show on Earth, ‘The key point is that there is no choreographer and no leader. Order, organisation, structure – these all emerge as by-products of rules which are obeyed locally and many times over.’ There is no overall plan, just cells reacting to local effects. It is as if an entire city emerged from chaos just because people responded to local incentives in the way they set up their homes and businesses. (Oh, hang on – that is how cities emerged too.)

Look at a bird’s nest: beautifully engineered to provide protection and camouflage to a family of chicks, made to a consistent (but unique) design for each species, yet constructed by the simplest of instincts with no overall plan in mind, just a string of innate urges. I had a fine demonstration of this one year when a mistle thrush tried to build a nest on the metal fire escape outside my office. The result was a disaster, because each step of the fire escape looked identical, so the poor bird kept getting confused about which step it was building its nest on. Five different steps had partly built nests on them, the middle two being closest to completion, but neither fully built. The bird then laid two eggs in one half-nest and one in another. Clearly it was confused by the local cues provided by the fire-escape steps. Its nest-building program depended on simple rules, like ‘Put more material in corner of metal step.’ The tidy nest of a thrush emerges from the most basic of instincts.

Or look at a tree. Its trunk manages to grow in width and strength just as fast as is necessary to bear the weight of its branches, which are themselves a brilliant compromise between strength and flexibility; its leaves are a magnificent solution to the problem of capturing sunlight while absorbing carbon dioxide and losing as little water as possible: they are wafer-thin, feather-light, shaped for maximum exposure to the light, with their pores on the shady underside. The whole structure can stand for hundreds or even thousands of years without collapsing, yet can also grow continuously throughout that time – a dream that lies far beyond the capabilities of human engineers. All this is achieved without a plan, let alone a planner. The tree does not even have a brain. Its design and implementation emerge from the decisions of its trillions of single cells. Compared with animals, plants dare not rely on brain-directed behaviour, because they cannot run away from grazers, and if a grazer ate the brain, it would mean death. So plants can withstand almost any loss, and regenerate easily. They are utterly decentralised. It is as if an entire country’s economy emerged from just the local incentives and responses of its people. (Oh, hang on …)

Or take a termite mound in the Australian outback. Tall, buttressed, ventilated and oriented with respect to the sun, it is a perfect system for housing a colony of tiny insects in comfort and gentle warmth – as carefully engineered as any cathedral. Yet there is no engineer. The units in this case are whole termites, rather than cells, but the system is no more centralised than in a tree or an embryo. Each grain of sand or mud that is used to construct the mound is carried to its place by a termite acting under no instruction, and with no plan in (no) mind. The insect is reacting to local signals. It is as if a human language, with all its syntax and grammar, were to emerge spontaneously from the actions of its individual speakers, with nobody laying down the rules. (Oh, hang on …)

That is indeed exactly how languages emerged, in just the same fashion that the language of DNA developed – by evolution. Evolution is not confined to systems that run on DNA. One of the great intellectual breakthroughs of recent decades, led by two evolutionary theorists named Rob Boyd and Pete Richerson, is the realisation that Darwin’s mechanism of selective survival resulting in cumulative complexity applies to human culture in all its aspects too. Our habits and our institutions, from language to cities, are constantly changing, and the mechanism of change turns out to be surprisingly Darwinian: it is gradual, undirected, mutational, inexorable, combinatorial, selective and in some vague sense progressive.

Scientists used to object that evolution could not occur in culture because culture did not come in discrete particles, nor did it replicate faithfully or mutate randomly, like DNA. This turns out not to be true. Darwinian change is inevitable in any system of information transmission so long as there is some lumpiness in the things transmitted, some fidelity of transmission and a degree of randomness, or trial and error, in innovation. To say that culture ‘evolves’ is not metaphorical.
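Those three ingredients, discrete elements, reasonably faithful copying, and random variation under selection, are exactly what Richard Dawkins' famous "weasel" demonstration (from The Blind Watchmaker, not this book) uses to show how fast cumulative selection accumulates order. A minimal version:

```python
import random
import string

# Dawkins' "weasel" demonstration: copy a phrase with a small per-letter
# chance of random change, keep whichever copy matches the target best,
# and repeat. No single step plans ahead, yet order accumulates quickly.

random.seed(0)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.05):
    """Copy the parent with a small per-letter chance of random change."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def fitness(phrase):
    return sum(a == b for a, b in zip(phrase, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    generation += 1
    offspring = [mutate(current) for _ in range(100)]
    current = max(offspring + [current], key=fitness)   # cumulative selection

print(f"reached the target in {generation} generations")
```

Pure chance would essentially never type the phrase; selection that retains each partial match reaches it in a few hundred generations, which is the whole difference between random variation and Darwinian change.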

The evolution of language

There is an almost perfect parallel between the evolution of DNA sequences and the evolution of written and spoken language. Both consist of linear digital codes. Both evolve by selective survival of sequences generated by at least partly random variation. Both are combinatorial systems capable of generating effectively infinite diversity from a small number of discrete elements. Languages mutate, diversify, evolve by descent with modification and merge in a ballet of unplanned beauty. Yet the end result is structure, and rules of grammar and syntax as rigid and formal as you could want. ‘The formation of different languages, and of distinct species, and the proofs that both have been developed through a gradual process, are curiously parallel,’ wrote Charles Darwin in The Descent of Man.

This makes it possible to think of language as a designed and rule-based thing. And for generations, this was the way foreign languages were taught. At school I learned Latin and Greek as if they were cricket or chess: you can do this, but not that, to verbs, nouns and plurals. A bishop can move diagonally, a batsman can run a leg bye, and a verb can take the accusative. Eight years of this rule-based stuff, taught by some of the finest teachers in the land for longer hours each week than any other topic, and I was far from fluent – indeed, I quickly forgot what little I had learned once I was allowed to abandon Latin and Greek. Top–down language teaching just does not work well – it’s like learning to ride a bicycle in theory, without ever getting on one. Yet a child of two learns English, which has just as many rules and regulations as Latin, indeed rather more, without ever being taught. An adolescent picks up a foreign language, conventions and all, by immersion. Having a training in grammar does not (I reckon) help prepare you for learning a new language much, if at all. It’s been staring us in the face for years: the only way to learn a language is bottom–up.