From a new paper by Greg Laden and Richard Wrangham:
We propose that a key change in the evolution of hominids from the last common ancestor shared with chimpanzees was the substitution of plant underground storage organs (USOs) for herbaceous vegetation as fallback foods. Four kinds of evidence support this hypothesis: (1) dental and masticatory adaptations of hominids in comparison with the African apes; (2) changes in australopith dentition in the fossil record; (3) paleoecological evidence for the expansion of USO-rich habitats in the late Miocene; and (4) the co-occurrence of hominid fossils with root-eating rodents. We suggest that some of the patterning in the early hominid fossil record, such as the existence of gracile and robust australopiths, may be understood in reference to this adaptive shift in the use of fallback foods. Our hypothesis implicates fallback foods as a critical limiting factor with far-reaching evolutionary effects. This complements the more common focus on adaptations to preferred foods, such as fruit and meat, in hominid evolution.
Tubers are not the only kinds of USOs; there are also corms, bulbs, and rhizomes. I tend to use "tuber" as an easier-to-type version of USO, though. I was practically dared to review the paper here (nota bene: I do respond to dares, albeit more carefully and slowly than for most things), and Carl Zimmer has also written a short item on the idea. The mole rats are the lede, but there is much more to it than them, and in many respects they are the least problematic part.
So here is my semi-rambling take.
In 1999, Wrangham and Laden, along with David Pilbeam, James Holland Jones, and NancyLou Conklin-Brittain, suggested that tuber cooking was central to the adaptation of early Homo. The evidence for that suggestion was and remains essentially absent. As Henry Bunn put it in his comment to the paper:
Why is there abundant evidence of hunting and some form of scavenging, carcass transport, butchery, and sharing and consumption of meat and fat in the behavioral and dietary adaptations of early Pleistocene Homo (e.g., Oliver, Sikes, and Stewart 1994 and references therein)? Why are the earliest stone tool kits of the Oldowan dominated by sharp-edged cutting tools? Why is there intensive meat polish on the edges of stone flake knives studied for microwear (Keeley and Toth 1981)? Why is there not microwear evidence of grit or sediment damage on the teeth of supposedly tuber-feeding hominids themselves, including the robust australopithecines (Kay and Grine 1988)? (Bunn 1999:580)
Additionally, there is the complete lack of evidence for cooking and the weakness of the evidence for early control of fire, compared with the strong and substantial evidence for both much later in the Pleistocene.
So early Homo just doesn't show any signs of having been a serious tuber-eater. Not to say it is impossible; just that there isn't any particular evidence for the idea.
Now, Australopithecus, that's another story. Robust australopithecine teeth in particular have a lot of pits and scratches on them, as if they were eating some hard, gritty foods. Underground storage organs fit that bill. Eating a lot of dirt along with them might well explain the high rate of dental wear that robust australopithecines clearly had -- many had their first molars worn almost completely flat before the third molars came into occlusion.
In this context the fallback food idea seems like an especially good one. The tooth anatomy and microwear evidence indicate that robust and nonrobust australopithecines probably did not differ in most of their dietary spectra, but instead in the accentuation of different food sources that were shared by both. If food shortages were important in the evolution of these hominids, one way that the difference between them might have been sustained was an ecological difference in fallback food utilization. Hominids like A. afarensis and A. africanus undeniably had teeth adapted to heavy grinding, fracturing of brittle foods, and intensive attrition compared to any other living or fossil primate. So it makes no sense to propose that the difference between these "gracile" australopithecines and later robust australopithecines was that the "gracile" ones lacked the high-chewing element. Rather, it makes considerably more sense to suppose that both kinds of hominids were eating the high-chewing foods, with the robust ones making more intensive use of them, and possibly lacking some of the tough pliable foods eaten by earlier nonrobust species. A difference in fallback strategies would predict exactly this kind of dietary pattern.
To me, the coolest thing about the hypothesis is that it explains the postcanine adaptations of australopithecines without reference to the now-well-known carbon isotope data. Indeed, the question of C4 versus C3 foods is entirely irrelevant. I discussed the carbon and other stable isotope data in an earlier post; the short story is that all kinds of australopithecines appear to have included around a 25 to 30 percent component of C4 foods, which include grasses, some sedges, and the animals who ate them.
Peters and Vogel (2005) proposed that the C4 component of the early hominid diet could be explained as a sum of several different plant and animal sources, including around 5 percent each of seeds, roots and pith, insects, small mammals and vertebrates, and large mammal meat. That does a good job of describing a diversified hominid diet without reference to tubers.
But the thing about USOs is that relatively few of them are C4 plants. In other words, even if hominids did eat tubers, the tubers still wouldn't account for the C4 fraction of the overall diet.
However, they might account for the postcanine dental adaptations of later hominids, under the assumption that they represent a substantial part of the C3 fraction. And the replacement of C3 fruits by C3 tubers would explain why robust and nonrobust hominids both have approximately the same C4 fraction, while differing so greatly in their dental adaptations and dental microwear.
As far as I can tell, nobody has mentioned this implication, but it should be the next thing to test.
But although I think Laden and Wrangham's study has some interesting possibilities, I think the data is a bit short of where it needs to be. What about the four lines of evidence used by Laden and Wrangham? Are they to be believed?
The first thing to point out is that a reading of the paper finds little detail to go along with two of the lines of evidence. It is true that australopithecine teeth are not like ape teeth, and that robust australopithecines were different from nonrobust ones. The innovative suggestion here, although brief, is that an enlarged oral cavity in australopithecines, particularly robust ones, may be an adaptation to increase the exposure of masticated tuber to salivary digestion.
But the dental discussion appears less as two independent lines of evidence converging to one conclusion, and more as throwing up whatever seems relevant to see what will stick. A review of early hominid dental evidence also reveals plenty that is less consistent with the hypothesis that USOs were an important food for most early hominids.
For one, the comparative dental evidence is questionable. As Laden and Wrangham review the issue, Hatley and Kappelman originated the argument that the early hominid dentition was adapted to tuber-eating:
In 1980, Hatley and Kappelman pointed out parallels in dental morphology that suggested that bears, pigs, and hominids are all adapted to eating significant amounts of plant underground storage organs (USOs). They summarized their argument as follows: "We believe that postcanine similarities evident among ursids, suids, and hominids are in part an adaptation for processing this tough, fibrous, and gritty plant part. Bears, pigs, and humans are adapted to exploiting plant roots and tubers, although their methods of food gathering are functionally rather than morphologically analogous. Convergence upon the resource of belowground plant storage parts appears to make the responses of nonretractable claws, cartilaginous snout, and digging stick equivalent" (Hatley and Kappelman 1980:380, quoted in Laden and Wrangham 2005:1).
This isn't obviously true. For one thing, Pliocene pigs appear to have been mainly grazers (Harris and Cerling 2002 -- not cited by Laden and Wrangham 2005). They increased in molar size and complexity in several different lineages, as a reflection of their increased reliance on C4 vegetation. The diet of present-day suids in particular seems to have little in common with that of early hominids, at least as far as their stable isotope ratios are concerned. Nor are large and flat early hominid molars particularly analogous to those of most bears -- perhaps the closest are pandas, which are far from dedicated tuber-eaters.
Then there is the problem with the earliest hominids. These, like the later ones, are found alongside mole rats, at sites like Aramis and Lukeino. But they don't have the postcanine adaptations of later hominids. The essential problem with the earliest hominids is not postcanine specialization, but instead the changing role of the canine-premolar complex, and the reduction of the canines. There is no reason (at least that I can think of) to suppose that small canines are adaptive to tuber-eating (and a search of the paper finds no occurrences of the word "canine").
One way to avoid this problem is to suppose that the USO-eating adaptation was simply a feature of later hominids -- say, A. anamensis and later. Perhaps it's true, but if so, the hypothesis loses some of its punch, and possibly one of the converging lines of evidence, since the expansion of USO-rich savanna central to Laden and Wrangham's paper starts in the Miocene.
And the paper would prefer to displace the importance of tubers earlier rather than later in time:
There is growing evidence that middle to late Miocene hominoids, mainly in Europe, exploited relatively open habitats, and may have exhibited dietary adaptations (Teaford and Ungar, 2000, Smith et al., 2003 and Smith et al., 2004) that we claim here to be related to USO consumption. This lends support to our assertions that a USO niche may have emerged during the Miocene, that this niche may have been important for non-fossorial mammals, and that certain features, such as thick enamel and large teeth, can arise in response to this niche. However, we do not wish to make claims beyond the hominid taxon at this time, other than to note that this may be a fertile area of future research (Laden and Wrangham 2005:13).
If you are a student looking for a thesis topic, don't pick this one.
The most original suggestion is that hominid and mole rat remains are significantly coassociated. On the surface, this looks like fairly convincing evidence that the hominids lived in USO-rich environments, which is precisely what Laden and Wrangham conclude. And indeed, the number of sites either possessing both kinds of animals or lacking both (27) is higher than expected considering the small number that have one kind but lack the other (11).
But wait a minute. Neither "mole rats" nor "hominids" are species, they are groups composed of several species. Let's consider the same kind of comparison for other kinds of animals. How many hominid sites lack bovids? Or suids? Or crocodilians? Keep in mind that some groups are rare at early hominid sites because they hadn't diversified yet, like papionins, or hadn't yet appeared in Africa, like equids. But these groups are found at many later hominid sites. And of course, for many sites the total species list may reflect less intensity of sampling rather than the paleohabitat.
In other words, the mole rats may show that hominids had the opportunity to eat USOs -- at least, if they could compete effectively with the mole rats for them. But they don't show that the hominids actually ate USOs. At least not if we aren't equally willing to believe that the presence of crocodiles at hominid sites meant that hominids swam in rivers and ate migrating wildebeest.
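To make the co-occurrence argument concrete: the usual way to evaluate this kind of claim is a 2x2 contingency test on sites. Below is a minimal Python sketch of a one-sided Fisher exact test (built from the hypergeometric tail, using only the standard library). The cell counts are hypothetical -- the post gives only the totals of 27 concordant and 11 discordant sites, not the full table -- and I am not claiming this is the test Laden and Wrangham actually ran:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test on a 2x2 site table:
    a = both present, b = hominids only, c = mole rats only, d = both absent.
    Returns P(co-occurrence count >= a) with the margins held fixed."""
    n = a + b + c + d
    hominid_sites = a + b
    molerat_sites = a + c
    p = 0.0
    for k in range(a, min(hominid_sites, molerat_sites) + 1):
        # hypergeometric probability of exactly k co-occurrences
        p += (comb(hominid_sites, k)
              * comb(n - hominid_sites, molerat_sites - k)
              / comb(n, molerat_sites))
    return p

# Hypothetical split of the 27 concordant and 11 discordant sites
# (the actual cell counts are not given in the post):
print(round(fisher_one_sided(a=20, b=6, c=5, d=7), 4))
```

As the critique above suggests, a low p-value here would only show non-random association between the two groups of species, not that hominids ate what the mole rats ate.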
The weaknesses NOT mentioned
I see two significant weaknesses in the hypothesis. The first is the simpler of the two: digging up tubers is a lot of work.
For groups like the Hadza who eat a lot of them, this work takes many hours (at least for some group members). That kind of work seems unlikely for australopithecines, even hungry ones. Especially considering the full scenario: australopithecines digging intensively for savanna tubers for hours at a stretch would have been highly exposed to predation and heat stress.
Might they have done it if they had nothing else to eat? Sure. But could they have done so efficiently enough to get a net return on their effort? There's a question worth answering.
Might they have banded together into large defensive groups? Maybe, but that would seem likely to decrease foraging efficiency -- how many tubers are there in any small patch of ground? However, there is slight evidence for large multimale groups (chiefly AL 333), as well as pretty good evidence that predation was high and survivorship into adulthood low. Another question worth answering.
There may be a solution to this problem: perhaps the plants themselves have evolved under intensive hominid predation. Maybe today they put their roots further underground, or maybe the plants with tougher and more fibrous roots have predominated since the Pliocene. If so, australopithecines might have had an easier time digging up tubers than modern foragers do.
The other problem is more vexing. How can we demonstrate that an extinct species was adapted to eat a food that it did not eat very often? Bone chemistry must predominantly reflect the foods that make up the majority of the diet, not those that are consumed only intermittently. Microwear also ought to reflect the majority foodstuffs, although perhaps more weakly -- especially if mortality occurs mostly during periods of dietary stress, when animals are eating more of their fallback foods than usual. This is perhaps worth looking into.
Maybe the most promising test would be variability in tooth wear. Presumably the need to rely on fallback foods would vary in accordance with climatic conditions, on a multigenerational timescale. If so, then some individuals might exhibit relatively great amounts of attrition due to their reliance on fallback foods during long periods of resource stress, while other individuals might have lived in times of relative abundance, and therefore not have experienced significant amounts of wear. This kind of heterogeneity would itself have created differences in selection on tooth size, enamel thickness, and occlusal anatomy over time: perhaps in ways that could be differentiated from alternative strategies. But even so, that kind of comparison is relatively far from the direct evidence, and may be impossible with the fossil record we have available.
Looking back at the post, I've written a balance of critical comments and supportive ones. I guess my opinion overall is that the USO hypothesis is certainly worth presenting, but it has a ways to go before it is really testable. I think there is a balance of good ideas here and evidentiary weaknesses, and it is certainly worth talking about them, perhaps with a bit more skepticism and documentation than has yet been done.
And if you are serious about tubers, as Wrangham clearly has shown himself to be, then you are going to have to choose a time when they were important. With this paper, I have now read that tubers were the key adaptation for Miocene apes, the earliest hominids, australopithecines, robust australopithecines, early Homo, and recent humans.
It can't be all of these. If it were, they would all look the same. And there wouldn't have been any reason for one to change into anything else! So you have to pick.
And making a choice means more than saying, "well, Miocene apes tasted tubers, early hominids needed them when the fruit ran out, for australopithecines they were a fallback food, robust australopithecines ate them all the time, early Homo cooked them, and recent humans pickled them with vinegar and caraway seeds." As yet, the many tuber hypotheses have been just-so-storytelling at its most self-contradictory.
If I were picking, I would put the best odds on Laden and Wrangham's current argument: USOs were important fallback foods for nonrobust australopithecines like A. afarensis and A. africanus, and equally or more important for robust australopithecines. In contrast, early Homo was adapted to meat eating, and the earliest hominids -- who lack the postcanine specializations of later hominids -- remain as yet a mystery, although a fundamentally apelike diet is a good first guess.
This post doesn't account for all the details of early hominid diets, but some previous posts review other sources of evidence, including:
Stable isotope analyses
Hatley T, Kappelman J. 1980. Bears, pigs, and Plio-Pleistocene hominids: a case for the exploitation of belowground food resources. Hum Ecol 8:371-387.
Laden G, Wrangham R. 2005. The rise of the hominids as an adaptive shift in fallback foods: plant underground storage organs (USOs) and australopith origins. J Hum Evol, in press (online).
Wrangham RW, Jones JH, Laden G, Pilbeam D, Conklin-Brittain N. 1999. The raw and the stolen: cooking and the ecology of human origins. Curr Anthropol 40:567-594.