Tuesday, December 24, 2013

Book review: An Appetite for Wonder: The Making of a Scientist

Capping off our semester at Oxford, I thought it appropriate to read Richard Dawkins’ autobiography.

Dawkins is a product, in part, of the University of Oxford, but also of Oxfordshire. A bit of a local icon these days, he spent much of his youth on the family farm just outside of Oxford, went to Oxford for his studies, and soon after became part of the faculty.

An autobiography, I suppose, can serve many purposes for the writer. For Dawkins, it was an exploration of how he became who he is.

Be forewarned, that means a lot of stories from his childhood. And apparently his dissertation.

Still, I can respect the exploration. Even if I have to read a little more quickly through another story about nursery rhymes he once knew or another computer program he once wrote.

Even with (despite?) that, the book travels quickly.

A few interesting points stand out.

He credits an amazing amount of his scientific development to the tutorial system at Oxford. Briefly, for some classes at Oxford, students are paired with a fellow. The fellow tutors a small number of students each term and meets with them individually every week. Each week is devoted to a new topic within a general theme. A student is expected to become an expert on the topic and write a report on it by the end of the week. Dawkins holds an education based on lectures and memorization in some disdain.

Another is his respect for his advisor and the intellectual environment of his graduate work. Of note, for seminars, his advisor “set the tone by interrupting almost before the speaker could complete his first sentence: ‘Ja, ja, but what do you mean by…?’ This wasn’t as irritating as it sounds, because [his advisor's] interventions always aimed at clarification and it was usually necessary.” I sat through a number of talks in Dawkins’ Zoology department at Oxford and, unfortunately, never heard an interruption.

Lastly, life as a fellow in an Oxford college is well described: “The life of a tutorial fellow of an Oxford college is in many ways a charmed one. I got a room in a glowing oolitic limestone medieval building surrounded by famously beautiful gardens; a book allowance, a housing allowance, a research allowance; and free meals (though not free wine, contrary to envious rumours) in the stimulating and entertaining company of leading scholars of every subject except my own.”

How The Selfish Gene came about is also well recorded for posterity, and that is where the autobiography ends.

To be honest, most people’s lives likely would not warrant a biography, auto- or allo-. This one could have used an editor to draw more stories (and opinions) out of him.


Still, empirically, a good companion for a train across France.

Tuesday, December 17, 2013

Greenhouse gas emissions from livestock




A follow-up to FAO’s Livestock’s Long Shadow (LLS), a new document from the FAO revisits the contribution of the livestock sector to global GHG emissions. LLS had stated that 18% of GHG emissions could be attributed to livestock. Here they examine patterns of GHG emissions more carefully.

A couple of interesting points in the document.

First, there is a good summary of how the drivers of our livestock systems have changed:

“Traditionally, livestock was supply driven, converting waste material and other resources of limited alternative use into edible products and other goods and services. Its size was relatively limited and so were the environmental impacts. However, since the livestock sector has become increasingly demand-driven, growth has been faster and the sector now competes for natural resources with other sectors.”

In short, cattle used to graze marginal lands. Pigs were fed scraps. Now they compete with people for food.

They also have good summaries of the intensities of emissions at the global scale. Industrial agriculture is often thought of as intensive, but its efficiencies can be high.

The authors state that “High intensity of emissions are caused by low feed digestibility, poorer animal husbandry, and higher ages at slaughter. When feed digestibility is high and animals are brought to market quicker, intensity of emissions can be lower. Hence, industrial production of livestock tends to be associated with low intensity of emissions per unit protein produced.”

In short, when using marginal resources, efficiencies are lower. Graze animals on low-protein grass, and they gain weight more slowly and release more methane.

Still, the authors do not pull apart the relative contributions of different components of the supply chain. For example, what is the relative efficiency of grazing cattle in North America vs. feeding them grain? Almost half of the emissions from cattle production come from feed production and processing.


The document also provides recommendations for reducing GHG emissions. Mostly, they say use “first-world” practices everywhere. The first-world systems are left to manage their manure better.

As far as I read, there was no mention of producing or demanding less meat, or relying less on grain, as a mitigation strategy.

Gerber, P. J., H. Steinfeld, B. Henderson, A. Mottet, C. Opio, J. Dijkman, A. Falcucci, and G. Tempio. 2013. Tackling climate change through livestock – A global assessment of emissions and mitigation opportunities. Food and Agriculture Organization of the United Nations (FAO), Rome.

Monday, December 16, 2013

Trees in colder environments are less tolerant of shade


Nice paper from Chris Lusk et al. They grew 17 NZ tree species in the glasshouse. Species from places with colder winters had narrower vessels and lower water conductance.

The mechanism that makes cold-tolerant species less shade tolerant was unclear, though.

Whether there is a direct tradeoff between cold-tolerance and shade-tolerance, or an indirect set of relationships is still to be worked out.

Lusk, C. H., T. Kaneko, E. Grierson, M. Clearwater, and F. Gilliam. 2013. Correlates of tree species sorting along a temperature gradient in New Zealand rain forests: seedling functional traits, growth and shade tolerance. Journal of Ecology 101:1531-1541.

Sunday, December 15, 2013

Model species sets



About three years ago I started to think hard about how to advance plant trait research. It was suffering from a lot of shortcomings.

One of these was that we couldn't compare many different traits for the same species. Researchers were not measuring many key traits. If they were, it wasn't on the same species in a way that allowed comparison. We needed more overlap.

The answer to these problems was model species sets. Just like model organisms generate synergy by encouraging multiple researchers to ask different questions on the same plant, model species sets can do the same for functional ecology.

I just put a letter together with a few others for New Phytologist. In it, we argue that model species sets are as important to functional trait research as model organisms were to comparative molecular biology.

We work through a few issues in the manuscript.

How do you pick what broader pool of species the model species sets should represent?

How do you assemble a model species set?

How do different researchers use the same species sets in synergistic manners?

As we discussed these issues, what was exciting was how quickly some fundamental questions about the evolution and ecology of plant species could be answered once model species sets are in place.

With just 100 species, the leaf economic spectrum would have been apparent. It would not take long to test for the wood or root economic spectra. How long would it take to find other sets of correlations in defense, life-history traits, or low-resource tolerance?

Once these traits are measured, comparisons among traits become simple. Knowledge stacks up on knowledge.

Another exciting part is really separating genotype from environment. Current trait relationships at the global scale mix genotype and environment. Different species are growing in different environments. Low-fertility species are growing in low-fertility areas. High-fertility species are growing in high-fertility areas.

By growing species under the same conditions, differences in traits arise from the genotype. But growing the sets across a range of environments can show how trait relationships respond to changes in environment.
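As a minimal sketch of that logic (a toy simulation, not an analysis of any real data), here is how a balanced common-garden design could partition trait variance into species (genotype), environment, and their interaction. The species counts, garden counts, and effect sizes below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

n_species, n_envs, n_reps = 20, 4, 5   # hypothetical balanced common-garden design

# Simulate trait values: species (genotype) effects, environment effects,
# a genotype-by-environment interaction, and residual noise.
species_eff = rng.normal(0, 1.0, n_species)             # genotype
env_eff     = rng.normal(0, 0.5, n_envs)                # environment
gxe_eff     = rng.normal(0, 0.3, (n_species, n_envs))   # G x E

trait = (species_eff[:, None, None]
         + env_eff[None, :, None]
         + gxe_eff[:, :, None]
         + rng.normal(0, 0.2, (n_species, n_envs, n_reps)))

# Partition the total sum of squares for the balanced design.
grand      = trait.mean()
sp_means   = trait.mean(axis=(1, 2))   # species means across environments
env_means  = trait.mean(axis=(0, 2))   # environment means across species
cell_means = trait.mean(axis=2)        # species-by-environment cell means

ss_species = n_envs * n_reps * ((sp_means - grand) ** 2).sum()
ss_env     = n_species * n_reps * ((env_means - grand) ** 2).sum()
ss_gxe     = n_reps * ((cell_means
                        - sp_means[:, None]
                        - env_means[None, :]
                        + grand) ** 2).sum()
ss_resid   = ((trait - cell_means[:, :, None]) ** 2).sum()
ss_total   = ((trait - grand) ** 2).sum()

for name, ss in [("species (G)", ss_species), ("environment (E)", ss_env),
                 ("G x E", ss_gxe), ("residual", ss_resid)]:
    print(f"{name:16s} {100 * ss / ss_total:5.1f}% of trait variance")
```

With a single common garden, the environment and interaction terms cannot be estimated and the species term carries all of the systematic variance; adding gardens is what exposes the G x E term.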

That will be exciting.

One constraint on the approach is comparing across model species sets. Comparing leaf traits of the species in a grass model species set with those in a tree model species set could suffer from confounding factors.

I guess once we get to the point where we understand the patterns within a model species set, new experiments that grow the sets together under comparable conditions would be necessary. But after repeatedly growing 100 species, growing 200 or 300 doesn't sound too daunting.

Friday, December 13, 2013

Seasonal cycle of submissions to Nature Geoscience


Somehow, I had missed this last January in Nature Geoscience.

Funniest quote: "A few wrinkles in the record illustrate how our submission rates reflect the world of geoscientists (and sometimes, the world at large). We attribute the sharp peak around the end of July 2012 to the deadline for submission of manuscripts to be considered for the fifth assessment report of Working Group I of the Intergovernmental Panel on Climate Change. And the number of incoming papers dropped during fall meetings of the American Geophysical Union — and during football world cups."


Saturday, December 7, 2013

Reproducibility: standardized vs. heterogenized replicates



Here's a paper that caught Hendrik Poorter's attention a while ago.

In ecology, we almost never run the same experiment twice. We reason that resources are just too thin to try to reproduce someone's results.

Even if we did try the same experiment twice, there are enough differences among ecosystems to explain away any discrepancies in the results.

But, if we did try the same experiment twice, how do we set up our experiment to maximize reproducibility?

Reproducibility is simply the ability to generate the same results twice. If we want to claim that our results generalize, experiments should be reproducible.

The Richter et al. paper from 2009 addressed this question for the biomedical world. There they noted that experiments with animals are costly, so there should be an impetus to make results as reproducible as possible.

The natural approach to this is to make experiments as controlled and uniform as possible.

This reduces variance among subjects of a given experiment and maximizes the likelihood of a significant result, but actually works to reduce reproducibility.

In short, if you want reproducibility, experiments need to incorporate all the variation that one is likely to encounter when trying to reproduce experiments.

Differences in users.
Differences in noise.
Differences in cage types.

Instead of controlling for all of these, they argue that the experiments should allow these to vary. In doing so, any positive result is more likely to be generalizable.

Instead of using "standardized" replicates, replicates should be "heterogenized".


What is interesting is that the same holds for many ecological experiments. If we want to make sure we do not have a false positive, we should incorporate variation into our experiments.
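As a back-of-the-envelope sketch of that argument (my own toy simulation, not Richter et al.'s analysis), the code below compares a standardized design, where every replicate is run under one set of conditions, with a heterogenized design that spreads replicates across conditions. The lab counts, replicate counts, and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

n_labs, n_reps, n_studies = 8, 24, 2000        # hypothetical numbers
mean_effect, between_lab_sd, noise_sd = 0.5, 0.5, 1.0

def run_study(heterogenized: bool) -> float:
    """Return the estimated treatment effect from one simulated study."""
    # Each lab (or set of housing conditions) has its own true treatment effect.
    lab_effects = rng.normal(mean_effect, between_lab_sd, n_labs)
    if heterogenized:
        labs = rng.integers(0, n_labs, n_reps)           # replicates spread across labs
    else:
        labs = np.full(n_reps, rng.integers(0, n_labs))  # all replicates in one lab
    control   = rng.normal(0.0, noise_sd, n_reps)
    treatment = rng.normal(lab_effects[labs], noise_sd)
    return treatment.mean() - control.mean()

for design in (False, True):
    estimates = np.array([run_study(design) for _ in range(n_studies)])
    label = "heterogenized" if design else "standardized"
    print(f"{label:13s}: mean estimate {estimates.mean():.2f}, "
          f"SD across studies {estimates.std():.2f}")
```

Across repeated studies, the standardized estimates scatter widely because each study inherits whatever idiosyncratic effect its one lab happened to have; the heterogenized estimates agree with each other because each study already contains the lab-to-lab variation.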

It's risky for the investigator, because the approach reduces the likelihood of a positive result.

But for the discipline, it makes it more likely that any one result is not only reproducible, but also generalizable. 

Friday, December 6, 2013

And But Therefore


The triad of thesis-antithesis-synthesis has roots in the dialectic going back to Socrates.

The modern-day version is now apparently "and-but-therefore".

I've spent a lot of time working with people to sharpen their presentations.

If anything resonates, it is that many presentations have the structure of AND AND AND.

A simple pivot to AND BUT THEREFORE makes for a more compelling story. And a more interesting, impactful talk.

It's the same tension as Socrates, but modernized.

There are always pressures to show as much data as possible, but a scientific story is not a series of "ands".

Like any story, it should start with the "and" but quickly pivot to the "but", which is the competing hypothesis, the antithesis.

The synthesis is the "therefore".

Great short piece in Science and accompanying TEDMED talk.

If you watch the TEDMED talk, you'll see why Cartman gets a cameo.

http://www.sciencemag.org/content/342/6163/1168.1.short#ref-1
http://www.youtube.com/watch?v=ERB7ITvabA4

Wednesday, December 4, 2013

How many species for a model species set?

Angela Moles and I were discussing how many species would be necessary for a model species set.

Think of the model species set as the Arabidopsis of plant functional traits. Just like with a model organism, if we restrict the species that we work with, we can build synergy faster for different types of measurements. If I measure drought tolerance on my species and you measure tannin concentrations on your species, we don't learn much. But if we measure the same species, we learn more than we did before.

So, if we select just 100 species to represent grasses, herbaceous eudicots, or trees, would that be enough to capture the global diversity of the functional group?

There are roughly 100,000 tree species in the world. Can we possibly capture enough of that diversity with just 100 randomly selected species?

There is no way to really test the idea, but we talked about whether 100 species would have been enough to delineate the leaf economic spectrum. That paper had 2,548 species. If we had measured leaf traits like leaf longevity and N concentration on just 100, would we have discovered the leaf economic spectrum?

To explore this, I took the data from the paper and downsampled to 100 species for some of the major relationships.

The LES looked like this with all the species:



What if the Amass-Nmass relationship had just 100 species? If I randomly select just 100 species from that pool...the pattern is just as strong.
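The check itself is simple. Here is a minimal sketch of it in Python; the file name and the log-transformed Amass and Nmass column names are placeholders standing in for the Wright et al. 2004 dataset, not its actual field names.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names standing in for the Wright et al. (2004) data.
les = pd.read_csv("glopnet_les.csv")
les = les.dropna(subset=["log_Amass", "log_Nmass"])

# Correlation with every species in the dataset.
r_all = les["log_Amass"].corr(les["log_Nmass"])

# Correlation within random subsets of 100 species, repeated many times.
r_sub = []
for i in range(1000):
    sub = les.sample(n=100, random_state=i)
    r_sub.append(sub["log_Amass"].corr(sub["log_Nmass"]))

print(f"r with all {len(les)} species: {r_all:.2f}")
print(f"r with 100 species: median {np.median(r_sub):.2f}, "
      f"5th-95th percentile {np.percentile(r_sub, 5):.2f} to {np.percentile(r_sub, 95):.2f}")
```

Swapping in the LMA and leaf longevity columns repeats the same check for the weaker relationship below.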


What about one of the weaker relationships, like LMA vs. Leaf Longevity?


Still there.

Some caveats here...the LES is not built on a random subset of the world's flora. I'm subsampling a stratified sample of the global flora. Also, I'm working across a number of functional groups. Relationships might be weaker if we sampled just 100 species of a single functional group.

There are philosophical points to work through about the nature of plant strategies, between vs. within functional or phylogenetic groups for example. 

But, the key is that growing and measuring 100 species is not too hard. If we randomly select them so that there is a broad diversity of functional traits, we should be able to represent the global patterns of functional trait relationships.

The leaf economic spectrum has already been described (though there is still more to learn), but the other spectra have not. 

It might just take growing the right 100 species to quantify the root economic spectrum, the wood economic spectrum, or some other strategy we just aren't even aware of. 

Should we worry about randomly selecting 100 species that all end up in the same genus or all come from Madagascar? We can test the representativity of the randomly selected species set to make sure that chance didn't screw things up too much. Odds are against it, but it is possible. No one would quibble too much if we had to randomly select a second time (20 times, maybe).
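A cheap version of that representativity check (a sketch with hypothetical column names and arbitrary thresholds) could compare the trait distribution of each random draw against the full candidate pool and count how many genera it spans, redrawing when a draw looks lopsided:

```python
import pandas as pd
from scipy.stats import ks_2samp

def representative_draw(pool: pd.DataFrame, n: int = 100, min_genera: int = 60,
                        alpha: float = 0.05, max_tries: int = 20,
                        trait: str = "log_LMA", seed: int = 0) -> pd.DataFrame:
    """Draw n species at random; redraw (up to max_tries) if the sample's trait
    distribution differs from the pool's or too few genera are represented."""
    for i in range(max_tries):
        sample = pool.sample(n=n, random_state=seed + i)
        trait_ok = ks_2samp(sample[trait], pool[trait]).pvalue > alpha
        genera_ok = sample["genus"].nunique() >= min_genera
        if trait_ok and genera_ok:
            return sample
    raise RuntimeError("No representative draw found; loosen the criteria.")

# pool = pd.read_csv("candidate_species.csv")   # hypothetical candidate pool
# model_set = representative_draw(pool)
```

The thresholds are placeholders; the point is only that a lopsided draw is easy to detect and cheap to replace.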

In all, it's encouraging.



Monday, December 2, 2013

Reading Krugman struggle to describe whether someone was wrong

I remember going through this exact set of thought processes when it came to plant resource competition.

Interesting to read Krugman try to do the same thing.