I’m travelling towards Trento in Italy, where I am to give a plenary talk at a symposium on the Use of Algae for Monitoring Rivers (and comparable habitats). This series of meetings has been organised at roughly two- or three-year intervals since 1991, although I have not been to any for the last ten years. There are various reasons for this, but I often felt that the meetings were some way divorced from reality, failing my “thirty-three percent rule”, which suggests the ideal ratio of academics to end-users necessary to generate fertile discussions. There will, undoubtedly, be some interesting papers, and some that are good science but not practicable from the point of view of end users. That is one of the big dilemmas in working in the “applied sciences”. I approach conferences with Lord Lever’s maxim to hand: “I know half my advertising isn’t working; I just don’t know which half.” Replace “advertising” with “scientific papers” and you can, at least, approach a scientific meeting with a suitably stoical mind-set. And, of course, my audience might well be applying exactly the same test to my talk.
One topic that I want to address in my talk is how we ensure that the results from ecological assessments are as effective as possible. I will, I am sure, not be the only person addressing this topic, though I hope that I will open out the debate beyond the technical aspects of our science. The figure below, for example, is a simple flow diagram showing how a sample of diatoms collected from a river is converted into the information that a “customer” needs. I don’t like using the word “customer” in this context, but it helps to remind us that we are working towards objectives that go beyond simply “curiosity-driven science”. I have used diatoms as my example as half the papers being presented at the meeting deal with this group, to the exclusion of all other types of algae.
What I wanted to illustrate with this diagram was that much applied ecology is, by necessity, a set of very standardised steps. Otherwise, it would not be possible to make comparisons between different water bodies, or the same water body at different times. These steps are, in turn, supported by a much wider knowledge base that gives us the taxonomy that we need to identify diatoms, as well as ecological understanding that helps us to interpret data. Much of what happens at the Use of Algae Meeting will contribute to the knowledge base, which is great, but what is the broader impact of our work?
Diatom analysis as a process that converts the complex reality of organisms growing in the field to useful information for customers (e.g. catchment managers, regulators).
One of the points I wanted to make in my talk was that we must not consider the algae in isolation but, instead, remember that they are one part of a larger ecosystem, with many interactions between the different components. This, then, led me to start thinking about the “human ecosystem” within which the work that we do is situated. The next figure, therefore, shows this human ecosystem, expressed in the style of a “trophic pyramid” but with the traditional categories of primary producers, herbivores and carnivores replaced by different levels of organisation within a bureaucracy. In ecology, we think in terms of how energy flows through the different trophic levels; in the human ecosystem, energy is replaced by information.
This visualisation is also a useful way of reminding ourselves that we are just one small part of a much larger process: the data we collect about algae has to be considered alongside chemical measurements, data concerning other groups of organisms (invertebrates, fish, higher plants) and so on. As a result, the effects of any changes we make to our methods will be damped down when we consider them in terms of the system as a whole. Very few of the papers arguing for more detailed taxonomy, for example, step back and consider what effect (if any) their proposed changes will have when viewed in this broader context.
However, just as energy has to be converted to carbon-based compounds before it can be stored (and, later, transferred to a higher trophic level), so the data we collect is of little use in its raw form and needs to be synthesised into information. Much as we specialists like to dig into the minutiae of which species lives where, the reality is that this level of detail is not very “digestible” (to continue my food-chain metaphors) to the higher levels in organisations. Even the indices that we use to summarise our data are not especially useful unless the outcomes can be expressed in a very generic manner. In the UK, this process has been taken quite a long way as part of a “weight of evidence” approach that expresses many different biological and chemical elements of water bodies in a simple, readily comparable manner. It means that we quickly lose a sense of the biological reality (as described in my previous post, for example), but this is offset to some extent by the ability of this pared-down nugget of information to move smoothly through the “human ecosystem”. It is possible for the field scientists to invest the information with some “added value” in their reports, providing context, but we should probably accept that this level of detail is not going to move far beyond the first tier of management. I think that we should also ask questions about the extent to which we compromise our insights into the state of the natural world as soon as we drop our samples into strong oxidising agents and digest away everything except the empty silica shells of diatoms.
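As an aside for readers unfamiliar with how such indices work: most diatom metrics are weighted averages of the Zelinka–Marvan form, in which each taxon’s abundance is weighted by a sensitivity score and an indicator value. A minimal sketch of the arithmetic, with purely illustrative scores (not taken from any published index):

```python
def weighted_average_index(counts, sensitivity, indicator_weight):
    """Zelinka-Marvan weighted average for one sample.

    counts: {taxon: cells counted in the sample}
    sensitivity: {taxon: ecological optimum score}
    indicator_weight: {taxon: reliability of the taxon as an indicator}
    """
    num = sum(counts[t] * sensitivity[t] * indicator_weight[t] for t in counts)
    den = sum(counts[t] * indicator_weight[t] for t in counts)
    return num / den

# Illustrative sample: taxa are real diatoms, but the scores are invented.
sample = {"Achnanthidium minutissimum": 120, "Nitzschia palea": 60}
s = {"Achnanthidium minutissimum": 1.0, "Nitzschia palea": 4.0}
v = {"Achnanthidium minutissimum": 3, "Nitzschia palea": 2}

print(round(weighted_average_index(sample, s, v), 2))  # → 1.75
```

The point is how much is thrown away: hundreds of cells, identified to species level, collapse into a single number before the result moves up the pyramid.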
The “human ecosystem” of environmental regulation, expressed as a trophic pyramid. Diatom analysis (circled) is one small part of the lowest level of this ecosystem and the information produced then needs to flow upwards in order to influence decision-making whilst, at the same time, it is influenced by “top down” factors such as policy and resources.
The final point to make is that this “human ecosystem” is subject to top-down as well as bottom-up controls. A naïve assumption lies behind many of the papers on ecological assessment that I read: that there is a smooth and logical flow from an ecologist’s data to a diagnosis of a problem to the implementation of a solution. The reality is more complicated: the situation I have faced over the past five years or more is that resources have been limited (a top-down decision), which has meant that they have been thinly spread and, in many cases, that there is too little replication to give confidence in the results. The challenge to the research scientists then becomes very specific: can we get greater confidence in outcomes at the same or lower cost? But that, in turn, raises a different set of very human challenges, because scientists whose careers depend on producing “cutting edge” science in high-impact journals are unlikely to be enthusiastic about re-shaping what we already know in order to fit a bureaucrat’s unrealistic budget.
More about this in my next post.