TIAS Quarterly

No. 03/2017 (September)
The Newsletter of
The Integrated Assessment Society (TIAS)

 
Photo: Ulli Meissner ©


The Society

The Integrated Assessment Society is a not-for-profit entity created to promote the community of interdisciplinary and disciplinary scientists, analysts and practitioners who develop and use Integrated Assessment (IA). The goals of the society are to nurture this community, to promote the development of IA and to encourage its wise application.

Integrated Assessment can be defined as the interdisciplinary process of integrating knowledge from various disciplines and stakeholder groups in order to evaluate a problem situation from a variety of perspectives and provide support for its solution. IA supports learning and decision processes and helps to identify desirable and possible options for addressing the problem. It therefore builds on two major methodological pillars: approaches to integrating knowledge about a problem domain, and understanding policy and decision making processes. IA has been developed to address issues of acid rain, climate change, land degradation, water and air quality management, forest and fisheries management and public health.

 

Feature


Values and Accountability in Integrated Assessment: The Case of the IPCC
Arthur Petersen, Professor of Science, Technology & Public Policy, University College London


Introduction

In this feature article, I address concerns that I have about the accountability of integrated assessment. The example of integrated assessment that I am referring to here in particular is the Intergovernmental Panel on Climate Change (IPCC) and the way it deals with scientific and political values and accountability (I served as a Dutch government delegate to the IPCC from 2001 to 2014). For a fuller exploration of values in science advice, I refer the reader to my inaugural lecture at University College London earlier this year.

I focus on the example of how the causes of climate change are assessed by the IPCC (Working Group I, which deals with ‘attribution’). I make the argument that if you want to assess integrated assessments, you will have to engage with an ‘extended’ peer community (see, for instance, Petersen et al. 2011). Reflection on assumptions should lead experts to give an account of the epistemic underpinnings of their expertise. I argue that IPCC reports do not do that enough (Netherlands Environmental Assessment Agency 2010). In pushing scientists to give such accounts, one must realise that scientists often do not like to hear that, in this sense, their expertise should be considered to be ‘on tap’, and that they are not free to decide how transparent they will be.


Humans causing climate change: How reliable are the models?  

In the 2001 report of the IPCC, a figure was included that has become iconic at the science–policy interface for attributing climate change to human influences. The figure contains three panels. Each shows the same line with measurements of the global mean surface temperature since 1850 (rising at the beginning of the 20th century and rising again at its end), together with a different band of model results, the bands representing the ‘internal’ variability of the climate system, that is, its sensitivity to initial conditions: one panel for only natural external influences on the climate (volcanoes, the sun), one for only human external influences (greenhouse gases, particles) and one that combines natural and human factors. The last panel presents a beautiful match of measurement and model, giving rise to the suggestion that we know everything, that there is no room left for any doubt that humans are causing the recent change in climate (e.g., from the Chair of the IPCC at a press conference in 2001).

Of course, experienced integrated assessment practitioners understand that the number of degrees of freedom in climate models is high. And they will not be surprised to hear that indeed virtually all climate modelling groups in the world are able to present the same final panel with a match. This is not to say that the match is untrue. But how should one communicate the fact that the bands are ‘just’ model results, whose match with the measurements cannot establish their reliability? So how do we know how reliable the models are? And in which senses can we say that they are reliable?


Lack of traceable account

The IPCC has developed a methodology, through three successive guidance notes, for assessing and communicating the uncertainties in the findings of its assessments. This methodology includes calibrated terms for communicating probabilities. In the case of the attribution of climate change to human influences, the IPCC did not communicate in 2001 that it was 100% certain that humans are causing climate change, even though the picture is beautiful and the line and band match. It said it was ‘likely’ that most of the warming of the last 50 years had been caused by human greenhouse gases. ‘Likely’ here means, in the experts’ judgment, a greater than two-in-three (more than 66%) chance that the finding is true.
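
For readers who want to see how this calibrated language works in practice, the short sketch below (in Python, added here purely for illustration) encodes the likelihood thresholds from the publicly available IPCC uncertainty guidance and maps an assessed probability onto the corresponding calibrated term. The scale itself follows the guidance notes; the helper function and its name are illustrative assumptions, not part of any IPCC tooling.

```python
# Illustrative sketch only: the calibrated likelihood scale from the IPCC
# uncertainty guidance notes, and a hypothetical helper that maps an assessed
# probability onto the strongest calibrated term it qualifies for.

IPCC_LIKELIHOOD_SCALE = [
    # (calibrated term, lower bound on the probability that the finding is true)
    ("virtually certain",      0.99),
    ("extremely likely",       0.95),  # term added in the most recent assessment round
    ("very likely",            0.90),
    ("likely",                 0.66),
    ("more likely than not",   0.50),
    ("about as likely as not", 0.33),
]

def calibrated_term(probability: float) -> str:
    """Return the strongest calibrated term whose lower bound the probability meets."""
    for term, lower_bound in IPCC_LIKELIHOOD_SCALE:
        if probability >= lower_bound:
            return term
    return "unlikely or weaker"  # terms below 33% are omitted from this sketch

if __name__ == "__main__":
    print(calibrated_term(0.70))  # -> likely            (the 2001 attribution statement)
    print(calibrated_term(0.91))  # -> very likely       (the 2007 attribution statement)
    print(calibrated_term(0.96))  # -> extremely likely  (the 2013 attribution statement)
```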

I was sitting at the table at the time (in Shanghai, on 20 January 2001), observing the people in an IPCC contact group as they negotiated what I think became one of the most important statements the IPCC has ever made: that most of the warming is likely due to human influences. But I could not understand why they said ‘likely’. If you believed the models, the likelihood was already estimated to be much higher than 90% (that is, ‘very likely’, the next likelihood category). I had to dig deep to determine how the lead authors had reached their judgment (through interviews, looking at internal e-mails, etc.). The reason they did not choose ‘very likely’ was that they did not trust the models enough. So, they picked the next lower likelihood category. Nowhere could this reasoning be found in the IPCC report; there was no traceable account of how they had arrived at this crucial judgment.


Representing methodological reliability

Six years later the IPCC assessed the same question. The 2007 report features a figure similar to the one in the 2001 report, but now the calculations are shown for every continent and the authors were willing to say ‘very likely’ (90%). And again, I could ask the question: why not the next likelihood category (99% or ‘virtually certain’ in that assessment round; in the most recent assessment, 95% or ‘extremely likely’ was added to the methodology)? The narrative could have been: we still do not fully trust the models, but there have been more warm years, there have been more model runs, there have been different types of model experiments, and there was a belief that the models had become more reliable. I do think that this belief is problematic. The IPCC has demonstrated, in my view, a weak practice of assessing the reliability and quality of models.

So what I argue has been missing from the Third and Fourth Assessment Reports of the IPCC (2001 and 2007, respectively) is sufficient attention to ‘methodological reliability’ in addition to ‘statistical reliability’ (see Smith and Petersen 2014). Assessment of methodological reliability requires a qualitative discussion and a qualitative assessment of the underpinning of results. Additionally, after ‘Climategate’ it has been realised that ‘public reliability’ needs attention too: how to regain trust and be relied upon by the public is a difficult question for climate scientists in particular, but, I would argue, also more generally for integrated assessment modellers. I do not have simple answers here. In this feature article, I am really focused on the second type of reliability, methodological reliability.



Blockage in Paris (2007)

Let me give one example from the negotiations on representing methodological reliability in the Summary for Policymakers in Paris (2007). This is the sentence that was under negotiation: ‘Most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.’ We have to get a bit into the politics now. Because these IPCC sentences are transferred from the sphere of knowledge assessment to the sphere of political negotiations (in the climate framework convention), there is always a country that does not want a stronger statement than the last time. A stronger statement would highlight that there is more scientific certainty, which would increase the likelihood of agreements. And the IPCC meeting in January 2007 in Paris was less than three years from what turned out to be the failure of the Copenhagen Summit at the end of 2009. So, some countries used all kinds of ways to prevent this sentence from being included. But there is an order of speech within the IPCC: the chapters have been written by independent authors (hands off, governments cannot touch those chapters!), but government delegates can comment, making use of a set of criteria (such as clarity and representativeness), on sentences in the Summary for Policymakers. Governments obviously will have different views. And the authors have a veto right on any change that is made to their summaries. You can imagine how hard it sometimes becomes to negotiate the summary line by line, as happens in the IPCC. But it works.

Still, I argue that it can be done in a more productive way if both parties, authors and governments, behave more diplomatically towards each other and understand better where the other is coming from: one group of actors in these meetings is there for their social, ethical, political and economic values (that is their role; they are representing their publics), while another group is there for their scientific values, with the role of representing very well the papers they have assessed and of providing good ‘reference’ (see Kouw and Petersen, forthcoming).

Back now to the sentence that was under discussion in the final hours of the Paris meeting. After days of negotiations that continued deep into the night, we finally reached agreement: all the countries of the world could agree on the sentence by inserting this footnote: ‘Consideration of remaining uncertainty is based on current methodologies.’ Of course, we were all tired. But it is interesting: why would the opposing country agree with this sentence? What is the spin it could give? It could say: ‘The methodologies used are based on models. It is just models. It is not reality.’ Indeed, models are used, but that does not imply that there is no reference to reality; still, that is the typical argument such a country would make. How would another country, one that always wants to dramatise climate change and typically wants to downplay uncertainty, spin this sentence? ‘Next time the likelihood will go up further; from the original likely (66%) it went up to very likely (90%), and it will go up again.’ Indeed, in Stockholm, in September 2013, it became ‘extremely likely’ (95%).


Expert judgment

One issue with the IPCC methodology of likelihood statements I have already addressed: the methodological unreliability of models has been used to ‘downgrade’ likelihood statements without saying so. Another issue, related to the insufficient transparency of expert judgment in the IPCC, is that there is scarcely any reflection on the nature of expert judgment. ‘Very likely’ means more than a 90% chance that a particular statement is true. But what does it really mean? What do these probabilities mean? And how reflexive can the IPCC be about what is actually happening and what is behind these statements? The ‘90%’ only means that the few authors who have been selected to do the assessment in a particular chapter have reached a collective expert judgment. Nothing more and nothing less. And it carries a lot of weight, because these authors have had the training, have the skills, have the experience; they bring all these things to the table, and they are the experts. We choose them for that, and other experts are asked to review their statements thoroughly. In the end, however, when these experts write down their conclusions, they no longer visibly rely on ‘expert judgment’. Suddenly their conclusions are made to flow directly from the underlying science. ‘It is not us.’ Incredible! I find it incredible!

Twice we, as the Dutch government delegation, have had to intervene to make the Summary for Policymakers more explicit about expert judgment. In Paris in 2007, for example, the authors, when defining their uncertainty terminology, referred in the final draft to the ‘assessed likelihood of an outcome or a result’. We added ‘using expert judgment’ to that phrase. In Stockholm, in 2013, the same problem arose again. This was the definition of what the probabilities meant: ‘Probabilistic estimates of quantified measures of uncertainty in a finding are based on statistical analysis of observations or model results, or expert judgment.’ We looked at it and saw that this was going in the wrong direction. We thus changed ‘or expert judgment’ into ‘and expert judgment’. I think this is important. It is worrisome that scientists and integrated assessment modellers who act as science advisers are often not able to reflexively say what they are doing.



Lessons

I conclude with four lessons that I took from my 14 years of being a science adviser engaged with integrated assessment (see Petersen 2014):
  1. Explicit reflection on uncertainty and values. Take ‘normal science’ seriously, but also ensure reflection on its uncertainties and value-ladenness.
  2. Addressing methodological and public reliability. Alongside the statistical reliability of results (expressed in terms of probability), devote due attention to their methodological reliability (expressed in terms of strengths and weaknesses) and their public reliability (expressed as the degree of public confidence in the scientists who produce them).
  3. Extended peer review. Involve a larger group of specialists and non-specialists who hold diverse values in monitoring the quality of scientific assessments.
  4. Acknowledging social complexity. Be wary of accepting the conclusions of actors and practitioners at face value: try to delve deeper through the layers of complexity by means of narrative methods.
First, I have bought into the discourse of ‘post-normal science’, though I must emphasise that there never really was a period in which there was no post-normal science. So ‘post-’ should perhaps read ‘extra-’: ‘extra-normal science’. By ‘normal science’ I really mean those proceedings in which the scientific community is doing whatever it does: modelling, publishing, peer reviewing, et cetera. So, when I say that we need to open up and look at ways to bring out the different epistemic and non-epistemic values in this discussion, I mean that you need to organise reflection on uncertainty and value-ladenness also within normal science, without throwing it away. Don’t throw the baby out with the bathwater!

Second, as I have already extensively belaboured in this feature article: do not focus only on statistics; also focus on the qualitative dimensions of reliability.

Third, ‘extended peer review’, which also comes out of the literature on post-normal science, concerns the ways in which you can engage a wide group of people who can provide comments that are sensible enough to be processed and responded to, for instance in the IPCC. Everybody, on the basis of a very minimal claim to expertise, can sign up to be an expert reviewer of the IPCC and can submit comments. And it is very important that not only a very small group of, for instance, climate modellers comments on the climate modelling chapter, but also neighbouring disciplines and people who work, for example, for Greenpeace and who have a stake: they have a very valuable contribution to make because they can highlight particular climate risks that may not yet have become mainstream in the scientific community.

Fourth, the final point (looking at deeper dimensions and different things that are happening at the same time) is related to the notion of ‘social complexity’: scientists often have a self-image of what they are doing and country delegates have a self-image of what they are doing, and both are too simplistic in what they say, because they mix epistemic and non-epistemic values. Here too it is important to delve deeper in order to understand what is going on. Then the big question still remains: are there improvements that we can suggest to this mess? It is a mess, but a good and interesting mess.


References

Kouw, M. and A.C. Petersen. Forthcoming. Diplomacy in Action: Latourian Politics and the Intergovernmental Panel on Climate Change. Science & Technology Studies.

Netherlands Environmental Assessment Agency (PBL). 2010. Assessing an IPCC assessment. An analysis of statements on projected regional impacts in the 2007 report. The Hague/Bilthoven: PBL.

Petersen, A.C. 2014. The ethos of scientific advice: A pragmatist approach to uncertainty and ignorance in science and public policy. In H. de Regt and C. Kwa, eds., Building Bridges: Connecting Science, Technology and Philosophy – Essays presented to Hans Radder. Amsterdam: VU University Press, pp. 53–62.

Petersen, A.C., A. Cath, M. Hage, E. Kunseler and J.P. van der Sluijs. 2011. Post-normal science in practice at the Netherlands Environmental Assessment Agency. Science, Technology, & Human Values 36(3): 362–388.

Smith, L.A. and A.C. Petersen. 2014. Variations on Reliability: Connecting Climate Predictions to Climate Policy. In M. Boumans, G. Hon and A.C. Petersen, eds., Error and Uncertainty in Scientific Practice. London: Pickering & Chatto, pp. 137–156.

 
Photo: Ulli Meissner, U.Meissner@gmail.com

TIAS News


2nd Webinar of the Learning Community: Social learning and the design of interventions
Joanne Vinke-de Kruijf and Romina Rodela

On 27 June 2017, TIAS hosted the webinar “How a social learning approach can support the design and implementation of interventions”. The webinar addressed the following questions: 1. How can social learning concepts and theories inform the design and implementation of interventions? 2. How do choices of intervention design, coupled with contextual factors, influence social learning processes? The audio-visual recording, presentations and synthesis report of the webinar can be downloaded from the TIAS website: http://www.tias-web.info/tias-activities/webinars/#10th

The webinar featured two presentations. First, Prof. Heila Lotz-Sisitka of the Environmental Learning Research Centre, Rhodes University (South Africa) gave a presentation entitled “Expansive social learning: the work and role of the formative interventionist researcher”. Then Blane Harvey of the Department of Integrated Studies in Education at McGill University (Canada) presented “Social learning in development interventions: A reflection on the limits and opportunities”. Both presentations were followed by questions and a lively discussion regarding, for example, the context-specific nature of social learning processes, capacity-building needs, and the application of specific methods and tools.

From this discussion we concluded that social learning concepts and theories can inform interventions as well as research designs. Social learning helps to shift attention from outcomes to processes and allows us to take into account the contextual factors that are at play and influence the way an intervention is received. A social learning perspective also draws attention to the role of reflexivity and the skills that are required to achieve transformative change. Contextual factors, such as socio-cultural aspects, influence social learning processes as well as the research into these processes. During the discussion it was noted that research focusing on learning over longer timescales, which requires the analysis of interventions over longer periods of time, is limited. Furthermore, we concluded that a multi-level framework could find a place in social learning research, giving space to inquiry that considers changes at multiple levels.

This webinar was the second in a series of webinars organized by the TIAS Learning Community. The Learning Community is an international, online community set up in 2016 to enhance the learning capacity of those interested in examining, organizing or stimulating learning for sustainable development. The community provides its members with opportunities to learn and exchange ideas. The webinar described here was organized by two members of the Learning Community: Romina Rodela (Södertörn University, Sweden) and Joanne Vinke-de Kruijf (University of Twente, Netherlands). The Learning Community welcomes ideas for webinars and organizational support. For more information please contact Joanne Vinke-de Kruijf: learningcommunity@tias-web.info.
 
Copyright: Ulli Meissner


IA News

 

The World in 2050

The World in 2050 (TWI2050) is a global research initiative in support of a successful implementation of the United Nations’ 2030 Agenda. Its goal is to provide the fact-based knowledge to support the policy process and the implementation of the SDGs, and to develop science-based, transformational and equitable pathways to sustainable development that can provide much-needed information and guidance for the policy makers responsible for implementing the SDGs. Read more….

The Blue Planet Prize Winners

The commemorative lectures of the 2017 Blue Planet Prize winners will take place on October 19th in Tokyo. This year’s winners are Hans J. Schellnhuber (Germany), Founder and Director of the Potsdam Institute for Climate Impact Research (PIK), and Gretchen C. Daily (USA), Bing Professor of Environmental Science in the Department of Biology, Director of the Center for Conservation Biology and Senior Fellow at the Stanford Woods Institute at Stanford University, and Co-Founder and Faculty Director of the Natural Capital Project. The Blue Planet Prize is an international environmental award sponsored by the Asahi Glass Foundation. The prizes are awarded to individuals or organizations that make outstanding achievements in scientific research and its application and, in so doing, help to solve global environmental problems.
 
In the meantime, three former winners of the Blue Planet Prize, Tom Lovejoy (2012), Jane Lubchenco (2011) and Bob Watson (2010), released a joint press statement on September 7th, 2017 for the prize’s 25th anniversary commemorative lecture. In their statement, they stressed that we are experiencing unprecedented rates of biodiversity loss, that 2017 is already the warmest year on record, and that the Earth is at a crossroads. They pointed out that “policies and technologies exist to safeguard the environment that are cost effective and socially acceptable…” If we do not act now, “future generations…will wonder why we pillaged the environment with no forethought for them”. Read their press statement, The Earth’s Environment is at a Crossroads. Solutions Exist. The Time for Action is Now, and view the press meeting (on YouTube).
 
 
Photo: Marten van den Heuvel (adapted)


Recent Publications of Members


Anthony J. Jakeman, Olivier Barreteau, Randall J. Hunt, Jean-Daniel Rinaudo and Andrew Ross (eds.). Integrated Groundwater Management: Concepts, Approaches and Challenges. The book can be downloaded as a pdf (free of charge).

This book is the first of its kind in that it comprehensively covers the concepts of and tools for integrated groundwater management. It has contributions from 74 international researchers, practitioners and water resource managers, and provides an overview of integration and problem settings. The themes covered include governance, socioeconomic aspects, biophysical aspects, and modelling and decision support.
____________________________________________________________________________________

Netherlands Environmental Assessment Agency (PBL). 2017. Exploring future changes in land use and land condition and the impacts on food, water, climate change and biodiversity: Scenarios for the UNCCD Global Land Outlook. The Hague: PBL.

The pressure on land is growing in many regions of the world, due to the increasing demand for arable crops, meat and dairy products, bio-energy and timber, and is exacerbated by land degradation and climate change. This policy report provides scenario projections for the UNCCD Global Land Outlook, exploring future changes to the use and condition of land and the resulting impacts on food, water, climate change and biodiversity. See also press release and publication page.
____________________________________________________________________________________

Martin Kowarsch, Jason Jabbour, Christian Flachsland, Marcel T. J. Kok, Robert Watson, Peter M. Haas, Jan C. Minx, Joseph Alcamo, Jennifer Garard, Pauline Riousset, László Pintér, Cameron Langford, Yulia Yamineva, Christoph von Stechow, Jessica O'Reilly and Ottmar Edenhofer. 2017. A road map for global environmental assessments. Nature Climate Change 7: 379–382. doi:10.1038/nclimate3307

(Abstract) Increasing demand for solution-oriented environmental assessments brings significant opportunities and challenges at the science–policy–society interface. Solution-oriented assessments should enable inclusive deliberative learning processes about policy alternatives and their practical consequences.

 

Events

October 23-25, 2017. G-STIC 2017: The first Global Science, Technology and Innovation Conference series. Brussels, Belgium
The G-STIC 2017 conference aims to accelerate the development, dissemination and deployment of technological innovations that enable the achievement of the Sustainable Development Goals (SDGs). The SDGs are 17 ambitious, internationally agreed goals for moving the world to a more sustainable future by 2030.
 
 
11-15 December 2017. The 9th World Conference of the Ecosystem Services Partnership (ESP) in Shenzhen, China. The overarching theme this year will be "Ecosystem Services for Eco-civilisation: restoring connections between people and landscapes through Nature Based Solutions". The deadline for early bird registration is 11 October 2017. Links: information on the conference; information on ESP.
 


 
6 - 8 June 2018. Call for papers: Measuring Behavior 2018. Manchester, UK.
Measuring Behavior is an interdisciplinary event for scientists and practitioners concerned with the study of human or animal behaviour. This dynamic community and its biennial conference focus on methods, techniques and tools in behavioural research in the widest sense. The eleventh Measuring Behavior conference is organized by Manchester Metropolitan University.
The Scientific Program Committee now invites you to submit your abstract: Submission page.

 

 

TIAS Quarterly

TIAS Quarterly is the newsletter of The Integrated Assessment Society.
ISSN: 2077-2130
Editor: Caroline van Bers
Associate editors: Anna-Lena Guske, Caroline Lumosi, Joanne Vinke-de Kruijf
Layout: Worldshaper photography & design - Fabian Heitmann, Caroline van Bers


TIAS Secretariat
49076 Osnabrück, Germany

E-Mail: info[at]tias-web.info
Internet: http://www.tias-web.info/

Become a TIAS member

TIAS Membership fees

Individuals: €50 / US$65 annually
Developing country: €35 / US$40

Students: €15 / US$20 annually
Developing country: €10 / US$10

Institutions: €200 / US$250 annually
Developing country: €150 / US$165
Copyright © 2017 The Integrated Assessment Society e.V. All rights reserved.