 
TIAS Quarterly

No. 04/2019 (December)
The Newsletter of
The Integrated Assessment Society (TIAS)

Wishing our members and friends
a happy and sustainable 2020!

In this Issue


Feature: A proposal for improving the evaluation of complex IA models, by Rich Rosen
Reflections on Rich Rosen’s proposal for improving the evaluation of complex IA models, by Jan Bakkes
IA News: EU Conference on Modelling for Policy support: Experiences, challenges and the way ahead
TIAS News
Publications

Photo: M. Spiske on Unsplash

The Society

The Integrated Assessment Society is a not-for-profit entity created to promote the community of inter-disciplinary and disciplinary scientists, analysts and practitioners who develop and use Integrated Assessment (IA). The goals of the society are to nurture this community, to promote the development of IA and to encourage its wise application.

Integrated Assessment can be defined as the interdisciplinary process of integrating knowledge from various disciplines and stakeholder groups in order to evaluate a problem situation from a variety of perspectives and provide support for its solution. IA supports learning and decision processes and helps to identify desirable and possible options for addressing the problem. It therefore builds on two major methodological pillars: approaches to integrating knowledge about a problem domain, and understanding policy and decision making processes. IA has been developed to address issues of acid rain, climate change, land degradation, water and air quality management, forest and fisheries management and public health.

 

Feature
A proposal for improving the evaluation of complex IA models
By Dr. Richard A. Rosen, Tellus Institute (retired)


As computer models of integrated systems have grown more complex, the relevant scientific research community has allowed its standards for doing good science to deteriorate. One reason is that scientists face increasing pressure to publish research results, and less pressure to ensure that the scientific basis for those results is well justified and transparent.

New multi-stakeholder processes need to be established immediately to put the analysis of complex socio-ecological systems through Integrated Assessment on a firmer scientific foundation.  This probably should start at the regional level with representatives of relevant groups of scientists (both natural and social), journal editors, NGOs, government agencies and research laboratories, and science policy institutes.  Currently, most purported scientific analysis of complex systems and policy problems used for input to public policy making is done using complex computer models, which are usually far from transparent and understandable to the relevant research community, not to mention incomprehensible to policy makers and the public.  Thus, it is not surprising that much of civil society and relevant public policy activists have grown increasingly skeptical of the usefulness of much of the published scientific literature on complex systems.


How has this situation developed?

With more and more papers submitted for publication to scientific journals each year, the peer review process seems to have fallen apart in many fields of inquiry.  For example, in the field of the economic modeling of climate change mitigation, it is well known that most of the complex integrated assessment models relied on in that field have themselves never been peer-reviewed, even if the resulting research papers do receive peer-review.  But from a scientific perspective, what good is it for scientists to peer review a colleague’s write-up of their research results if they cannot and, therefore, do not sufficiently review the underlying equations and assumptions buried in the computer models that yield the reported results?  In my view, such limited peer reviews are almost worthless, and by definition, cannot detect problems in the underlying science.  Basically, this approach, which has been allowed if not encouraged by most journal editors, amounts to doing science on a “trust me” basis. 

As both logic and the history of science should teach us, science cannot be properly pursued based simply on trust. Most great scientists have made mistakes in their careers, which have been detected by other scientists. But this hoped-for self-correcting process of a scientific community cannot occur in cases where the details of models are kept secret from that community. This leads to what I call the “black-box” approach to doing policy analysis, which will tend to undermine the confidence of the public in the basis for much policy making. Namely, many research articles simply report the results of models that are effectively incomprehensible “black boxes” even to scientists in the same field who did not develop a particular model themselves. In most physical sciences this does not occur: all equations, assumptions and parameter values used in models are debated rigorously, and at great length.


What can be done?

Adequate scientific review and discussion of models, model components, and assumptions for any particular policy application must be required before journals or research institutes allow the publication of any reports that depend on such models. In fact, today, it appears that journals actively discourage scientific review and discussion of policy and other types of research papers by severely limiting the number of comments they will publish on past publications. This may not be so surprising, however, since journals and research institutes probably do not want to publicize errors in their prior publications. But free, adequate, and open discussion is the only way in which science can be done properly. Even if only one out of ten scientists or peer reviewers believes that a scientific analysis is wrong, they might be right. Good science cannot be done by achieving a consensus of peer reviews or scientific opinions within a small group of like-minded researchers. All critical viewpoints must be able to be argued in public. This becomes ever more important as integrated assessments of real-world complex systems grow more complex.

Some model developers may think that their models should be considered proprietary private property and not revealed to the public even for scientific review, but this is not appropriate, particularly when the developers have been funded by public agencies, which should never sign model development contracts that allow models to be kept secret. Similarly, any model developer who wants to provide input to public policy decisions, even if the model is privately funded, must be required to open the model to public review and discussion. Finally, model developers probably resist adequate scientific review of their models because such a review would require a lot of staff time and budget, and because no one wants others to find errors in or problems with their models, potentially undermining their reputations.


What kind of agency can be established to help restore the value of scientific input to public policy making?

Such agencies would probably have to be structured a little differently in different regions of the world, but the basic requirements of such an agency are clear. Starting with a tentative title, we could call it a “Commission for Policy Model Documentation and Review”. This title conveys the need for two very important functions – documentation and review. Starting with the issue of model documentation, this commission should establish a baseline set of requirements so that, at the very least, analysts in the same research area could understand the purpose and general structure of the model. In addition, all the key substantive equations that determine how the model functions would have to be provided, along with the numerical values of all internal coefficients in each equation. Furthermore, the documentation would have to include a precise description of how these numerical values were derived, or what other sources of relevant research were relied on. For example, if the numerical values of coefficients within equations were determined statistically from historical data, as is often the case, then both the database and its sources would have to be described, as well as the regression or statistical techniques used to derive these coefficients, including all relevant statistical equations and parameters.

While this level of documentation for all policy-relevant models is essential from a scientific perspective, a thorough discussion of how the model being documented differs from other similar models, and why, would help give policy makers insight into why a new or different model was created in the first place. Such a discussion would naturally lead to a review of the new model’s strengths and weaknesses from the perspective of the model developer. All this material would provide a good basis for other model developers and critics to create their own reviews of the model at issue. Having all this information and discussion available to policy makers would facilitate their choice of policy models and research teams when their agencies fund a next round of research projects. But perhaps this is one of the fears that research teams have, namely that the existence of comprehensive reviews of their models might endanger funding for the next round of research projects that they propose, if their models are not highly rated. Of course, this is a real-life issue that will need to be faced. Policy makers and taxpayers want to know that funds invested in projects are used for relevant, useful models.


Who should participate in such a commission?

Probably, each regional commission should be broken down into sub-entities that review policy-relevant models in different research areas, so that each participant can focus on the kinds of models for which they have the most expertise. Within each sub-commission for each major type of model, the directors and senior research staff of all relevant research institutes, funding agencies, NGOs, universities, and, perhaps, corporations that develop such models should be represented. In addition, each sub-commission should include representatives of the journals, and their editors, that publish research in that sub-area, so that they remain up to date on the key substantive and modeling issues relevant to the research articles they will be asked to publish. In this way, peer reviewers for major journals will also be included, most likely from the research institutes and universities represented, so that they can help journal editors and research scientists better evaluate the quality of the research being done and published based on complex integrated assessment models.
 
 

 

Reflections on Rich Rosen’s proposal for improving the evaluation of complex IA models


By Jan Bakkes, Netherlands Environmental Assessment Agency (retired) and Vice-president, TIAS

Rich Rosen’s article continues a plea recently delivered at the Joint Research Centre (JRC) conference on modelling for policy support in Brussels in November; details of the conference appear in Giulia Listorti’s piece in the next article. Because Rich could not attend, I delivered his presentation. Here are my reflections on the case he makes.


1. In my opinion, Rich is too negative on transparency in practice, on progress and especially on motives. For example, in the discussion in Brussels following the presentation, Leen Hordijk told of the three successive reviews of the IMAGE model which he led, each requiring extensive documentation over and above the usual disclosure. It appears that Rich is mostly reflecting impressions. However, impressions are important and, on that level, there is an issue. The demand for transparency (or opposition cloaked as a demand for transparency) is on the rise. This reflects two simultaneous trends. First, the use of models for policy advice and policy implementation is becoming increasingly normal. Second, on some environmental issues we are beyond easy solutions, and the next phases require painful decisions.

2. Ultimately, what matters to policy are the conclusions of the assessments for which the models are used, for example, the global assessments by IPCC, IPBES and UNEP. Computational models are but one category of tools for these assessments, albeit an important category. Limitations, bias and uncertainty concerning policy-oriented conclusions can best be accounted for in terms of these assessments, rather than the models and other tools used. Would the conclusions have been significantly different if different models, a different time horizon, a more detailed societal breakdown, or different indicators had been used? I have conducted this sort of robustness analysis for two global assessments and, based on this experience, I can say that it is a lot easier to do than to provide deep transparency for a complex model. Moreover, such a robustness analysis, incorporated in the assessments, speaks directly to the policy audience.

3. Rich emphasizes disclosure. This is important, and intuitively the first thing you want to see properly done. But for improving relevance and trust from a policy perspective, disclosure is certainly not the only line of work in coping with model peculiarities in the context of complex assessments. For example, additional efforts needed include: ensemble modelling; robustness analysis of assessment conclusions, as already pointed out; proper scoping of an assessment in order to focus on what matters, rather than what can be conventionally quantified; and balanced use of quantitative and non-quantitative information.

Different models often bring different logic. Therefore, broad-based assessments such as those of IPCC, IPBES and UNEP generally benefit from the use of multiple, alternative models. If these models contradict each other, the reader is best informed by explaining the contradiction. Implicitly, this is to say that models cannot deliver ‘truth’.
For example, the tools for the OECD Environmental Outlook to 2030 comprised two models capable of projecting the time needed for the mitigation of carbon dioxide emissions and the associated costs of intervention. Each followed its own logic: one prioritized gradual technological improvement and decreasing unit costs; the other prioritized minimizing the extra buildup of carbon dioxide in the atmosphere. Obviously, the two models generated different optimal mitigation paths. The important thing, highlighted in the official presentation of the Outlook, was to explain the different paths and illuminate the political decision to be taken.

Comparing the stories generated by different models can be at least as productive as setting up a comparison of equations, price elasticities or learning curves. This is in any case the approach to be taken for those global information systems that rely largely on reported expectations by governments, rather than equations. As artificial intelligence, with its data-based pattern recognition, penetrates the world of environment-related modelling, classical disclosure of equations alone may tell the reviewer less and less.


4. Perhaps most importantly, the challenge is how to deliver relevant information on the trustworthiness of the model or assessment (uncertainty, limitations, biases, alternative views) to the policy user. That must be done in a way that is extremely succinct and to the point. In my view, that is a bigger challenge than disclosing all of the underlying equations, sources and numerical values.
 
The suggestion of ‘a Commission’ may be shorthand for various forms of institutionalized minimum demands for transparency, with the format depending on the context. Three useful building blocks come to mind, all of EU origin; some of them were profiled at the JRC conference.
  • EU Horizon 2020 projects such as LIAISE and ADVANCE that stimulated uniform formats for disclosure of model information: At least one ongoing exercise is applying this requirement to integrated assessment models: see https://www.iamcdocumentation.eu/index.php/IAMC_wiki. This is voluntary harmonization, i.e. softer than an institutionalized minimum in the spirit of the Commission suggested by Rich, but it goes in the same direction.
  • MIDAS, the JRC catalogue of models: comprises all models sponsored by the EU and/or operated by the European Commission.
  • The new mood in Europe, which influences the way EU Impact Assessments are scrutinized: short shrift is now given to assessments that quantify things that do not matter, just because the standard model happens to produce these things. ‘A Commission’ overseeing model transparency might well fit in this framework and make a useful, if partial, contribution.
These European examples are just a few that could be drawn on for a more institutionalized approach to reviewing models; they are also signs that rigor is considered important in the scientific and policy communities.

Editor's Invitation: Further reflections on this theme are welcome and will be considered for publication in the next issue. Mail to: info[at]tias-web.info

   

EU Conference on Modelling for Policy support: Experiences, challenges and the way ahead


By Giulia Listorti, Competence Centre on Modelling of the Joint Research Centre of the European Commission

Around 200 modellers and policy makers from the European Commission, international institutions and Member States, universities, research institutes and representatives of the private sector travelled to Brussels to attend the European Conference on Modelling for Policy support held on 26 – 27 November 2019. The Conference was organised by the Competence Centre on modelling (CC-MOD)* of the Joint Research Centre (JRC) of the European Commission.

Models are important analytical tools that contribute to a strengthened evidence base throughout the policy cycle. It is thus essential that they are used accurately. This implies big challenges for responsibility, transparency, quality and coherence in their development and use, as well as in the communication of results to various audiences.

Over two days, model developers and users from all policy areas where models are used to support policy making in the EU (such as agriculture, economics, energy, environment, transport, climate and risk) had the opportunity to share and exchange knowledge and best practices to identify common challenges and solutions.

The renewed commitment by the European Commission and institutions to a strengthened culture of transparent and evidence-based policymaking marked the opening and closing of the event, given respectively by Veronica Gaffey, Chair of the Regulatory Scrutiny Board (the independent body that scrutinises the quality of impact assessments and evaluations), and Charlina Vitcheva, Acting Director General of the JRC.

The keynote speeches addressed the ‘Use of global models for policy recommendations’ (Mechthild Wörsdörfer, International Energy Agency), and ‘Assessing social impacts of policies: indicators and methods’ (Klaus Jacob, Freie Universität Berlin and TIAS President).

The parallel sessions, organised around cross-cutting issues (such as model quality and transparency, uncertainty and sensitivity analysis, model linkages, complex system modelling and multi-criteria decision making), provided an occasion for dialogue between the various modelling communities and between modellers and policymakers. These discussions showed that exchange of information, collaboration and sharing of best practices yield concrete benefits in ensuring a coherent approach across policy fields and all phases of the policy cycle. Highlights included the contributions that models can make from the very early stages of shaping policies; adequate treatment of uncertainty; the variety of ongoing experiences in involving stakeholders in modelling exercises; and the importance of innovative approaches. The programme also offered further insights on the role of modelling in the broader context, namely on the integration of qualitative and quantitative approaches, and on the role of modelling in foresight exercises.

The full recordings of the event as well as the conference materials are available on the Conference webpage.

____________________
* The Competence Centre on Modelling, launched in 2017, promotes a responsible, coherent and transparent use of modelling to support the evidence base for EU policies. CC-MOD helps in identifying common approaches to quality and transparency of model use, and to facilitate dialogue between policy makers and modelling teams across the Commission. The main CC-MOD activities include corporate modelling inventory and knowledge management; sensitivity analysis of models; peer review of models; transparency and coherence in science for policy; and social multi-criteria evaluation of policy options.
 

TIAS News


TIAS welcomes new vice-president, Marcela Brugnach


As announced to members at the end of October, elections were held for the positions of President and the two Vice-Presidents. We are very happy to welcome back Klaus Jacob as President and Jan Bakkes as Vice-President. A warm welcome was extended to Marcela Brugnach, who has taken on the position of second Vice-President. We look forward to working with Marcela, who moves from the advisory board to the executive board. She recently took up a position as Ikerbasque research professor at the Basque Centre for Climate Change in Bilbao, Spain, which we trust will provide new perspectives and horizons for our relationship.
 
Marcela replaces Claudia Pahl-Wostl, who steps back after serving on the TIAS executive board since its inception in 2003. Claudia continues to support TIAS as a member of the advisory board. We greatly appreciate the 16 years she devoted to leading the association, first as president and then as vice-president, and the time and effort she dedicated to initiating and planning TIAS events such as training courses for young researchers and webinars.

Our newest member

Karina Vink is a postdoctoral researcher in the Water Engineering and Management Group of the Department of Civil Engineering and Management, University of Twente in the Netherlands. Karina is developing methods to quantify the net impacts of green infrastructure on urban water and energy resources, and is interested in how environmental ethics and the sense of responsibility of various stakeholders shape the current climate and energy transitions that the Netherlands and other countries are facing. Achieving positive impacts requires that researchers and planners go beyond infrastructure, into the realms of data management, behavioral science, communication, and planning future policies and costs under uncertain conditions. Therefore, Karina applies a transdisciplinary approach and a commitment to sustainable management to develop novel methods of environmental quantification for green infrastructure.

 


Adapted from photo by Aleksi Tappura on Unsplash

 

Recent Publications


Publications of our members

Felix Schenuit, Larissa Koch and Michael Jakob (2019): Markets for Public Attention at the Interface of Climate Science and Policy Making. Environmental Communication. DOI: 10.1080/17524032.2019.1688370

This recently published paper examines biased incentives for the production and use of climate change research and proposes ways to restructure the science-policy interface to better deal with these biases. We argue that policy-makers and the media have a tendency to pay more attention to extreme results because, for example, they confirm their ideological position or make a good story. This “adverse selection” of scientific results therefore conveys the impression of more uncertainty than there actually is. In addition, this effect may also pose a “moral hazard” for scientists engaging in research that receives substantial public attention, for example, by presenting point estimates instead of thoroughly discussing the uncertainties and sensitivities associated with their results. To avoid turning the market for public attention into a “market for lemons”, we recommend that scientists instead adopt the logic of assessment-making and rely more on meta-studies. We also highlight the importance of providing best-practice guidelines for the treatment of scientific uncertainty, incorporating the communication of uncertainty in university curricula and establishing face-to-face dialogue forums between researchers and policy makers.
________________________

Carnell, Edward John, Massimo Vieno, Sotiris Vardoulakis, Rachel C. Beck, Clare Heaviside, Samuel Tomlinson, Ulrike Dragosits, Mathew R. Heal, and Stefan Reis. Modelling public health improvements as a result of air pollution control policies in the UK over four decades–1970 to 2010. Environmental Research Letters (2019).

This study reviews 40 years of air pollution control policies in the UK and how they have affected human health by reducing attributable mortality across the whole population. Over this period, UK attributable mortality due to exposure to PM2.5 and NO2 declined by 56% and 44% respectively, while ozone-attributable respiratory mortality increased by 17%.
 ________________________

Other Publications

George van Voorn, Gert Jan Hofstede, Bruce Edmonds, Gary Polhill (eds.) Agent-based modelling to study resilience in socio-ecological systems. Ecological Complexity. Vol. 40, December 2019.

 

Events


3-5 June 2020, Dresden Nexus Conference 2020 (DNC 2020): Circular Economy in a Sustainable Society, in Dresden, Germany. The conference is part of an international event series dedicated to advancing research and the implementation of a Nexus Approach to resource management.

15-18 June 2020, Food for the Future 30th World Conference. Rotterdam.

21-23 September 2020. PNS 5 Symposium, Knowledge, Science Practices and Integrity: Quality through Post-Normal Science Lenses. Palazzo Fenzi-Marucelli, University of Florence, Italy. Deadline for abstracts: 31 January, 2020.

10-12 November 2020, Urban Transitions 2020: Integrating urban and transport planning, environment and health for healthier urban living. In Sitges, Barcelona, Spain. Abstracts are invited by 5 June 2020 for concurrent oral presentations and posters on the following topics: Cities · Land use and transport · Planning, environment and health · Nature-based solutions / green cities · Justice and inequality · Engagement, impacts and education.

25-27 November, 2020. International conference: "Sustainable & Resilient Urban-Rural Partnerships – URP2020" Hosted by the City of Leipzig.  URP2020 will be embedded in the German EU Council presidency in 2020 and is held under the auspices of the German Minister of Education and Research. Sponsored by, among others, UN-HABITAT, UrbAct, ISOCARP, JPI URBAN EUROPE.

18-21 August 2020. 11th International Sustainability Transitions (IST) Conference: Governance in an Era of Change – Making Sustainability Transitions Happen. Vienna, Austria. The event is organised by AIT Austrian Institute of Technology, Center for Innovation Systems and Policy and Vienna University of Economics and Business, Institute for Law and Governance/ Research Institute for Urban Management and Governance. Abstract submission deadline: 31st January 2020.

24-27 June 2020. The 25th Annual Conference of the European Association of Environmental and Resource Economists (EAERE) will be held in Berlin, Germany. The Conference is organised by Technische Universität Berlin (TUB), Humboldt-Universität zu Berlin (HUB), the German Institute for Economic Research (DIW), and the Mercator Research Institute on Global Commons and Climate Change (MCC). Submission deadline for papers and thematic sessions: January 31, 2020.
 
Research positions

 

Research Associate in Sustainable Science with a focus on water quality & security

The University of Luxembourg invites applications for the position of Research Associate (Postdoctoral Researcher) in Sustainable Science with a focus on water quality and security in its Faculty of Language and Literature, Humanities, Arts and Education (FLSHASE). The successful candidate will join the Sustainability Science research group in the Research Unit ‘Education, Cognition, Culture and Society’. The position is created within the framework of the NEXUS FUTURES project and is co-financed by the Luxembourg Ministry of Environment, Climate and Sustainable Development. Application deadline: 15 January 2020. More details are available from http://emea3.mrted.ly/2cmhh
 

Alexander von Humboldt Foundation International Climate Protection Fellowship

The Alexander von Humboldt Foundation grants up to 20 fellowships to prospective leaders from non-European transition or developing countries, who are active in any of the following areas: scientific, engineering-based, legal, economic, health-related or social aspects of climate change. The young climate experts will come to Germany for a year to work alongside a host of their own choosing on a research-related project. Application deadline: 1 March 2020.  Further information, a list of all application requirements and a link to the online application form are available at www.humboldt-foundation.de/web/ICF.
 

H2020 Marie Curie Innovative Training Network Interdisciplinary connectivity: Understanding and managing complex systems using connectivity science

Call for applications for two Doctoral Research Fellowship (PhD) positions:
1) Catastrophic transitions: Regime shifts in network topology resulting in novel systems;
2) Critical nodes in economic connectivity: A multi-method application to facilitate structural transitions.
Location: Faculty of Social Studies, Masaryk University, Czech Republic
Application deadline: 31 January 2020
More information and applications: https://www.muni.cz/en/about-us/careers/vacancies/51431


 

 

TIAS Quarterly Newsletter

TIAS Quarterly is the newsletter of The Integrated Assessment Society.
ISSN: 2077-2130
Editor: Caroline van Bers
Associate editors: Caroline Lumosi, Anna-Lena Guske
Photos: © Ulli Meissner (http://www.ullimeissner.com/), unless otherwise indicated
Layout: Worldshaper design - Fabian Heitmann, Caroline van Bers
TIAS President: Klaus Jacob
TIAS Vice-presidents: Jan Bakkes, Marcela Brugnach


TIAS Secretariat, Germany

E-Mail: info[at]tias-web.info
Web:   http://www.tias-web.info/

Become a TIAS member

TIAS Membership fees

Individuals: € 50 / US$ 65 annually
Developing country: € 35 / US$ 40

Students: € 15 / US$ 20 annually
Developing country: € 10 / US$ 10

Institutions: € 200 / US$ 250 annually
Developing country: € 150 / US$ 165
Copyright © 2019 The Integrated Assessment Society, All rights reserved.


 






The Integrated Assessment Society e. V. · Augustenburgerstr. 28 · Osnabruck 49078 · Germany
