ECHOES   The Newsletter of the EoCoE project
Vol. 1, Issue 1
                       10th February 2017

The EoCoE Project

The Energy oriented Centre of Excellence in computing applications (EoCoE, read as “Echo”) is one of the eight Centres of Excellence in computing applications recently established within the Horizon 2020 programme of the European Commission. The primary objective of all the newly formed Centres of Excellence is to help strengthen Europe’s leadership in HPC applications by tackling challenges in topical areas such as renewable energy, materials modelling and design, molecular and atomic modelling, climate change, Global Systems Science, bio-molecular research, and tools to improve HPC application performance.
EoCoE uses the tremendous potential offered by the ever-growing computing infrastructure to foster and accelerate the European transition to a reliable low-carbon energy supply using High Performance Computing (HPC). The centre will aid the transition to sustainable energy sources and usage via targeted support for numerical modelling in four pillars: Meteorology, Materials, Water and Fusion. These pillars are embedded within a transversal multidisciplinary effort providing high-end expertise in applied mathematics and HPC. EoCoE, led by Maison de la Simulation (France), is structured around the Franco-German hub Maison de la Simulation - Jülich Research Centre. The project coordinates a pan-European network of 21 teams in 8 countries with a total budget of 5.5 M€. Its partners are experts in HPC as well as in energy research, ensuring that EoCoE is fully integrated into the overall European strategy for HPC.

EoCoE launches its services web page

The overall aim of the EoCoE project is to address the “skill gap” in computational science expertise in the Energy research field, in particular the mismatch of skills between academia and industry in this field.
To this effect, EoCoE offers an ever-expanding range of HPC services to end-users involved in Sustainable Energies from Academia, Industry and the Public Sector. Potential end-users from Industry, Government and other stakeholders can now browse the list of HPC services that EoCoE offers to the Energy Community, which are aimed at applications in Meteorology (DNI forecasting etc.), Water for Energy, Fusion, Materials and cross-cutting activities. You can now:

  • Improve your own applications using EoCoE code auditing, optimisation and enhancement services.
  • Utilise EoCoE cutting-edge numerical tools for Materials for energy, Wind for energy, Geothermal energy, Hydropower, and Short-term weather forecasting and nowcasting.
  • Benefit from EoCoE consultancy and training services in modelling energy-oriented processes, in running simulations to answer industrial questions, and in HPC-related topics.
These services can come in many forms, such as placements within the academic partners, mentoring, workshops, online courses, live virtual support, access to code libraries, and finished products.
More details about these services can be found on the EoCoE web page.


Figure: Multiscale features of the simulated water table depth across the EURO-CORDEX domain; snapshot from 10 August 2003.
Reference: Keune, J., F. Gasper, K. Goergen, A. Hense, P. Shrestha, M. Sulis, and S. Kollet (2016), Studying the influence of groundwater representations on land surface-atmosphere feedbacks during the European heat wave in 2003, J. Geophys. Res. Atmos., 121, doi:10.1002/2016JD025426

The software package ParFlow, an integrated parallel watershed model which simulates surface and subsurface flow, has undergone new developments this year which have paved the way for petascale performance. The integration of ParFlow into JUBE2, the Jülich Benchmarking Environment, a project undertaken by Wendy Sharples at the Terrestrial Systems Simulation Laboratory within the Jülich Supercomputing Centre, has enabled in-depth profiling of ParFlow with a whole range of the latest HPC profiling tools, such as Scalasca, Vampir, Extrae and TAU. Analysis of the profiling results has identified bottlenecks and led to strategies to remove them. This article focuses on two strategies employed to remove bottlenecks on the path towards exascale: (1) the implementation of parallel I/O with NetCDF-4 to reduce data storage and post-processing overheads, thereby tackling some of the big-data challenges posed by very large data volumes in the geosciences, and (2) the implementation of domain decomposition with the p4est library to reduce memory overheads, with a view to using it for adaptive mesh refinement.
Nearly all geophysical and atmospheric science software reads and writes data in the NetCDF format (Network Common Data Form) or a NetCDF-interoperable format. The NetCDF libraries produce model data in a self-describing, machine-independent format that supports the creation, access and sharing of array-oriented scientific data, and is used with infrastructures such as the Earth System Grid Federation nodes. The NetCDF-4 library allows such data to be created in parallel using HDF5, and can optionally compress it on the fly via the zlib library using the multidimensional tiling feature, albeit in serial. Replacing the ParFlow Binary and Silo I/O formats with parallel NetCDF-4 I/O makes ParFlow more big-data capable, thanks to the much reduced storage space required and the readability of model data by existing visualisation and graphical analysis toolkits such as ParaView and VisIt. The conversion of the I/O to the NetCDF format, a project undertaken by Klaus Görgen and Lukas Poorthuis at the Terrestrial Systems Simulation Laboratory within the Jülich Supercomputing Centre, has made ParFlow model data highly portable across platforms and applications and more efficient on parallel filesystems. In addition, the use of NetCDF metadata standards allows much more efficient provenance tracking.
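The chunked-compression idea at the heart of NetCDF-4/HDF5 storage can be sketched with nothing but the Python standard library. This is a hypothetical illustration, not the NetCDF API: a 2-D field is split into fixed-size tiles and each tile is deflated independently with zlib, so individual tiles can later be read back without decompressing the whole array. The dimensions and tile size are made-up.

```python
import zlib

NX, NY = 256, 256        # field dimensions (illustrative)
TILE = 64                # tile edge length (NetCDF-4 calls this a chunk size)

# A smooth, highly compressible field, stored row-major as bytes
field = bytes((x + y) % 251 for y in range(NY) for x in range(NX))

def tile_bytes(data, nx, tile, tx, ty):
    """Extract one tile (tx, ty) from a row-major byte array."""
    rows = []
    for y in range(ty * tile, (ty + 1) * tile):
        start = y * nx + tx * tile
        rows.append(data[start:start + tile])
    return b"".join(rows)

# Compress each tile on the fly, as NetCDF-4 does per chunk via zlib
chunks = {}
for ty in range(NY // TILE):
    for tx in range(NX // TILE):
        chunks[(tx, ty)] = zlib.compress(tile_bytes(field, NX, TILE, tx, ty))

compressed_size = sum(len(c) for c in chunks.values())
print(f"raw: {len(field)} bytes, tiled+deflated: {compressed_size} bytes")

# Any single tile can be restored independently of the others
restored = zlib.decompress(chunks[(1, 2)])
assert restored == tile_bytes(field, NX, TILE, 1, 2)
```

Because each chunk is self-contained, a reader that only needs a small sub-domain touches only the chunks that intersect it, which is what makes the format efficient on parallel filesystems.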
The adaptive mesh refinement software p4est takes a forest-of-octrees approach to storing, refining and coarsening a distributed mesh. Octrees represent a three-dimensional mesh by recursively subdividing a cubic mesh element into eight child elements. What sets p4est apart from the octree partitioning algorithm originally implemented in ParFlow is that the mesh storage is fully distributed in parallel and that its implementation scales to the full size of contemporary supercomputers. A project undertaken by Carsten Burstedde and Jose A. Fonseca at the Institute for Numerical Simulation at the University of Bonn and Stefan Kollet at FZ Jülich, funded through the DFG's TR32 initiative, has led to the integration of ParFlow with p4est. As a main result, there is no longer any need to store the entire mesh connectivity information on each processor. This has significantly reduced ParFlow's memory footprint, effectively removing the memory bottleneck that had previously prevented ParFlow from running at extreme scales. This work will be merged into the public version of ParFlow in the near future, making the improved speed and scalability available to the wider public.
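The octree idea can be sketched in a few lines. This is a hypothetical toy model, not the p4est API: an octant identified by integer coordinates at refinement level L splits into eight children at level L+1, and each octant gets a Morton (Z-order) index so that a distributed mesh can be partitioned by simply splitting the sorted index range across processes, which is broadly how p4est assigns elements.

```python
def refine(elem):
    """Split one octant (x, y, z, level) into its eight children."""
    x, y, z, level = elem
    return [(2 * x + dx, 2 * y + dy, 2 * z + dz, level + 1)
            for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]

def morton(elem):
    """Interleave the bits of x, y, z into a Z-order index."""
    x, y, z, level = elem
    code = 0
    for bit in range(level):
        code |= ((x >> bit) & 1) << (3 * bit)
        code |= ((y >> bit) & 1) << (3 * bit + 1)
        code |= ((z >> bit) & 1) << (3 * bit + 2)
    return code

root = (0, 0, 0, 0)
level1 = refine(root)                              # 8 elements
level2 = [c for e in level1 for c in refine(e)]    # 64 elements

# Sorting by Morton index yields the space-filling-curve order used to
# hand contiguous, equally sized element ranges to each process.
ordered = sorted(level2, key=morton)
```

The key property is that no process needs the global connectivity: its share of the curve, plus a thin layer of ghost elements, is enough, which is why the per-processor memory footprint shrinks.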
The above developments have improved the scalability of ParFlow such that it is becoming a true petascale application, with successful demonstrations running on the entire 28-rack IBM BlueGene/Q system at the Jülich Supercomputing Centre.


Figure: Map of Cyprus and model forecasts. Daily means of long-term observations (black) of the global horizontal irradiance (GHI) obtained at The Cyprus Institute's Solar Research Facility “PROTEAS”, Pentakomo, and model calculations with the EMAC atmospheric chemistry-climate model (red). Overlaid (green) are the EMAC results for the Aerosol Optical Depth (AOD); the AOD peaks coincide nicely with the GHI/DNI troughs. The high AOD peaks in spring 2015 are caused by strong outflow events of mineral dust, which is captured rather well by this high-resolution version of EMAC, nudged to ERA-Interim reanalysis data. Our EMAC aerosol version is developed and maintained at the Cyprus Institute, in close collaboration with the Max Planck Institute for Chemistry.
Within the European Commission Horizon 2020 project, under task WP2.3, Optimal Operation of Concentrated Solar Power (CSP) under Weather Uncertainty, researchers at The Cyprus Institute have developed beyond-state-of-the-art localised day-ahead forecasts of the Global Horizontal Irradiation (GHI), which is observed at the CyI-owned CSP facility site in Pentakomo, Cyprus.
The augmented model results are obtained with a “solar” version of EMAC, a high-resolution Earth System Model with fully coupled aerosol-chemistry-cloud-radiation feedbacks, which has been developed and applied to DNI/GHI forecasting by S. Metzger, CyI. Considering these feedbacks is essential for forecasting, since the GHI and the related direct normal irradiance (DNI) are strongly influenced by aerosols, haze and clouds, whereby the aerosol hygroscopic growth and the associated aerosol water (AW) mass control the atmospheric visibility. For our augmented model version we have reduced the complexity of the required aerosol chemistry and AW thermodynamics to a minimum, so that the numerical forecasts can be obtained efficiently while remaining in superior agreement with the observations at the PROTEAS facility, thanks to the fully coupled aerosol-chemistry-cloud-radiation feedbacks.
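The link between aerosol load and irradiance can be illustrated with a deliberately simplified, hypothetical calculation. This is not the EMAC scheme, which resolves the full coupled aerosol-chemistry-cloud-radiation system; it is only the Beer-Lambert extinction law with made-up AOD values and zenith angle, showing why higher AOD (e.g. a dust outflow event) depresses the DNI.

```python
import math

I0 = 1361.0                      # W/m^2, approximate solar constant
zenith_deg = 30.0                # assumed solar zenith angle
airmass = 1.0 / math.cos(math.radians(zenith_deg))

def dni(aod):
    """Direct normal irradiance with aerosol extinction only (Beer-Lambert)."""
    return I0 * math.exp(-aod * airmass)

for aod_val, label in [(0.1, "clean"), (0.8, "dust outflow")]:
    print(f"AOD {aod_val:.1f} ({label}): DNI ~ {dni(aod_val):.0f} W/m^2")
```

In this toy picture, hygroscopic growth enters by inflating the effective AOD as aerosol particles take up water at high humidity, which is one reason the full coupled treatment matters for forecast accuracy.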
The PROTEAS facility in Cyprus will be further utilised to optimise predictions for local atmospheric conditions (turbidity/visibility, humidity), as these affect the nontrivial irradiation attenuation relevant to GHI/DNI forecasting. In turn, the aerosol and four-dimensional radiative model output is used to enhance predictions of optimal energy storage schedules, subject also to market spot prices (by J. Cumptson and A. Mitsos), in order to minimise costs for grid and utility operators, as well as for the general public. This work also forms the first attribution of uncertainty in plant operation to a physical cause, in this case the mirror surface error.


The first EoCoE-POP workshop on benchmarking and performance analysis brought together code developers of community codes associated with the various thematic areas of EoCoE, with HPC experts associated with WP 1 and HPC experts from the “POP” Centre of Excellence. The goal was to familiarise the developers from the EoCoE consortium with state-of-the-art HPC performance analysis tools, enabling the teams to make a preliminary identification of bottlenecks, and to initiate the standardisation of benchmark procedures for these codes within the EoCoE project.
As an initial step, all code developers were instructed on how to perform benchmarking within the JUBE workflow environment, which will permit measurements to be documented, shared and rigorously reproduced over the project lifetime and beyond. Developers were then able to begin analysing their applications using specific HPC tools under the guidance of HPC experts (Score-P, Scalasca, Vampir, Paraver, Extrae, among others). Based on this face-to-face collaboration and common training, small teams of code developers and HPC experts were established, whose role was to follow-up on the promising initial work to provide comprehensive benchmarks and performance data by the time the next workshop was held.
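A JUBE benchmark is described declaratively in XML, which is what makes the measurements reproducible and shareable. The fragment below is a minimal hypothetical sketch; the benchmark name, node counts, launcher command and input file are invented for illustration and do not come from the workshop material.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="parflow_scaling" outpath="bench_run">
    <!-- JUBE expands every combination of these values into one run -->
    <parameterset name="run_pset">
      <parameter name="nodes" type="int">1,2,4,8</parameter>
      <parameter name="tasks_per_node" type="int">48</parameter>
    </parameterset>
    <step name="execute">
      <use>run_pset</use>
      <do>srun -N $nodes --ntasks-per-node=$tasks_per_node ./parflow input.pfidb</do>
    </step>
  </benchmark>
</jube>
```

Because every parameter combination, command and output lives in this one file, rerunning the identical sweep months later, or on a partner's machine, is a single `jube run` invocation.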
A very valuable outcome was the exchange of ideas and needs between code developers and HPC experts, as this helped clarify the issues from either perspective and enabled both sides to interact more smoothly, with a well-defined focus on the next actions to be taken.
The second EoCoE-POP workshop on benchmarking and performance analysis, held at Maison de la Simulation, brought together, in a similar way to the first workshop, code developers of community codes associated with the thematic work packages of EoCoE with HPC experts associated with the transversal activities and HPC experts from the “POP” Centre of Excellence. In addition, this time two external partners (EDF and BRGM) joined the event and brought codes and code developers.
The format and the means were also very similar to those of the first workshop. The only difference for code developers was that this time they could start their JUBE integration from a template rather than from scratch, and therefore could progress even faster than during the first workshop.
Contact us

Maison de la Simulation
CEA - Centre d’études de Saclay
Digiteo Labs - Bâtiment 565 - PC 190
F-91191 Gif-sur-Yvette Cedex

Click to unsubscribe
The project EoCoE receives support from the European Commission “Horizon 2020” Framework Programme under grant number 676629
