Synchronisation: how can this help weather forecasts in the future?

Current numerical modelling and data assimilation methods still face problems in strongly nonlinear cases, such as at convective scales. A different but interesting tool to help overcome these issues can be found in synchronisation theory.

It all started in 1665, when the Dutch scientist Christiaan Huygens noticed that his two pendulum clocks were suddenly oscillating in opposite directions, but in a synchronised way. He tried to desynchronise them by randomly perturbing one of the clocks, but surprisingly, after some time, both devices were synchronised again. He attributed the phenomenon to the frame both clocks shared, and with that the field of synchronisation was opened to the world.

figure1

Figure 1: A drawing by Christiaan Huygens of his experiment in 1665.

Nowadays, researchers use these synchronisation concepts with one main goal: to synchronise a model (any model) with the true evolution of a system, using measurements. Even when only a small part of this system is observed, synchronisation between the model and the true state can still be achieved. This is quite similar to what data assimilation looks for, as it aims to synchronise a model evolution with the truth by using observations, finding the best estimate of the state evolution and its uncertainty.

So why not investigate the benefits of recent synchronisation findings and combine these concepts with a data assimilation methodology?

At the start of this project, the first obvious step was to open up the synchronisation field to higher-dimensional systems, as the experiments performed in the area had all focused on low-dimensional, idealised systems. To this end, a first new idea was proposed: an ensemble version of a synchronisation scheme, which we call EnSynch (Ensemble Synchronisation). Tests with a partly observed 1000-dimensional chaotic model show a very close correspondence between the model and the true trajectories, both for the estimation and the prediction periods. Figures 2 and 3 show how our estimates and the truth lie on top of each other, i.e. they are synchronised. Note that we do not have observations for all of the variables in our system, so it is remarkable to obtain equally successful results for both the observed and the unobserved variables!
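
To give a flavour of the basic mechanism (this is a toy illustration of synchronisation by nudging, not the EnSynch scheme itself), the sketch below couples a "model" run of the widely used Lorenz-96 system to a "truth" run through half of the variables and checks whether the unobserved half locks on as well. The model, coupling strength and observation pattern are all illustrative assumptions.

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Tendency of the Lorenz-96 system, a standard chaotic toy model."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

n = 40
rng = np.random.default_rng(1)
truth = rng.standard_normal(n)      # the "true" trajectory we pretend not to know
model = rng.standard_normal(n)      # our model, started from a different state
observed = np.arange(0, n, 2)       # observe every other variable only
unobserved = np.arange(1, n, 2)
relax = 0.5                         # fraction of the mismatch removed each step

for _ in range(20000):
    truth = rk4_step(truth)
    model = rk4_step(model)
    # Synchronisation (nudging) step: pull the model towards the truth,
    # but only where observations exist.
    model[observed] += relax * (truth[observed] - model[observed])

# With enough observed variables and strong enough coupling, the error in
# the *unobserved* variables typically decays towards zero as well.
print("RMS error in unobserved variables:",
      np.sqrt(np.mean((truth[unobserved] - model[unobserved]) ** 2)))
```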

figure2

Figure 2: Trajectories of 2 variables (top: observed; bottom: unobserved). Blue lines: truth. Green lines: estimates/predictions. (Predictions start after the red lines, i.e. no data assimilation is used.)

figure3

Figure 3: Zoom into the trajectory of one variable, showing how the model matches the truth. Blue line: truth. Red line: our model. Yellow dots: observations.

The second and main idea is to test a combination of this successful EnSynch scheme with a data assimilation method called the particle filter. As a proper data assimilation methodology, a particle filter provides the best estimate of the state evolution and its uncertainty. To illustrate the importance of data assimilation in following the truth, figure 4 compares an ensemble of models running freely in a chaotic nonlinear system with the same ensemble when a data assimilation method is applied to it.
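
For readers meeting particle filters for the first time, here is a minimal sketch of the simplest ("bootstrap") variant on a one-dimensional toy problem. The toy model, noise levels and resampling choice are illustrative assumptions; the schemes used in practice (including equal-weights variants) are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 50

def propagate(x):
    """Toy nonlinear model with additive model error."""
    return 0.5 * x + 25 * x / (1 + x ** 2) + rng.normal(0, 1, size=x.shape)

truth = np.array([0.1])
particles = rng.normal(0, 2, n_particles)
obs_err = 1.0

for t in range(n_steps):
    truth = propagate(truth)
    particles = propagate(particles)
    obs = truth[0] + rng.normal(0, obs_err)        # noisy observation of the truth

    # Weight each particle by how well it explains the observation...
    weights = np.exp(-0.5 * ((obs - particles) / obs_err) ** 2)
    weights /= weights.sum()

    # ...then resample, so that likely particles are duplicated and
    # unlikely ones are discarded (the "bootstrap" step).
    particles = rng.choice(particles, size=n_particles, p=weights)

print("truth:", truth[0], " ensemble mean:", particles.mean())
```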

figure4

Figure 4: Trajectories of ensemble members. Blue: with data assimilation. Red: without data assimilation. Truth is in black.

Promising results are obtained by combining the new EnSynch scheme with particle filters. An example is shown in figure 5, where the particles (ensemble members) for an unobserved variable follow the truth closely during the assimilation period and also during the forecast stage (after t=100).

figure5

Figure 5: Trajectory for an unobserved variable in a 1000-dimensional system. Observations occur every 10 time steps until t=100. Predictions start after t=100.

These results are motivating, and the next big step is to implement this combined system in a larger atmospheric model. The methodology has been shown to be a promising solution for strongly nonlinear problems, and potential benefits are expected for numerical weather prediction in the near future.

References:

Rey, D., M. Eldridge, M. Kostuk, H. Abarbanel, J. Schumann-Bischoff, and U. Parlitz, 2014a: Accurate state and parameter estimation in nonlinear systems with sparse observations. Physics Letters A, 378, 869-873, doi:10.1016/j.physleta.2014.01.027.

Zhu, M., P. J. van Leeuwen, and J. Amezcua, 2016: Implicit equal-weights particle filter. Quart. J. Roy. Meteorol. Soc., 142, 1904-1919, doi:10.1002/qj.2784.

 

Should we be ‘Leaf’-ing out vegetation when parameterising the aerodynamic properties of urban areas?

Email: C.W.Kent@pgr.reading.ac.uk

When modelling urban areas, vegetation is often ignored in an attempt to simplify an already complex problem. However, vegetation is present in all urban environments and it is not going anywhere… For reasons ranging from sustainability to improvements in human well-being, green spaces are increasingly becoming part of urban planning agendas. Incorporating vegetation is therefore a key part of modelling urban climates. Vegetation provides numerous (dis)services in the urban environment, each of which requires individual attention (Salmond et al. 2016). One of my research interests is how vegetation influences the aerodynamic properties of urban areas.

Two aerodynamic parameters can be used to represent the aerodynamic properties of a surface: the zero-plane displacement (zd) and aerodynamic roughness length (z0). The zero-plane displacement is the vertical displacement of the wind-speed profile due to the presence of surface roughness elements. The aerodynamic roughness length is a length scale which describes the magnitude of surface roughness. Together they help define the shape and form of the wind-speed profile which is expected above a surface (Fig. 1).
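
For reference, under neutral atmospheric conditions the idealised logarithmic profile sketched in Fig. 1 is commonly written as

u(z) = \frac{u_*}{\kappa} \ln\left(\frac{z - z_d}{z_0}\right)

where u_* is the friction velocity and \kappa (approximately 0.40) is the von Kármán constant.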

blogpostpic

Figure 1: Representation of the wind-speed profile above a group of roughness elements. The black dots represent an idealised logarithmic wind-speed profile which is determined using the zero-plane displacement (zd) and aerodynamic roughness length (z0) (lines) of the surface.

For an urban site, zd and z0 may be determined using three categories of methods: reference-based, morphometric and anemometric. Reference-based methods require a comparison of the site to previously published pictures or look-up tables (e.g. Grimmond and Oke 1999); morphometric methods describe zd and z0 as a function of roughness-element geometry; and anemometric methods use in-situ observations. The aerodynamic parameters of a site may vary considerably depending upon which of these methods is used, but efforts are being made to understand which parameters are most appropriate for accurate wind-speed estimation (Kent et al. 2017a).
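
As a rough illustration of why the choice of method matters, the sketch below feeds two hypothetical (zd, z0) pairs for the same site (invented values, not produced by any of the methods above) into the logarithmic wind profile and compares the estimated wind speeds.

```python
import numpy as np

KAPPA = 0.40          # von Karman constant
U_STAR = 0.5          # friction velocity (m/s), assumed identical in both cases

def log_wind(z, zd, z0, u_star=U_STAR):
    """Neutral logarithmic wind-speed profile above the roughness elements."""
    return (u_star / KAPPA) * np.log((z - zd) / z0)

# Two hypothetical (zd, z0) pairs for the same site, as might be obtained
# from two different reference-based or morphometric methods.
params_a = dict(zd=7.0, z0=0.8)    # "rougher" estimate
params_b = dict(zd=4.0, z0=0.3)    # "smoother" estimate

z = 30.0   # height of interest (m), above the roughness elements
print("u(30 m), method A: %.2f m/s" % log_wind(z, **params_a))
print("u(30 m), method B: %.2f m/s" % log_wind(z, **params_b))
```

With these invented numbers the two estimates differ by roughly 30 percent, which is why choosing appropriate aerodynamic parameters matters for wind-speed estimation.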

Within the morphometric category (i.e. using roughness-element geometry) sophisticated methods have been developed for buildings or vegetation only. However, until recently no method existed to describe the effects of both buildings and vegetation in combination. A recent development overcomes this, whereby the heights of all roughness elements are considered alongside a porosity correction for vegetation (Kent et al. 2017b). Specifically, the porosity correction is applied to the space occupied and drag exerted by vegetation.

The development is assessed across several areas typical of a European city, ranging from a densely built city centre to an urban park. The results demonstrate that where buildings are the dominant roughness elements (i.e. taller and occupying more space), vegetation does not obviously influence the calculated geometry of the surface, the aerodynamic parameters, or the estimated wind speed. However, as vegetation begins to occupy a greater amount of space and becomes as tall as (or taller than) the buildings, its influence becomes obvious. As expected, the implications are greatest in the urban park, where the overlooking vegetation means that wind speeds may be slowed by up to a factor of three.

Up to now, experiments such as those in wind tunnels have focused upon buildings or trees in isolation. Future experiments which consider both buildings and vegetation together will certainly be valuable, both to further our understanding of the interaction within and between these roughness elements and to assess the parameterisation.

References

Grimmond CSB, Oke TR (1999) Aerodynamic properties of urban areas derived from analysis of surface form. J Appl Meteorol and Clim 38:1262-1292.

Kent CW, Grimmond CSB, Barlow J, Gatey D, Kotthaus S, Lindberg F, Halios CH (2017a) Evaluation of Urban Local-Scale Aerodynamic Parameters: Implications for the Vertical Profile of Wind Speed and for Source Areas. Boundary-Layer Meteorology 164: 183-213.

Kent CW, Grimmond CSB, Gatey D (2017b) Aerodynamic roughness parameters in cities: Inclusion of vegetation. Journal of Wind Engineering and Industrial Aerodynamics 169: 168-176.

Salmond JA, Tadaki M, Vardoulakis S, Arbuthnott K, Coutts A, Demuzere M, Dirks KN, Heaviside C, Lim S, Macintyre H (2016) Health and climate related ecosystem services provided by street trees in the urban environment. Environ Health 15:95.

Future of Cumulus Parametrization conference, Delft, July 10-14, 2017

Email: m.muetzelfeldt@pgr.reading.ac.uk

For a small city, Delft punches above its weight. It is famous for many things, including its celebrated Delftware (Figure 1). It was also the birthplace of one of the Dutch masters, Johannes Vermeer, who coincidentally painted some fine cityscapes with cumulus clouds in them (Figure 2). There is a university of technology with some impressive architecture (Figure 3). It holds the dubious honour of being the location of the first assassination using a pistol (or so we were told by our tour guide), when William of Orange was shot in 1584. To this list, it can now add hosting a one-week conference on the future of cumulus parametrization, and hopefully bringing about more of these conferences in the future.

Delftware_display

Figure 1: Delftware.

Vermeer-view-of-delft

Figure 2: Delft with canopy of cumulus clouds. By Johannes Vermeer, 1661.

Delft_AULA

Figure 3: AULA conference centre at Delft University of Technology – where we were based for the duration of the conference.

So what is a cumulus parametrization scheme? The key idea is as follows. Numerical weather and climate models work by splitting the atmosphere into a grid, with a corresponding grid length representing the length of each of the grid cells. By solving equations that govern how the wind, pressure and heating interact, models can then be used to predict the weather days in advance, or to predict how the climate will respond to any forcings over longer timescales. However, any phenomena that are substantially smaller than this grid length will not be “seen” by the models. For example, a large cumulonimbus cloud may have a horizontal extent of around 2 km, whereas individual grid cells could be 50 km across in the case of a climate model. A cumulonimbus cloud will therefore not be explicitly modelled, but it will still have an effect on the grid cell in which it is located – in terms of how much heating and moistening it produces at different levels. To capture this effect, the clouds are parametrized: the vertical profile of the heating and moistening due to the clouds is calculated based on the conditions in the grid cell, and this then feeds back on the grid-scale values of these variables. A similar idea applies for shallow cumulus clouds, such as the cumulus humilis in Vermeer’s painting (Figure 2), or those over present-day Delft (Figure 3).
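
As a caricature of what such a scheme does (every number and profile shape below is invented purely for illustration and bears no relation to any operational scheme), a parametrization is essentially a function that maps the resolved grid-cell state to sub-grid heating and moistening profiles:

```python
import numpy as np

def toy_convection_scheme(theta, q, q_crit=0.012):
    """Toy cumulus parametrization: given grid-cell mean potential temperature
    (theta, K) and specific humidity (q, kg/kg) on model levels, return heating
    and moistening tendencies (per second) representing unresolved clouds."""
    nlev = theta.size
    heating = np.zeros(nlev)
    moistening = np.zeros(nlev)

    # Crude trigger: only "convect" if the lowest level is moist enough.
    if q[0] > q_crit:
        profile = np.sin(np.linspace(0, np.pi, nlev))   # deep heating profile
        heating = 1e-4 * profile                        # warm the column aloft
        moistening = -5e-8 * profile                    # dry it where it heats
        moistening[0] = -1e-7                           # remove boundary-layer moisture

    return heating, moistening

# The grid-scale model then adds these tendencies to its own resolved ones.
theta = np.linspace(300, 330, 20)        # idealised column of 20 levels
q = np.linspace(0.015, 0.001, 20)
dtheta_dt, dq_dt = toy_convection_scheme(theta, q)
theta += 600 * dtheta_dt                 # apply over a 10-minute time step
q += 600 * dq_dt
```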

These cumulus parametrization schemes are a large source of uncertainty in current weather and climate models. The conference was aimed at bringing together the community of modellers working on these schemes, and working out which might be the best directions to go in to improve these schemes, and consequently weather and climate models.

Each day was a mixture of listening to presentations, looking at posters and breakout discussion groups in the afternoon, as well as plenty of time for coffee and meeting new people. The presentations covered a lot of ground: from presenting work on state-of-the-art parametrization schemes, to looking at how the schemes perform in operational models, to focusing on one small aspect of a scheme and modelling how that behaves in a high resolution model (50m resolution) that can explicitly model individual clouds. The posters were a great chance to see the in-depth work that had been done, and to talk to and exchange ideas with other scientists.

Certain ideas for improving the parametrization schemes resurfaced repeatedly. The need for scale-awareness, where the response of the parametrization scheme takes into account the model resolution, was discussed. One idea for doing this was the use of stochastic schemes to represent the uncertainty in the number of clouds in a given grid cell. The concept of memory also cropped up – where the scheme remembers whether it had been active at a given grid cell at a previous point in time. This also ties into the idea of transitions between cloud regimes, e.g. when a stratocumulus layer breaks up into individual cumulus clouds. Many other, sometimes esoteric, concepts were discussed, such as the role of cold pools, how much tuning of climate models is desirable and acceptable, how we should test our schemes, and what the process of developing the schemes should look like.

In the breakout groups, everyone was encouraged to contribute, which made for an inclusive atmosphere in which all points of view were taken on board. Some of the key points of agreement from these were that it was a good idea to have these conferences, and we should do it more often! Hopefully, in two years’ time, another PhD student will write a post on how the next meeting has gone. We also agreed that it would be beneficial to be able to share data from our different high resolution runs, as well as to be able to compare code for the different schemes.

The conference provided a picture of what the current thinking on cumulus parametrization is, as well as which directions people think are promising for the future. It also provided a means for the community to come together and discuss ideas for how to improve these schemes, and how to collaborate more closely with future projects such as ParaCon and HD(CP)2.

4th ICOS Summer School

Email: R.Braghiere@pgr.reading.ac.uk

The 4th ICOS Summer School on challenges in greenhouse gases measurements and modelling was held at Hyytiälä field station in Finland from 24th May to 2nd June, 2017. It was an amazing week of ecosystem fluxes and measurements, atmospheric composition with in situ and remote sensing measurements, global climate modelling and carbon cycle, atmospheric transport and chemistry, and data management and cloud (‘big data’) methods. We also spent some time in the extremely hot Finnish sauna followed by jumps into a very cold lake, and many highly enjoyable evenings by the fire with sunsets that seemed to never come.

sunset_Martijn Pallandt
Figure 1. Sunset in Hyytiälä, Finland at 22:49 local time. Credits: Martijn Pallandt

Our journey started in Helsinki, where a group of about 35 PhD students, along with a number of postdocs and masters students, took a three-hour coach trip to Hyytiälä. The group was very diverse and international, with people from different backgrounds, from plant physiologists to meteorologists. The school started with Prof. Dr. Martin Heimann introducing us to the climate system and the global carbon cycle, and Dr. Alex Vermeulen highlighting the importance of good metadata practices and showing us more about the ICOS research infrastructure. Dr. Christoph Gerbig joined us via Skype from Germany and talked about how atmospheric measurement methods using aircraft (including private air companies) can help scientists.

Hyytiala_main_tower_truls_Andersen_2
Figure 2. Hyytiälä flux tower site, Finland. Credits: Truls Andersen

On Saturday we visited the Hyytiälä flux tower site, as well as a peatland field station nearby, where we learned more about all the flux data they collect and the importance of peatlands globally. Peatlands store significant amounts of carbon that have been accumulating for millennia, and they might have a strong response to climate change in the future. On Sunday, we were divided into two groups to collect data on temperature gradients from the lake to the Hyytiälä main flux tower, as well as on carbon fluxes using dark (respiration only) and transparent (photosynthesis + respiration) CO2 chambers.

chamber_measurements_renato
Figure 3: Dark chamber for CO2 measurements being used by a group of students in the Boreal forest. Credits: Renato Braghiere

On the following day it was time to play with some atmospheric modelling with Dr. Maarten Krol and Dr. Wouter Peters. We prepared presentations with our observation and modelling results and shared our findings and experiences with the new data sets.

The last two days focused on learning how to measure ecosystem fluxes with Prof. Dr. Timo Vesala, and on insights into COS measurements and applications with Dr. Kadmiel Maseyk. Timo also shared with us his passion for cinema with a brilliant talk entitled “From Vertigo to Blue Velvet: Connotations between Movies and Climate change”, and we watched a really nice Finnish movie, “The Happiest Day in the Life of Olli Mäki“.

4th_icos_summer_school_group_photo
Figure 4: 4th ICOS Summer School on Challenges in greenhouse gases measurements and modelling group photo. Credits: Wouter Peters

All in all, it was a fantastic week in which we were introduced to several topics and methods related to the global carbon budget and how it might impact the future climate. No doubt all the information gained in this Summer School will be highly valuable for our careers and how we do science. A massive ‘cheers’ to Olli Peltola, Alex Vermeulen, Martin Heimann, Christoph Gerbig, Greet Maenhout, Wouter Peters, Maarten Krol, Anders Lindroth, Kadmiel Maseyk, Timo Vesala, and all the staff at the Hyytiälä field station.

This post only scratches the surface of all the incredible material we covered in the 4th ICOS Summer School, not to mention the amazing group of scientists we met in Finland, with whom I really look forward to keeping in touch over the coming years!

 

The impact of vegetation structure on global photosynthesis

Email: R.Braghiere@pgr.reading.ac.uk

Twitter: @renatobraghiere

The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes including photosynthesis. The most commonly used radiative transfer scheme in climate models does not explicitly account for vegetation architectural effects on shortwave radiation partitioning, and even though detailed 3D radiative transfer schemes have been developed, they are often too computationally expensive and require a large number of parameters.

Using a simple parameterisation, we modified a 1D radiative transfer scheme to simulate the radiative balance consistently with 3D representations. Canopy structure is typically treated via a so-called “clumping” factor, which acts to reduce the effective leaf area index (LAI) and hence fAPAR (fraction of absorbed photosynthetically active radiation, 400-700 nm). Consequently, from a production-efficiency standpoint, it seems intuitive that accounting for clumping can only reduce GPP (Gross Primary Productivity). We show, to the contrary, that the dominant effect of clumping in more complex models should be to increase photosynthesis on global scales.
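
In the simplest Beer's-law picture of canopy light interception (a common simplification, not the full JULES canopy treatment), the clumping factor \Omega \le 1 multiplies LAI, so a clumped canopy intercepts less light overall:

f_\mathrm{APAR} \approx 1 - e^{-k\,\Omega\,\mathrm{LAI}}

where k is the canopy extinction coefficient.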

difference_gpp_clump_default_jules
Figure 1. Difference in GPP estimated by JULES including clumping and default JULES GL4.0. Global difference is 5.5 PgC.

The Joint UK Land Environment Simulator (JULES) has recently been modified to include clumping information on a per-plant functional type (PFT) basis (Williams et al., 2017). Here we further modify JULES to read in clumping for each PFT in each grid cell independently. We used a global clumping map derived from MODIS data (He et al., 2012) and ran JULES 4.6 for the year 2008 both with and without clumping using the GL4.0 configuration forced with the WATCH-Forcing-Data-ERA-Interim data set (Weedon et al., 2014). We compare our results against the MTE (Model Tree Ensemble) GPP global data set (Beer et al., 2010).

erro_bar_boxes_v2
Figure 2. Regionally averaged GPP compared to the MTE GPP data set. In all areas except Africa there is an overall improvement.

Fig. 1 shows an almost ubiquitous increase in GPP globally when clumping is included in JULES. In general this improves agreement against the MTE data set (Fig. 2). Spatially, the only significant areas where the performance is degraded are some tropical grasslands and savannas (not shown). This is likely due to other model problems, in particular the limited number of PFTs used to represent all vegetation globally. The explanation for the increase in GPP and its spatial pattern is shown in Fig. 3. JULES uses a multi-layered canopy scheme coupled to the Farquhar photosynthesis scheme (Farquhar et al., 1980). Changing fAPAR (by including clumping in this case) has the largest impact where GPP is light limited, and this is especially true in tropical forests.
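
A minimal sketch of this mechanism is given below, using a Beer's-law light profile and a saturating leaf light-response with invented parameter values (this is not the JULES canopy scheme). With these illustrative numbers the clumped canopy absorbs less light in total yet fixes slightly more carbon, because more light reaches the light-limited lower layers.

```python
import numpy as np

def canopy_gpp(lai=6.0, clumping=1.0, k=0.5, i0=1500.0,
               p_max=20.0, i_half=200.0, nlayers=200):
    """Toy multi-layer canopy: Beer's-law light attenuation (scaled by the
    clumping factor) plus a saturating leaf light-response, summed over
    layers.  Units are illustrative (umol m-2 s-1)."""
    dl = lai / nlayers
    cum_lai = (np.arange(nlayers) + 0.5) * dl
    # Light absorbed per unit leaf area at each depth in the canopy.
    i_leaf = k * clumping * i0 * np.exp(-k * clumping * cum_lai)
    # Saturating response: light-saturated near the top, light-limited below.
    p_leaf = p_max * i_leaf / (i_leaf + i_half)
    return np.sum(p_leaf * dl)

print("GPP without clumping:", canopy_gpp(clumping=1.0))
print("GPP with clumping   :", canopy_gpp(clumping=0.7))
```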

gpp_vertical_anomaly_zonal_mean_Opt5_gridbox
Figure 3. Difference in longitudinally averaged GPP as a function of depth in the canopy. Clumping allows greater light penetration to lower canopy layers in which photosynthesis is light limited.

 

References

Beer, C. et al. 2010. Terrestrial gross carbon dioxide uptake: global distribution and covariation with climate. Science, 329(5993), pp. 834-838.

Farquhar, G.D. et al. 1980. A biochemical model of photosynthetic CO2 assimilation in leaves of C3 species. Planta, 149, 78–90.

He, L. et al. 2012. Global clumping index map derived from the MODIS BRDF product. Remote Sensing of Environment, 119, pp. 118-130.

Weedon, G. P. et al. 2014. The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data, Water Resour. Res., 50, 7505–7514.

Williams, K. et al. 2017. Evaluation of JULES-crop performance against site observations of irrigated maize from Mead, Nebraska. Geoscientific Model Development, 10(3), pp. 1291-1320.

The advection process: simulating wind on computers

Email: js102@zepler.net   Web: datumedge.co.uk   Twitter: @hertzsprrrung

This article was originally posted on the author’s personal blog.

If we know which way the wind is blowing then we can predict a lot about the weather. We can easily observe the wind moving clouds across the sky, but the wind also moves air pollution and greenhouse gases. This process is called transport or advection. Accurately simulating the advection process is important for forecasting the weather and predicting climate change.

I am interested in simulating the advection process on computers by dividing the world into boxes and calculating the same equation in every box. There are many existing advection methods, but many rely on these boxes having the correct shape and size; otherwise they can produce inaccurate simulations.

During my PhD, I’ve been developing a new advection method that produces accurate simulations regardless of cell shape or size. In this post I’ll explain how advection works and how we can simulate advection on computers. But, before I do, let’s talk about how we observe the weather from the ground.

In meteorology, we generally have an incomplete picture of the weather. For example, a weather station measures the local air temperature, but there are only a few hundred such stations dotted around the UK. The temperature at another location can be approximated by looking at the temperatures reported by nearby stations. In fact, we can approximate the temperature at any location by reconstructing a continuous temperature field using the weather station measurements.

The advection equation

So far we have only talked about temperatures varying geographically, but temperatures also vary over time. One reason that temperatures change over time is because the wind is blowing. For example, a wind blowing from the north transports, or advects, cold air from the arctic southwards over the UK. How fast the temperature changes depends on the wind speed, and the size of the temperature contrast between the arctic air and the air further south. We can write this as an equation. Let’s call the wind speed v and assume that the wind speed and direction are always the same everywhere. We’ll label the temperature T, label time t, and label the south-to-north direction y, then we can write down the advection equation using partial derivative notation,

\frac{\partial T}{\partial t} = - \frac{\partial T}{\partial y} \times v

This equation tells us that the local temperature will vary over time (\frac{\partial T}{\partial t}), depending on the north-south temperature contrast (- \frac{\partial T}{\partial y}) multiplied by the wind speed v.

Solving the advection equation

One way to solve the advection equation on a computer is to divide the world into boxes, called cells. The complete arrangement of cells is called a mesh. At a point at the centre of each cell we store meteorological information such as temperature, water vapour content or pollutant concentration. At the cell faces where two cells touch we store the wind speed and direction. The arrangement looks like this:

britain-cgrid
A mesh of cells with temperatures stored at cell centres and winds stored at cell faces.  For illustration, the temperature and winds are only shown in one cell.  This arrangement of data is known as an Arakawa C-grid.  Figure adapted from WikiMedia Commons, CC BY-SA 3.0.

The above example of a mesh over the UK uses cube-shaped cells stacked in columns above the Earth and arranged along latitude and longitude lines. More recently, however, weather forecasting models have begun to use different types of mesh. These models tessellate the globe with squares, hexagons or triangles.

meshes
The surfaces of some different types of global mesh. The cells are prismatic since they are stacked in columns above the surface.

Weather models must also rearrange cells in order to represent mountains, valleys, cliffs and other terrain. Once again, different models rearrange cells differently. One method, called the terrain-following method, shifts cells up or down to accommodate the terrain. Another method, called the cut-cell method, cuts cells where they intersect the terrain. Here’s what these methods look like when we use them to represent an idealised, wave-shaped mountain:

terrain-meshes
Two different methods for representing terrain in weather forecast models. The terrain-following method is widely used but suffers from large distortions above steep slopes. The cut-cell method alleviates this problem, but some cells in a cut-cell mesh may be very much smaller than the others.

Once we’ve chosen a mesh and stored temperature at cell centres and the wind at cell faces, we can start calculating a solution to the advection equation which enables us to forecast how the temperature will vary over time. We can solve the advection equation for every cell separately by discretising the advection equation. Let’s consider a cell with a north face and a south face. We want to know how the temperature stored at the cell centre, T_\mathrm{cell}, will vary over time. We can calculate this by reconstructing a continuous temperature field and using this to approximate temperature values at the north and south faces of the cell, T_\mathrm{north} and T_\mathrm{south},

\frac{\partial T_\mathrm{cell}}{\partial t} = - \frac{T_\mathrm{north} - T_\mathrm{south}}{\Delta y} \times v

where \Delta y is the distance between the north and south cell faces. This is the same reconstruction process that we described earlier, only, instead of approximating temperatures using nearby weather station measurements, we are approximating temperatures using nearby cell centre values.
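
As a concrete, deliberately simple illustration of this cell-by-cell procedure, the sketch below advects a temperature blob along a one-dimensional periodic row of cells, reconstructing each face value by copying the upwind cell-centre value. This first-order choice is just one of many possible reconstructions, and it is not the method developed in this work.

```python
import numpy as np

ncells = 100
dy = 1.0          # cell size (m)
v = 1.0           # constant south-to-north wind speed (m/s)
dt = 0.5          # time step (s); v*dt/dy = 0.5 keeps this simple scheme stable

# Initial temperatures stored at cell centres: a smooth warm blob.
y = (np.arange(ncells) + 0.5) * dy
T = 280.0 + 10.0 * np.exp(-((y - 25.0) / 5.0) ** 2)

for _ in range(100):
    # Reconstruct a temperature at each face by copying the value from the
    # upwind (southern, since v > 0) cell centre.
    T_south_face = np.roll(T, 1)               # southern face of every cell
    T_north_face = np.roll(T_south_face, -1)   # northern face of every cell

    # Discretised advection equation, applied to every cell at once.
    T = T - dt * v * (T_north_face - T_south_face) / dy

# The blob has moved northwards (and been smeared by this very diffusive scheme).
print("Blob maximum is now near y =", y[np.argmax(T)], "m")
```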

There are many existing numerical methods for solving the advection equation but many do not cope well when meshes are distorted, such as terrain-following meshes, or when cells have very different sizes, such as those cells in cut-cell meshes. Inaccurate solutions to the advection equation lead to inaccuracies in the weather forecast. In extreme cases, very poor solutions can cause the model software to crash, and this is known as a numerical instability.

slug-slantedCells-linearUpwind
An idealised simulation of a blob advected over steep mountains. A numerical instability develops because the cells are so distorted over the mountain.

We can see a numerical instability growing in this idealised example. A blob is being advected from left to right over a range of steep, wave-shaped mountains. This example is using a simple advection method which cannot cope with the distorted cells in this mesh.

We’ve developed a new method for solving the advection equation with almost any type of mesh using cubes or hexagons, terrain-following or cut-cell methods. The advection method works by reconstructing a continuous field from data stored at cell centre points. A separate reconstruction is made for every face of every cell in the mesh using about twelve nearby cell centre values. Given that weather forecast models have millions of cells, this sounds like an awful lot of calculations. But it turns out that we can make most of these calculations just once, store them, and reuse them for all our simulations.
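
The sketch below shows the flavour of the calculation in one dimension (our actual method works in multiple dimensions and includes additional stabilisation, so treat this as a simplified illustration): a least-squares polynomial fit through nearby cell-centre points collapses into a fixed set of weights for each face, which can be computed once and reused every time step.

```python
import numpy as np

# Positions of a few cell centres near one face (the face sits at x = 0).
x_centres = np.array([-2.6, -1.5, -0.5, 0.6, 1.4, 2.5])

# Fit a cubic polynomial a0 + a1*x + a2*x^2 + a3*x^3 through the cell-centre
# values in a least-squares sense.  B maps cell values to polynomial coefficients.
V = np.vander(x_centres, 4, increasing=True)     # Vandermonde matrix
B = np.linalg.pinv(V)                            # pseudo-inverse (least squares)

# The reconstructed face value is the polynomial evaluated at x = 0, which is
# just the coefficient a0, i.e. a fixed weighted sum of the cell-centre values.
weights = B[0, :]
print("face weights:", weights)

# These weights depend only on the mesh geometry, so they can be precomputed
# and stored.  In each time step the face value is then a cheap dot product:
T_centres = 280.0 + np.sin(x_centres)            # example cell-centre temperatures
T_face = weights @ T_centres
print("reconstructed face value:", T_face)
```

In the full method, a similar set of weights is stored for every face of the mesh, so the expensive geometric work is done only once.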

slug-slantedCells-cubicFit
Our new advection method avoids the numerical instability that occurred using the simple method.

Here’s the same idealised simulation using our new advection method. The results are numerically stable and accurate.

Further reading

A preprint of our journal article documenting the new advection method is available on ArXiv. I also have another blog post that talks about how to make the method even more accurate. Or follow me on Twitter for more animations of the numerical methods I’m developing.

Understanding our climate with tiny satellites

Gristey, J. J., J. C. Chiu, R. J. Gurney, S.-C. Han, and C. J. Morcrette (2017), Determination of global Earth outgoing radiation at high temporal resolution using a theoretical constellation of satellites, J. Geophys. Res. Atmos., 122, doi:10.1002/2016JD025514.

Email: J.Gristey@pgr.reading.ac.uk          Web: http://www.met.reading.ac.uk/~fn008822/

The surface of our planet has warmed at an unprecedented rate since the mid-19th century and there is no sign that the rate of warming is slowing down. The last three decades have all been successively warmer than any preceding decade since 1850, and 16 of the 17 warmest years on record have all occurred since 2001. The latest science now tells us that it is extremely likely that human influence has been the dominant cause of the observed warming [1], mainly due to the release of carbon dioxide and other greenhouse gases into our atmosphere. These greenhouse gases trap heat energy that would otherwise escape to space, which disrupts the balance of energy flows at the top of the atmosphere (Fig. 1). The current value of the resulting energy imbalance is approximately 0.6 W m–2, which is more than 17 times larger than all of the energy consumed by humans [2]! In fact, observing the changes in these energy flows at the top of the atmosphere can help us to gauge how much the Earth is likely to warm in the future and, perhaps more importantly, observations with sufficient spatial coverage, frequency and accuracy can help us to understand the processes that are causing this warming.

fig1
Figure 1. The Earth’s top-of-atmosphere energy budget. In equilibrium, the incoming sunlight is balanced by the reflected sunlight and emitted heat energy. Greenhouse gases can reduce the emitted heat energy by trapping heat in the Earth system leading to an energy imbalance at the top of the atmosphere.

Observations of energy flows at the top of the atmosphere have traditionally been made by large and expensive satellites that may be similar in size to a large car [3], making it impractical to launch multiple satellites at once. Although such observations have led to many advancements in climate science, the fundamental sampling restrictions from a limited number of satellites make it impossible to fully resolve the variability in the energy flows at the top of the atmosphere. Only recently, due to advancements in small satellite technology and sensor miniaturisation, has a novel, viable and sustainable sampling strategy from a constellation of satellites become possible. Importantly, a constellation of small satellites (Fig. 2a), each the size of a shoe-box (Fig. 2b), could provide both the spatial coverage and frequency of sampling to properly resolve the top-of-atmosphere energy flows for the first time. Despite the promise of the constellation approach, its scientific potential for measuring energy flows at the top of the atmosphere has not been fully explored.

fig2
Figure 2. (a) A constellation of 36 small satellites orbiting the Earth. (b) One of the small “CubeSat” satellites hosting a miniaturised radiation sensor that could be used [edited from earthzine article].
To explore this potential, several experiments have been performed that simulate measurements from the theoretical constellation of satellites shown in Fig. 2a. The results show that just 1 hour of measurements can be used to reconstruct accurate global maps of reflected sunlight and emitted heat energy (Fig. 3). These maps are reconstructed using a series of mathematical functions known as “spherical harmonics”, which extract the information from overlapping samples to enhance the spatial resolution by around a factor of 6 when compared with individual measurement footprints. After producing these maps every hour during one day, the uncertainty in the global-average hourly energy flows is 0.16 ± 0.45 W m–2 for reflected sunlight and 0.13 ± 0.15 W m–2 for emitted heat energy. Observations with these uncertainties would be capable of determining the sign of the 0.6 W m–2 energy imbalance directly from space [4], even at very short timescales.
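
To give an idea of how such a reconstruction works (the harmonic degree, sampling pattern and solver below are illustrative choices, not the configuration used in the study), scattered flux samples can be fitted with spherical-harmonic coefficients by least squares and the fitted field then evaluated anywhere on the globe:

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, l, lon, colat):
    """Real-valued spherical harmonic of degree l and order m."""
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, lon, colat).real
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(-m, l, lon, colat).imag
    return sph_harm(0, l, lon, colat).real

def fit_sph_harm(lon, colat, samples, lmax):
    """Least-squares fit of spherical-harmonic coefficients to scattered samples."""
    basis = np.column_stack([real_sph_harm(m, l, lon, colat)
                             for l in range(lmax + 1)
                             for m in range(-l, l + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, samples, rcond=None)
    return coeffs

# Pretend each satellite footprint gives one sample of outgoing flux (W m-2).
rng = np.random.default_rng(0)
lon = rng.uniform(0.0, 2.0 * np.pi, 2000)         # azimuthal angle of each sample
colat = np.arccos(rng.uniform(-1.0, 1.0, 2000))   # colatitude, uniform over the sphere
flux = 240.0 + 30.0 * np.cos(colat) + rng.normal(0.0, 5.0, lon.size)

coeffs = fit_sph_harm(lon, colat, flux, lmax=8)
print("number of coefficients:", coeffs.size)     # (lmax + 1)^2 = 81
```

The (lmax + 1)² fitted coefficients then define a smooth global map whose effective resolution is set by the chosen maximum degree.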

fig3
Figure 3. (top) “Truth” and (bottom) recovered enhanced-resolution maps of top of atmosphere energy flows for (left) reflected sunlight and (right) emitted heat energy, valid for 00-01 UTC on 29th August 2010.

Also investigated are potential issues that could prevent similar uncertainties being achieved in reality, such as instrument calibration and a reduced number of satellites due to limited resources. Not surprisingly, the success of the approach will rely on calibration that ensures low systematic instrument biases, and on a sufficient number of satellites to ensure dense hourly sampling of the globe. Development and demonstration of miniaturised satellites and sensors are currently underway to ensure these criteria are met. Provided the calibration is good and there are sufficient satellites, this study demonstrates that the constellation concept would enable an unprecedented sampling capability and has a clear potential for improving observations of Earth’s energy flows.

This work was supported by the NERC SCENARIO DTP grant NE/L002566/1 and co-sponsored by the Met Office.

Notes:

[1] This statement is quoted from the latest Intergovernmental Panel on Climate Change assessment report. Note that these reports are produced approximately every 5 years and the statements concerning human influence on the climate have increased in confidence in every report.

[2] Total energy consumed by humans in 2014 was 13805 Mtoe = 160552.15 TWh. This is an average power consumption of 160552.15 TWh / 8760 hours in a year = 18.33 TW.

Rate of energy imbalance per square metre at the top of the atmosphere = 0.6 W m–2. Surface area of the “top of atmosphere” at 80 km altitude is 4 × π × ((6371+80)×10³ m)² = 5.23×10¹⁴ m². Rate of energy imbalance for the entire Earth = 0.6 W m–2 × 5.23×10¹⁴ m² = 3.14×10¹⁴ W = 314 TW.

Multiples of energy consumed by humans = 314 TW / 18.33 TW ≈ 17.
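
For anyone who wants to check the arithmetic, here is a short script reproducing the numbers above:

```python
import numpy as np

consumption_TW = 160552.15 / 8760                  # global energy use: TWh per year -> TW
area_toa = 4 * np.pi * ((6371 + 80) * 1e3) ** 2    # "top of atmosphere" area, m^2
imbalance_TW = 0.6 * area_toa / 1e12               # 0.6 W m-2 over that area, in TW

print("human power consumption: %.1f TW" % consumption_TW)    # ~18.3 TW
print("Earth's energy imbalance: %.0f TW" % imbalance_TW)      # ~314 TW
print("ratio: %.0f" % (imbalance_TW / consumption_TW))         # ~17
```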

[3] The satellites currently carrying instruments that observe the top-of-atmosphere energy flows (e.g. Meteosat-8, Aqua) will typically also be hosting a suite of other instruments, which adds to the size of the satellite. However, even the individual instruments are still much larger than the satellite shown in Fig. 2b.

[4] Currently, the single most accurate way to determine the top-of-atmosphere energy imbalance is to infer it from changes in ocean heat uptake. The reasoning is that the oceans contain over 90% of the heat capacity of the climate system, so it is assumed on multi-year time scales that excess energy accumulating at the top of the atmosphere goes into heating the oceans. The stated value of 0.6 W m–2 is calculated from a combination of ocean heat uptake and satellite observations.

References:

Allan et al. (2014), Changes in global net radiative imbalance 1985–2012, Geophys. Res. Lett., 41, 5588–5597, doi:10.1002/2014GL060962.

Barnhart et al. (2009), Satellite miniaturization techniques for space sensor networks, Journal of Spacecraft and Rockets, 46(2), 469–472, doi:10.2514/1.41639.

IPCC (2013), Climate Change 2013: The Physical Science Basis, available online at https://www.ipcc.ch/report/ar5/wg1/.

NASA (2016), NASA, NOAA Data Show 2016 Warmest Year on Record Globally, available online at https://www.nasa.gov/press-release/nasa-noaa-data-show-2016-warmest-year-on-record-globally.

Sandau et al. (2010), Small satellites for global coverage: Potential and limits, ISPRS J. Photogramm., 65, 492–504, doi:10.1016/j.isprsjprs.2010.09.003.

Swartz et al. (2013), Measuring Earth’s Radiation Imbalance with RAVAN: A CubeSat Mission to Measure the Driver of Global Climate Change, available online at https://earthzine.org/2013/12/02/measuring-earths-radiation-imbalance-with-ravan-a-cubesat-mission-to-measure-the-driver-of-global-climate-change/.

Swartz et al. (2016), The Radiometer Assessment using Vertically Aligned Nanotubes (RAVAN) CubeSat Mission: A Pathfinder for a New Measurement of Earth’s Radiation Budget. Proceedings of the AIAA/USU Conference on Small Satellites, SSC16-XII-03