Tropical Circulation viewed as a heat engine

Climate scientists have a lot of insight into the factors driving weather systems in the mid-latitudes, where the rotation of the Earth is an important influence. The tropics are less well understood, and this is a problem for global climate models, which do not capture many of the phenomena observed there particularly well.

What we do know about the tropics, however, is that despite significant contrasts in sea surface temperature (Fig. 1) there is very little horizontal temperature variation in the atmosphere (Fig. 2). This is because the Coriolis force (due to the Earth's rotation), which supports such gradients in the mid-latitudes, is very weak near the equator. The large-scale circulation is thought to act to minimise the effect these surface contrasts have higher up. This suggests a model in which the vertical wind cools the air over warmer surfaces and warms it where the surface is cool, called the Weak Temperature Gradient (WTG) approximation, which is frequently used in studying the tropical climate.
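
In its simplest form (this is the standard statement of the WTG balance, not necessarily the exact formulation used in the work described below), the approximation fixes the vertical velocity $w$ by requiring that vertical advection balances the diabatic heating $Q$:

$$ w \,\frac{\partial \theta}{\partial z} = Q, $$

where $\theta$ is the potential temperature. Over a warm surface ($Q > 0$) air must rise and cool adiabatically; over a cool surface it sinks and warms, keeping temperatures aloft nearly uniform.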

Fig. 1: Sea surface temperatures (K) at 0Z on 1/1/2000 (ERA-Interim)

Fig. 2: Temperatures at 500 hPa (K) at 0Z on 1/1/2000 (ERA-Interim)
Thermodynamic ideas have been around for some 200 years. Carnot, a Frenchman worried about Britain's industrial might underpinning its military potential(!), studied the efficiency of heat engines and showed that the maximum mechanical work generated by an engine is determined by the ratio of the temperatures at which energy enters and leaves the system. It is possible to treat climate systems as heat engines – for example, Kerry Emanuel has used Carnot's idea to estimate the pressure in the eye of a hurricane. I have been building on a recent development of these ideas by Olivier Pauluis at New York University, who shows how to divide the maximum work output of a climate heat engine into the generation of wind, the lifting of moisture and a lost component, which he calls the Gibbs penalty: the energetic cost of keeping the atmosphere moist. Typically, 50% of the maximum work output is gobbled up by the Gibbs penalty, 30% goes into lifting moisture and only 20% is used to generate wind.
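
To put rough numbers on this, here is a minimal sketch in Python. The temperatures and heat input are illustrative assumptions, not values from the work described; the 50/30/20 split is the typical partition quoted above.

```python
# Minimal sketch of a climate heat engine, with assumed numbers.
T_in = 300.0    # K, temperature at which heat enters (sea surface) - assumed
T_out = 200.0   # K, temperature at which heat leaves (upper troposphere) - assumed
Q_in = 100.0    # W m-2, assumed heat input

# Carnot: the maximum work output is set by the entry and exit temperatures
W_max = Q_in * (1.0 - T_out / T_in)

# Pauluis-style partition quoted in the text: ~50% Gibbs penalty,
# ~30% moisture lifting, ~20% generation of wind
gibbs_penalty = 0.5 * W_max
moisture_lifting = 0.3 * W_max
wind_generation = 0.2 * W_max

print(f"W_max = {W_max:.1f} W m-2: wind {wind_generation:.1f}, "
      f"lifting {moisture_lifting:.1f}, Gibbs {gibbs_penalty:.1f}")
```

With these assumed numbers, only about 7 W m-2 of the 33 W m-2 Carnot bound actually goes into generating wind.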

For my PhD, I have been applying Pauluis' ideas to a modelled system consisting of two tropical regions (one over a cooler surface than the other) connected by a circulation given by the weak temperature gradient approximation. I look at how this circulation affects the components of work done by the system. Overall there is no impact – in other words, the WTG approximation does not distort the thermodynamics of the underlying system – which is reassuring for those who use it. What is perhaps more interesting, however, is that even though the WTG circulation is very weak compared to the winds we observe within the two columns, it does as much work as the cooler column does – in other words, its thermodynamic importance is huge. This suggests that further study along these lines may help us better express what drives the tropical climate.

Should we be ‘Leaf’-ing out vegetation when parameterising the aerodynamic properties of urban areas?

Email: C.W.Kent@pgr.reading.ac.uk

When modelling urban areas, vegetation is often ignored in an attempt to simplify an already complex problem. However, vegetation is present in all urban environments and it is not going anywhere… For reasons ranging from sustainability to improvements in human well-being, green spaces are increasingly becoming part of urban planning agendas. Incorporating vegetation is therefore a key part of modelling urban climates. Vegetation provides numerous (dis)services in the urban environment, each of which requires individual attention (Salmond et al. 2016). One of my research interests is how vegetation influences the aerodynamic properties of urban areas.

Two aerodynamic parameters can be used to represent the aerodynamic properties of a surface: the zero-plane displacement (zd) and aerodynamic roughness length (z0). The zero-plane displacement is the vertical displacement of the wind-speed profile due to the presence of surface roughness elements. The aerodynamic roughness length is a length scale which describes the magnitude of surface roughness. Together they help define the shape and form of the wind-speed profile which is expected above a surface (Fig. 1).


Figure 1: Representation of the wind-speed profile above a group of roughness elements. The black dots represent an idealised logarithmic wind-speed profile which is determined using the zero-plane displacement (zd) and aerodynamic roughness length (z0) (lines) of the surface.
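
The idealised profile shown by the black dots in Figure 1 is the standard neutral logarithmic wind law, which can be sketched directly; the friction velocity and the values of zd and z0 below are illustrative assumptions, not numbers from the study.

```python
import numpy as np

KAPPA = 0.40  # von Karman constant

def log_wind_profile(z, u_star, z_d, z_0):
    """Neutral logarithmic wind-speed profile above a rough surface:
    u(z) = (u*/kappa) * ln((z - z_d) / z_0), valid for z > z_d + z_0."""
    return (u_star / KAPPA) * np.log((z - z_d) / z_0)

# Assumed illustrative values for an urban surface
heights = np.array([20.0, 30.0, 50.0, 100.0])   # m above ground
print(log_wind_profile(heights, u_star=0.4, z_d=14.0, z_0=1.5))
```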

For an urban site, zd and z0 may be determined using three categories of methods: reference-based, morphometric and anemometric. Reference-based methods compare the site to previously published pictures or look-up tables (e.g. Grimmond and Oke 1999); morphometric methods describe zd and z0 as a function of roughness-element geometry; and anemometric methods use in-situ observations. The aerodynamic parameters of a site may vary considerably depending upon which of these methods is used, but efforts are being made to understand which parameters are most appropriate for accurate wind-speed estimation (Kent et al. 2017a).
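
As a flavour of the morphometric category, the very simplest methods take zd and z0 as fixed fractions of the average roughness-element height. A minimal sketch (the fractions below are the oft-quoted rule-of-thumb values, given here as assumptions rather than the methods evaluated in Kent et al. 2017a):

```python
def morphometric_rule_of_thumb(z_h, f_d=0.7, f_0=0.1):
    """Simplest height-based morphometric estimate of z_d and z_0:
    fixed fractions f_d and f_0 of the average roughness-element
    height z_h (the fractions here are illustrative assumptions)."""
    return f_d * z_h, f_0 * z_h

z_d, z_0 = morphometric_rule_of_thumb(z_h=20.0)  # 20 m mean building height
print(f"z_d = {z_d:.1f} m, z_0 = {z_0:.1f} m")   # z_d = 14.0 m, z_0 = 2.0 m
```

More sophisticated morphometric methods replace these fixed fractions with functions of the plan and frontal areas of the roughness elements, as discussed next.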

Within the morphometric category (i.e. using roughness-element geometry), sophisticated methods have been developed for buildings or vegetation only. Until recently, however, no method existed to describe the combined effect of buildings and vegetation. A recent development overcomes this: the heights of all roughness elements are considered alongside a porosity correction for vegetation (Kent et al. 2017b). Specifically, the porosity correction is applied to the space occupied, and the drag exerted, by vegetation.

The development is assessed across several areas typical of a European city, ranging from a densely built city centre to an urban park. The results demonstrate that where buildings are the dominant roughness elements (i.e. taller and occupying more space), vegetation has little influence on the calculated geometry of the surface, the aerodynamic parameters or the estimated wind speed. However, as vegetation occupies a greater share of the space and becomes as tall as, or taller than, the buildings, its influence becomes obvious. As expected, the implications are greatest in an urban park, where the dominance of vegetation means that wind speeds may be slowed by up to a factor of three.

Up to now, experiments such as those in wind tunnels have focused on buildings or trees in isolation. Future experiments which consider buildings and vegetation together will certainly be valuable, both to further our understanding of the interaction within and between these roughness elements and to assess the parameterisation.

References

Grimmond CSB, Oke TR (1999) Aerodynamic properties of urban areas derived from analysis of surface form. J Appl Meteorol 38:1262-1292.

Kent CW, Grimmond CSB, Barlow J, Gatey D, Kotthaus S, Lindberg F, Halios CH (2017a) Evaluation of urban local-scale aerodynamic parameters: implications for the vertical profile of wind speed and for source areas. Boundary-Layer Meteorology 164:183-213.

Kent CW, Grimmond CSB, Gatey D (2017b) Aerodynamic roughness parameters in cities: inclusion of vegetation. Journal of Wind Engineering and Industrial Aerodynamics 169:168-176.

Salmond JA, Tadaki M, Vardoulakis S, Arbuthnott K, Coutts A, Demuzere M, Dirks KN, Heaviside C, Lim S, Macintyre H (2016) Health and climate related ecosystem services provided by street trees in the urban environment. Environ Health 15:95.

Future of Cumulus Parametrization conference, Delft, July 10-14, 2017

Email: m.muetzelfeldt@pgr.reading.ac.uk

For a small city, Delft punches above its weight. It is famous for many things, including its celebrated Delftware (Figure 1). It was also the birthplace of one of the Dutch masters, Johannes Vermeer, who coincidentally painted some fine cityscapes with cumulus clouds in them (Figure 2). There is a university of technology with some impressive architecture (Figure 3). It holds the dubious honour of being the location of the first assassination using a pistol (or so we were told by our tour guide), when William of Orange was shot in 1584. To this list, it can now add hosting a one-week conference on the future of cumulus parametrization, and hopefully bringing about more of these conferences in the future.


Figure 1: Delftware.


Figure 2: Delft with canopy of cumulus clouds. By Johannes Vermeer, 1661.


Figure 3: AULA conference centre at Delft University of Technology – where we were based for the duration of the conference.

So what is a cumulus parametrization scheme? The key idea is as follows. Numerical weather and climate models work by splitting the atmosphere into a grid, with a corresponding grid length representing the size of each grid cell. By solving equations that govern how the wind, pressure and heating interact, models can then be used to predict the weather days in advance, or to predict how the climate will respond to forcings over longer timescales. However, any phenomena substantially smaller than the grid length will not be “seen” by the models. For example, a large cumulonimbus cloud may have a horizontal extent of around 2 km, whereas individual grid cells could be 50 km across in the case of a climate model. A cumulonimbus cloud will therefore not be explicitly modelled, but it will still have an effect on the grid cell in which it is located – in terms of how much heating and moistening it produces at different levels. To capture this effect, the clouds are parametrized: the vertical profiles of heating and moistening due to the clouds are calculated from the conditions in the grid cell, and these then feed back on the grid-scale values of those variables. A similar idea applies to shallow cumulus clouds, such as the cumulus humilis in Vermeer's painting (Figure 2), or over present-day Delft (Figure 3).
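
As a concrete (and deliberately cartoonish) illustration of that last step, here is a toy single-column sketch: given the grid-cell mean state, return the vertical heating and moistening profiles that the unresolved clouds would produce. The trigger condition, profile shape and amplitudes are invented for illustration and do not correspond to any operational scheme.

```python
import numpy as np

def toy_cumulus_tendencies(q_low, nlev=20):
    """Toy 'cumulus parametrization' for one grid cell.

    q_low: grid-cell mean low-level specific humidity (kg/kg).
    Returns heating (K/s) and moistening (kg/kg/s) profiles on
    nlev levels from the surface (index 0) to cloud top."""
    if q_low < 0.012:                       # assumed trigger threshold
        return np.zeros(nlev), np.zeros(nlev)
    z = np.linspace(0.0, 1.0, nlev)         # non-dimensional height
    heating = 5e-5 * np.sin(np.pi * z)      # warms the mid-troposphere
    moistening = -2e-8 * np.sin(np.pi * z)  # net drying as rain falls out
    return heating, moistening

# The model then adds these sub-grid tendencies to the grid-scale state:
heating, moistening = toy_cumulus_tendencies(q_low=0.015)
```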

These cumulus parametrization schemes are a large source of uncertainty in current weather and climate models. The conference was aimed at bringing together the community of modellers working on these schemes and working out the most promising directions for improving them, and consequently weather and climate models.

Each day was a mixture of presentations, posters and breakout discussion groups in the afternoon, with plenty of time for coffee and meeting new people. The presentations covered a lot of ground: from work on state-of-the-art parametrization schemes, to how the schemes perform in operational models, to focusing on one small aspect of a scheme and modelling how it behaves in a high-resolution model (50 m resolution) that can explicitly represent individual clouds. The posters were a great chance to see the in-depth work that had been done, and to talk to and exchange ideas with other scientists.

Certain ideas for improving the parametrization schemes resurfaced repeatedly. The need for scale-awareness, where the response of the parametrization scheme takes into account the model resolution, was discussed. One idea for achieving this was the use of stochastic schemes to represent the uncertainty in the number of clouds in a given grid cell. The concept of memory also cropped up, whereby the scheme remembers whether it was active at a given grid cell at a previous point in time. This ties into the idea of transitions between cloud regimes, e.g. when a stratocumulus layer breaks up into individual cumulus clouds. Many other, sometimes esoteric, concepts were discussed, such as the role of cold pools, how much tuning of climate models is desirable and acceptable, how we should test our schemes, and what the process of developing the schemes should look like.
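
As one sketch of how a stochastic, scale-aware element might work (a sketch of the general idea only, not any particular published scheme; the cloud density is an assumed value): if clouds occur at some mean density per unit area, the expected number per grid cell grows with grid-cell area, so a coarse cell sees many clouds and tendencies close to the mean, while a fine cell sees only a few, with large relative fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cloud_number(grid_length_km, cloud_density_per_km2=0.002):
    """Draw the number of convective clouds in one grid cell from a
    Poisson distribution whose mean scales with grid-cell area
    (the cloud density here is an illustrative assumption)."""
    expected = cloud_density_per_km2 * grid_length_km ** 2
    return rng.poisson(expected)

for dx in (100, 50, 10, 2):  # grid lengths in km
    print(f"{dx:>3} km grid: {sample_cloud_number(dx)} clouds")
```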

In the breakout groups, everyone was encouraged to contribute, which made for an inclusive atmosphere in which all points of view were taken on board. Among the key points of agreement: it is a good idea to have these conferences, and we should hold them more often! Hopefully, in two years' time, another PhD student will write a post on how the next meeting has gone. We also agreed that it would be beneficial to share data from our different high-resolution runs, and to compare code for the different schemes.

The conference provided a picture of what the current thinking on cumulus parametrization is, as well as which directions people think are promising for the future. It also provided a means for the community to come together and discuss ideas for how to improve these schemes, and how to collaborate more closely with future projects such as ParaCon and HD(CP)2.

Peer review: what lies behind the curtains?

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

For young researchers, one of the most daunting prospects is the publication of their first paper. A piece of work that somebody has spent months or even years preparing must be submitted for peer review. Unseen gatekeepers cast their judgement, and work is returned accepted, rejected or with revisions required. I attended the Sense about Science workshop 'Peer review: the nuts and bolts', targeted at early career researchers (ECRs), with the intention of looking behind these closed doors. How are reviewers selected? Who can become a reviewer? Who makes the final decisions? The workshop provided an opportunity to interact directly with both journal editors and academics involved in the peer review process, and to obtain answers to such questions.

The workshop was structured primarily around a panel discussion consisting of Dr Amarachukwu Anyogu, a lecturer in microbiology at the University of Westminster; Dr Bahar Mehmani, a reviewer experience lead at Elsevier; Dr Sabina Alam, an editorial director at F1000Research; and Emily Jesper-Mir, the head of partnerships and governance at Sense about Science. There were also small-group discussions amongst attendees regarding the advantages and disadvantages of peer review, potential alternatives, and the importance of science communication.

The panel of (L-R) Dr Sabina Alam, Dr Amarachukwu Anyogu, Dr Bahar Mehmani and Emily Jesper-Mir provided a unique insight into the peer review process from the perspective of both editor and reviewer. Photograph credited to Sense about Science.

Recent headlines have highlighted fraud cases in which impersonation and deceit were used to manipulate the peer review process. Furthermore, fears regarding bias and sexism remain high amongst the academic community. It was hence encouraging to see such strong awareness from both participants and panellists of the flaws of peer review. Post-publication review, open (named) reviews, and the submission of methods prior to the experiment are all ways, either currently in use or proposed, to increase the accountability and transparency of peer review. Each method brings its own problems, however; for example, naming reviewers risks less critical responses, particularly from younger researchers who do not want to alienate more experienced academics with influence over their future career progression.

One key focus of the workshop was to encourage ECRs to become involved in the peer review process. At first this seems counterintuitive; surely the experience of academics further into their careers is crucial for high-quality reviews? However, ECRs do have the necessary knowledge: we work day to day with the same techniques and the same analysis as the papers we would review. In addition, a larger body of reviewers reduces the individual workload and improves the efficiency of the process, particularly as ECRs do not necessarily face the same time pressures. Increased participation brings diversity of opinion and ensures that particular individuals do not become too influential in deciding which ideas are considered relevant or acceptable. There are also personal benefits to becoming a reviewer, including an improved ability to critically assess research. Dr Anyogu, for example, found that reviewing the work of others helped her gain a better perspective on criticism received on her own work.

Participants were encouraged to discuss the advantages and disadvantages of peer review and potential changes that could be made to address current weaknesses in the process. Photograph credited to Sense about Science.

One key message that I took away from the workshop is that peer review isn't mechanical; humans are at the heart of every decision. Dr Alam was particularly keen to stress that editors will listen to grievances and reconsider decisions if strong arguments are put forward. However, it also follows that peer review is only as effective as those who participate in it. If the quality of reviewers is poor, then the quality of the review process will be poor. Hence it can be argued that we, as members of the academic community, have an obligation to maintain high standards, not least so that the public can be reassured that the information we provide has been through a thorough quality-control process. At a time when phrases such as 'fake news' are proliferating, it is more crucial than ever to maintain public trust in the scientific process.

I would like to thank all the panellists for giving up their time to contribute to this workshop; the organisations* who provided sponsorship and sent representatives; Informa for hosting the event; and Sense about Science for organising this unique opportunity to learn more about peer review.

*Cambridge University Press, Peer Review Evaluation, Hindawi, F1000Research, Medical Research Council, Portland Press, Sage Publishing, Publons, Elsevier, Publons Academy, Taylor and Francis Group, Wiley.