At the beginning of September, three PhD students from Reading, including myself, went to Cambridge to attend the NCAS Climate Modelling Summer School. This is an annual event aimed at PhD students and early career scientists who want to develop their understanding of climate models, with topics ranging from parameterisations to supercomputers.
The course ran over two weeks, with lectures on the components of climate models in the morning covering fundamental dynamics and thermodynamics, numerical methods and different parameterisations. This was followed by an afternoon of computer practicals and then more topical lectures in the evening, such as “User engagement in climate science” and “The Sun and Earth’s climate system”. The lectures were very fast paced, but this was a great opportunity to cover so much material in a short space of time and get a grounding in lots of different topics that I will definitely be looking over in future. A poster session on the second evening gave us the chance to learn about other people’s work and make connections, hopefully lasting throughout our careers, with others starting out in climate science, including a few readers of the blog.
One of the highlights of the course was the chance to run some (rather interesting) experiments with an earth system model. This involved breaking into groups, with each being given a different project. It was exciting to go through the whole process of having an idea, developing a hypothesis, designing specific experiments to test the hypothesis and then analysing the results in just a week – something that takes much longer when you’re doing a PhD! My group worked on the Flat Earth experiment, which looked at the effect of removing all of the earth’s orography (not, to our dismay, turning the earth into a flat disk). I learned a lot about how to run models, something I had never done even though I use their output. It also developed my understanding of climate processes that I don’t work with, such as the monsoons and even dynamical vegetation.
Throughout the course we stayed at St Catharine’s College. Right in the centre of Cambridge, it quickly felt like a home from home, keeping us well fed to get through the intense science. Although the weekend was rainy, apparently breaking a run of excellent weather for the school, we still had plenty of time to explore beautiful Cambridge. A few people were even brave enough to go punting!
An interesting, hectic and inspiring two weeks later, we may have been glad to head back to Reading for a good sleep, but we thoroughly enjoyed the summer school.
Current numerical modelling and data assimilation methods still face problems in strongly nonlinear cases, such as at convective scales. A different but interesting tool to help overcome these issues can be found in synchronisation theory.
It all started in 1665, when the Dutch scientist Christiaan Huygens discovered that his two pendulum clocks were suddenly oscillating in opposite directions, but in a synchronised way. He tried to desynchronise them by randomly perturbing one of the clocks, but surprisingly, after some time, both devices were synchronised again. He attributed the phenomenon to the frame both clocks were sharing, and with that the field of synchronisation was opened to the world.
Figure 1: A drawing by Christiaan Huygens of his experiment in 1665.
Nowadays, researchers use these synchronisation concepts with one main goal: to synchronise a model (any model) with the true evolution of a system, using measurements. Even when only a reduced part of the system is observed, synchronisation between the model and the true state can still be achieved. This is quite similar to what data assimilation does, as it aims to synchronise a model evolution with the truth by using observations, finding the best estimate of the state evolution and its uncertainty.
So why not investigate the benefits of recent synchronisation findings and combine these concepts with a data assimilation methodology?
At the start of this project, the first step was to open up the synchronisation field to higher-dimensional systems, as experiments in the area had all focused on low-dimensional, unrealistic systems. To this end, a first new idea was proposed: an ensemble version of a synchronisation scheme, which we are calling EnSynch (Ensemble Synchronisation). Tests with a partly observed 1000-dimensional chaotic model show a very efficient correspondence between the model and the true trajectories, both for estimation and prediction periods. Figures 2 and 3 show how our estimates and the truth lie on top of each other, i.e. they are synchronised. Note that we do not have observations for all of the variables in the system, so it is remarkable to obtain the same successful results for both the observed and the unobserved variables!
Figure 2: Trajectories of two variables (top: observed, bottom: unobserved). Blue lines: truth. Green lines: estimates/predictions. (Predictions start after the red lines, i.e. no data assimilation is used.)
Figure 3: Zoom on the trajectory of one variable, showing how the model matches the truth. Blue line: truth. Red line: our model. Yellow dots: observations.
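To give a flavour of how synchronisation works in practice, here is a minimal sketch of a nudging-type synchronisation scheme on the 40-variable Lorenz-96 model, observing only every other variable. This is an illustration of the general idea only, not the EnSynch scheme itself, and the nudging strength is an assumed value:

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Tendency of the Lorenz-96 model, a standard chaotic test system."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4(x, dt=0.01):
    """One fourth-order Runge-Kutta step of the Lorenz-96 model."""
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def rms(e):
    """Root-mean-square of an error vector."""
    return float(np.sqrt(np.mean(e ** 2)))

rng = np.random.default_rng(1)
n, dt, gain = 40, 0.01, 10.0           # gain: assumed nudging strength
obs = np.arange(0, n, 2)               # observe every other variable only

truth = 8 + rng.standard_normal(n)
nudged = 8 + rng.standard_normal(n)    # model starting from a different state
free = nudged.copy()                   # identical twin, but never nudged

for _ in range(5000):
    truth = rk4(truth, dt)
    free = rk4(free, dt)
    nudged = rk4(nudged, dt)
    nudged[obs] += dt * gain * (truth[obs] - nudged[obs])  # relax towards obs

print(f"free model RMS error:   {rms(free - truth):.2f}")
print(f"nudged model RMS error: {rms(nudged - truth):.2f}")
```

Even though only half the variables are nudged, the model dynamics spread the observed information to the unobserved variables, which is the same qualitative behaviour seen in Figures 2 and 3.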
The second and main idea is to test a combination of this successful EnSynch scheme with a data assimilation method called the particle filter. As a proper data assimilation methodology, a particle filter provides the best estimate of the state evolution and its uncertainty. Just to illustrate the importance of data assimilation in following the truth, figure 4 compares an ensemble of models running freely in a chaotic nonlinear system with the same ensemble when a data assimilation method is applied.
Figure 4: Trajectories of ensemble members. Blue: with data assimilation. Red: without data assimilation. Truth is in black.
Efficient results are found with the combination of the new EnSynch and the particle filter. An example is shown in figure 5, where the particles (ensemble members) of an unobserved variable nicely follow the truth during the assimilation period and also during the forecast stage (after t=100).
Figure 5: Trajectory of an unobserved variable in a 1000-dimensional system. Observations occur every 10 time steps until t=100. Predictions start after t=100.
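The study uses the implicit equal-weights particle filter of Zhu et al. (2016); as a much simpler illustration of the general idea, here is a basic bootstrap particle filter tracking a scalar random walk. The model, noise levels and particle number are all assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps = 500, 50
model_sd, obs_sd = 0.5, 1.0              # assumed model and observation noise

truth = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
estimates, truths = [], []

for _ in range(n_steps):
    # Forecast: propagate the truth and every particle with the random-walk model.
    truth += rng.normal(0.0, model_sd)
    particles += rng.normal(0.0, model_sd, n_particles)
    y = truth + rng.normal(0.0, obs_sd)  # noisy observation of the truth

    # Analysis: weight particles by the observation likelihood, then resample.
    w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, n_particles, p=w)

    estimates.append(particles.mean())
    truths.append(truth)

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truths)) ** 2))
print(f"filter RMSE: {rmse:.2f} (observation error sd = {obs_sd})")
```

The filter's error ends up smaller than the raw observation error, because each analysis combines the observation with the information carried forward by the particles.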
These results are motivating, and the next big step is to implement this combined system in a bigger atmospheric model. The methodology has been shown to be a promising solution for strongly nonlinear problems, and potential benefits are expected for numerical weather prediction in the near future.
Rey, D., M. Eldridge, M. Kostuk, H. Abarbanel, J. Schumann-Bischoff, and U. Parlitz, 2014a: Accurate state and parameter estimation in nonlinear systems with sparse observations. Physics Letters A, 378, 869-873, doi:10.1016/j.physleta.2014.01.027.
Zhu, M., P. J. van Leeuwen, and J. Amezcua, 2016: Implicit equal-weights particle filter. Quart. J. Roy. Meteorol. Soc., 142, 1904-1919, doi:10.1002/qj.2784.
Every year, students from the SCENARIO (Science of the Environment, Natural and Anthropogenic Processes, Impacts and Opportunities) Doctoral Training Partnership organise a conference. Those invited include SCENARIO students, NERC employees and industrial partners. This year, after last year’s successful collaboration with the University of Oklahoma, it was decided that we would run the conference (Frontiers in Natural Environment Research) with the Science and Solutions for a Changing Planet (SSCP) and London NERC DTPs, which are led by a variety of universities and institutions in London.
A similar conference (Perspectives on Environmental Change) was organised last year between SSCP and the London NERC DTP, and was a rousing success. This year, with the addition of Reading and Surrey, we had almost 200 delegates attending, including a healthy proportion of supervisors and industry partners, with over 40 oral presentations and 40 posters from students at the various institutions. The conference was held in the Physics building at Imperial College, a literal stone’s throw from the Royal Albert Hall.
Organising the conference was a daunting task; there was a lot of work shared between the nine PhD students on the committee! One of the challenges (but also one of the most exciting parts of the conference) was the sheer variety of research being presented. Many of the attendees were from the Met department, but there were also students from Chemistry and Geography in SCENARIO, and students from the London institutions working on topics as varied as sociology, ecology, biology, materials science and plate tectonics. This made for a really interesting conference, since there was so much on offer from such a wide range of fields, but it made our lives quite difficult when trying to organise keynote speakers and sort abstracts!
As well as the student presentations, we also ran workshops and panel discussions, and had two invited keynote speakers. The workshops were about communicating science through social media and about getting published in one of the Nature journals (similar to the successful workshop run by SCENARIO here at Reading). The panel discussions were themed around “Science and Development” and “Science in a post-truth world”, looking at ways in which science (particularly that within the NERC remit) can help to meet the UN’s Sustainable Development Goals, and how we communicate science in a time of “fake news”.
Perhaps my favourite part of the conference was the two keynote speakers. Finding speakers who would appeal to the majority of people attending was no easy task, given the huge range of disciplines!
Opening the conference, Marcus Munafo, Professor of Biological Psychology at Bristol University, spoke about the “reproducibility crisis” and how incentive structures affect the scientific process. I can honestly say it was one of the most thought-provoking lectures I’ve ever been to. His main argument was that, ultimately, science is done by people who have an incentive to do certain things (e.g. publish in high-impact journals) for the benefit of their careers. This incentivisation means that one “big result” can often mean more for someone’s career than all the work they’ve done previously, even if that result is retracted or proven false later on (and he went on to demonstrate that this happens a lot). One of the statistics he presented was that the higher the impact factor of a journal, the higher the chance of retraction, which I found really interesting and which certainly made me re-evaluate the way I approach my own work.
The other keynote speaker was Lucy Hawkes, Senior Lecturer in Physiological Ecology at Exeter, who talked about her work and career, particularly the “biologging” of animals and the study of their migratory patterns. Aside from all the great anecdotes and stories (like swimming with sharks in order to plant bio-tags on them), from a meteorologist’s perspective it was fascinating to hear how these migratory patterns change with the climate.
Of course, any conference worth its salt has entertainment and things outside work. A BBQ was hosted in the courtyard underneath the Queen’s Tower, and drinks and comedy (the Science Showoff) were held in the wonderfully titled hBar at Imperial. The Science Showoff in particular was really good, hosted by a professional comedian but with most of the material coming from PhD students at the various institutes (although shamefully no-one from Met volunteered).
One of the other really useful parts was meeting students from disparate fields at the other institutions. As Joanna Haigh (director of the SSCP DTP) said in her closing speech, the people we meet at these conferences will be our colleagues for our entire careers, so it’s really important to get to know people socially and professionally. In the end I think it went really well, and I’m certainly looking forward to seeing the London students again at next year’s conference!
From the last week of June until the 1st of September, I took part in the Met Office Training and Research (MOTR) programme, as part of the Mathematics of Planet Earth CDT.
Inspired by the highly popular and successful Geophysical Fluid Dynamics Summer School at the Woods Hole Oceanographic Institution in the USA, it is a 10-week programme hosted by the Met Office in Exeter. The PhD students have the opportunity to tackle an applied research topic outside the area of their PhD, diversify their portfolio and experience the working and social life at the Met Office.
In the first two weeks we participated in a summer school. In the first week there was a lecture course on “Regional Climate Variability and Change”. In the morning, lectures were given by David Karoly from the University of Melbourne on patterns of climate change, starting from the basic concepts of the climate system and then expanding to climate change attribution. In the afternoon we had specialist lectures by Met Office and University of Exeter scientists about El Niño, modelling paleoclimates and the attribution of extreme weather events.
In addition, in the afternoons we did lab work in pairs using Climate Explorer, choosing a specific continent and investigating past climate and future climate projections for that area. My colleague and I selected South America and gave a presentation about it.
During the second week we participated in the workshop “Future opportunities to inform UK regional projections”, which included a lecture by Ed Hawkins, amongst others, on sources of uncertainty.
From the third week onward, each student started a research project in a different Met Office research group. Another colleague and I worked within the Atmospheric Processes and Parametrization (APP) group, supervised by Gabriel Rooney. My project was on numerical simulations and theoretical aspects of colliding density currents. Other colleagues were placed within the Climate Science and Dynamics groups and the Informatics Lab, a partner of the Met Office.
A typical day for us at the Met Office started at 9 am, with a meeting with Gabriel almost every day at 9.45 am, a coffee break at 10.30 for half an hour or so (where I also met Annelize, previously in Reading), a one-hour lunch break and then “working” again until 5.30 pm or so.
Once a week we also had a meeting with the smaller convection group, where everyone was asked to give an update on their own work. We attended journal club sessions on Fridays and a brainstorming meeting on 21st July. It was nice to take part in these events, even as summer interns. During the project phase we also had weekly advanced seminars by Glenn Shutts and Mike Cullen, mainly about large-scale dynamics and the hierarchies of operational models used in NWP.
Personally, it was a wonderful experience for several reasons. The Met Office is a very pleasant place to work, with very friendly and flexible people. Since my project was quite academic, I did not find many differences from working at a university. Nevertheless, interacting with new people in a new environment provided me with new inspiration and insights. I had the chance to talk with several scientists and also a Chief Meteorologist, since I was curious about the activities carried out in the operations room and how much communication there is with the research side. There were some social and sports events organised by MOSSA (the Met Office Social and Sports Association) which I really enjoyed (a picnic and a sports day), giving me the chance to meet and talk with people from other research divisions.
Finally, to top it off, I explored Exeter and the surrounding area, mainly during the weekends and the five days of holiday agreed at the beginning, even going to Cornwall for two days.
In the end, I would like to thank all of the organisers, my supervisor and all the people I talked with for giving me and my colleagues this very valuable opportunity, which I will always keep in mind for my future career.
If there’s one thing you can count on in Britain it’s that at any given time someone, somewhere, is talking about rain. Either it’s raining, or we want it to rain, or it absolutely mustn’t be raining today. It’s just one of those things we love to complain about – and we do!
My work isn’t in forecasting rain, but in observing it. It’s a little-known fact that the rainfall map you see on the front page of the Met Office website doesn’t just come straight out of one giant weather radar, neatly packaged. A lot of work has to happen before we can turn the “reflectivities” from many different radar echoes into a sensible estimate of how heavily it’s raining on your street.
The Met Office owns and manages a network of 15 weather radars across the UK, and receives data from three more, in Ireland and the Channel Islands. We’ve now almost completed a major upgrade of the network, replacing key components of the old radars – some of which had been running operationally for over 30 years! – with new technology. The dual polarisation and Doppler information we obtain from the upgraded radars improves our ability to distinguish between “rain” and “non-rain” echoes, and to measure how fast the rain is moving, feeding improvements in short range “nowcasts” and flood forecasting models. It can also help us estimate the quantity of rainfall and other types of precipitation in real time.
For my PhD, I’m looking at how to improve Met Office estimates of surface rain rates from radar measurements at long range. When a radar measures weather, it does so with a beam of energy that spreads out with distance. At 20 km from the radar, the echo received represents a volume of space around 600 m by 400 m by 400 m. At 50 km, the volume is 600 m by roughly 1 km by 1 km. By 100 km, the beam has spread out to be 2 km wide. This effect is called “beam broadening”, and it limits the spatial detail with which we can measure rainfall.
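The numbers above follow directly from simple geometry: at long range, the linear width of the beam is roughly the range multiplied by the beamwidth in radians. A quick sketch, assuming a one-degree beamwidth (a typical figure for operational weather radars, not a quoted Met Office specification):

```python
import math

def beam_width(range_km, beamwidth_deg=1.0):
    """Approximate linear width (m) of a radar beam at a given range.

    Small-angle approximation: width = range * beamwidth in radians.
    The 1-degree beamwidth is an assumed, typical value.
    """
    return range_km * 1000 * math.radians(beamwidth_deg)

for r in (20, 50, 100):
    print(f"{r:3d} km -> beam ~{beam_width(r):.0f} m wide")
```

At 20 km this gives roughly 350 m, at 50 km roughly 900 m, and at 100 km roughly 1.7 km, consistent with the rounded figures quoted above.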
The other effect of range is the increasing height of the radar beam above the ground. This means the radar isn’t always measuring liquid rain drops, but may be measuring frozen ice crystals or snow, high up in colder parts of the atmosphere, which will melt before they reach the surface. Snow, melting snow and rain all have different reflectivities, so we have to correct for this “vertical profile” to calculate how much rain is falling at ground level.
The Met Office corrects for the vertical reflectivity profile (VPR) using an iterative scheme (Kitchen et al. 1994; Kitchen 1997). We know the rough shape of the VPR and the amount of beam broadening at a given range, so we can scale this shape to match the actual radar reflectivity measurement. This allows us to correct for the impact of melting snow, which causes a huge enhancement of the radar measurement and would, if uncorrected, lead to extreme overestimates of rainfall. In the early days of weather radar, this caused rings of very high rain rates to appear in the image: an effect called “bright banding”. VPR correction also compensates for the underestimation of rain rates at very long distances from the radar.
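To see why the beam-averaged measurement needs correcting, here is a toy calculation with an idealised bright-band VPR. The profile shape and every number in it are assumptions for illustration only, not the operational Kitchen et al. profile:

```python
import numpy as np

def vpr_db(z, freezing=2500.0, bb_depth=700.0, bb_peak=7.0):
    """Idealised bright-band VPR in dB relative to surface rain.

    Uniform rain below the melting layer, an (assumed) 7 dB bright-band
    enhancement through it, and reflectivity falling off in snow above.
    """
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    melting = (z > freezing - bb_depth) & (z <= freezing)
    snow = z > freezing
    out[melting] = bb_peak * np.sin(np.pi * (freezing - z[melting]) / bb_depth)
    out[snow] = -6.5e-3 * (z[snow] - freezing)
    return out

def beam_average_db(centre, halfwidth):
    """Reflectivity seen by a Gaussian beam, averaged in linear units."""
    z = np.linspace(max(centre - 2 * halfwidth, 0.0), centre + 2 * halfwidth, 401)
    w = np.exp(-0.5 * ((z - centre) / (halfwidth / 2)) ** 2)
    lin = 10 ** (vpr_db(z) / 10)
    return 10 * np.log10(np.sum(w * lin) / np.sum(w))

# A narrow low beam sees only rain; a broadened, elevated beam at long range
# intersects the melting layer and reports an apparent enhancement.
for centre, hw in [(500.0, 200.0), (2300.0, 1000.0)]:
    print(f"beam at {centre:.0f} m: apparent offset {beam_average_db(centre, hw):+.1f} dB")
```

The positive offset for the elevated beam is exactly the bright-band enhancement that, left uncorrected, would be converted into an overestimated rain rate at the surface.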
In my work, I’m using measurements from the upgraded dual polarisation radar network to choose between different VPR shapes in making this correction. Specifically, I’m investigating the depolarising properties of different melting drops, to identify the rare situations where we don’t need to correct for “bright banding”. The linear depolarisation ratio (LDR), which I’m using to identify the large melting snowflakes that cause radar bright bands, has to be measured using different scans from the ones used to collect reflectivities for rainfall rates, and the Met Office is one of very few meteorological services capable of measuring LDR operationally. Using LDR in this way can improve rainfall estimates significantly in cases where there is no bright band (Smyth and Illingworth, 1998; Sandford et al., in press).
As a logical extension to this work, I’m also looking into new VPR shapes for “non-bright band” rain, using vertical slice “range height indicator” scans from our research radar at Wardon Hill. Correction for bright band is well established in the radar literature, as this is the most common type of rain at high latitudes (where the freezing level is low enough to affect radar measurements), but other types of VPR (e.g. Fabry and Zawadzki, 1995) are rarely discussed. With the improvements in classification achieved by the new LDR algorithm, a suitable VPR shape is needed to correct for underestimation far from the radar in cases without bright band. I’ve recently developed a test profile shape for non-bright band VPRs, and demonstrated improvements to rain rates in localised convective case studies. The method is currently being trialled for use in the Met Office’s operational radar processing software.
In the future it’s hoped that the work I’m doing will further improve the accuracy of Met Office rainfall estimates, particularly in thunderstorms and convective showers. And when the weather is doing things like this, that’s good to know!
Fabry, F. and I. Zawadzki, 1995: Long-term radar observations of the melting layer of precipitation and their interpretation. Journal of the Atmospheric Sciences, 52, 838-851.
Kitchen, M., 1997: Towards improved radar estimates of surface precipitation rate at long range. Quarterly Journal of the Royal Meteorological Society, 123, 145-163.
Kitchen, M., R. Brown. and A. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Quarterly Journal of the Royal Meteorological Society, 120, 1231-1254.
Sandford, C., A. Illingworth, and R. Thompson, in press: The potential use of the linear depolarisation ratio to distinguish between convective and stratiform rainfall to improve radar rain rate estimates. Journal of Applied Meteorology and Climatology.
Smyth, T. and A. Illingworth, 1998: Radar estimates of rainfall rates at the ground in bright band and non-bright band events. Quarterly Journal of the Royal Meteorological Society, 124, 2417-2434.
Recently in the department we have had a fair number of students submitting their PhD theses and awaiting or completing their viva.
For many students, at the start of the PhD the viva seems a long way off and is often thought of as a terrifying experience. So why do many PhD students come out of their viva saying that they enjoyed it? And is it really as XKCD portrays it?
With the help of some former PhD students (Hannah Bloomfield, Sammie Buzzard, Hannah Gough and Leo Saffin) we’ve come up with a summary of our own experiences and some advice for people just about to go in.
But before I get into that, I’ll briefly explain a little about the viva. The viva is (alongside writing the thesis) the examination for the PhD. It’s essentially an oral exam where you sit and talk about your thesis and the area surrounding your field. The viva can last anywhere between 90 minutes and 5 hours, depending on how much you have to talk about (and how much you or your examiners talk). The possible results are: fail; major corrections requiring another viva; pass with major corrections; pass with minor corrections (the most common); and pass with no corrections (very rare). At the end of the day, it’s the pass or fail that matters.
So what can you expect from a viva? Well, as with each PhD, each viva is different (hence this post being a collaborative effort). Even people’s nerves are different: some go in feeling confident, whilst others are fairly nervous (which of course is very understandable). I was certainly in the nervous camp, but I would have been disappointed if I wasn’t, because I always feel I perform better if I am nervous beforehand. Indeed, many of us who are initially nervous relax as soon as we get into the swing of things and the questions start flowing. Furthermore, many examiners (not all) know and understand that you will be nervous, and will immediately put you at ease by saying something along the lines of “I really enjoyed reading your thesis and you don’t need to be worried about the result.” This last statement is probably key for anyone going into the viva: by the time it gets to the viva, your examiners have already decided the result; the viva is mainly to check that you did the work.
Looking at the recent experiences of the PhD students, I have broadly classified the viva into three types: presentation, “traditional” and thesis-covering, described below.
Presentation (Hannah Gough):
Hannah was asked to produce a presentation for her viva. She found this useful, as it was a good way to settle into the viva and bring across the aims and key conclusions of her thesis, while highlighting what she felt were the most important figures. After the presentation, the examiners asked questions on her entire thesis, ranging from points of clarification to the wider implications of her work.
“Traditional” (Hannah Bloomfield, Sammie Buzzard and Leo Saffin):
The more “traditional” viva asks you to summarise your thesis for the first 3-5 minutes and then goes through the thesis, asking about the wider implications of your work and where it fits in, basic theory and parts of the thesis the examiners are unsure about (amongst other things).
Thesis covering (myself):
Essentially, all we did was go through my thesis cover to cover, discussing parts specifically related to my project (with some minor wider implications/knowledge) and the comments the examiners had on my work.
So why do people enjoy the viva? Well, there is a fairly simple answer. You’ve been doing the work for between three and four years, and now you get to discuss it in detail with examiners who can see that you know what you are talking about, and who will often ask interesting and thought-provoking questions that you either haven’t considered or didn’t view as important.
One other thing worth mentioning, before going on to our collective advice, is that most of the time (unless you spend a while talking about the basics of your area) the viva doesn’t feel like it is taking as long as it actually is (2 hours feels like 15 minutes – I’m not just saying that, it really does!). It’s essentially the old saying: “time flies when you are having fun”.
So, that’s a brief overview of the viva and our experiences, so how do you actually survive it? Our collective advice would be as follows:
You are the expert in your thesis – so don’t panic – your examiners don’t know as much about what you did as you do.
The examiners are not there to trick you, they are just checking that you did your work – they’ve already made the pass/fail decision.
Don’t be afraid to ask for breaks from time to time (your examiners may want a break too).
Don’t look at the clock (if there is one in the room). All you will then do is think about how long you have been in the viva.
Bring food (biscuits, etc) and enough to share with your examiners.
Prepare a simple 3-5 minute overview of your thesis and know it well – generally you will be asked to summarise your thesis.
It can be useful to read a couple of your external examiner’s papers – at the very least to find out a little bit about them.
Don’t be afraid to ask for questions to be explained in more detail, so you know exactly what the examiners want.
Eat something before you go in no matter how bad you feel.
Try and get a good night’s sleep beforehand.
Don’t be afraid to say how you would do things differently, after having had time to look back at it.
And it’s worth repeating: you are the expert in your thesis – so don’t panic!
With that, all I can say is: if you are facing a viva soon, good luck.
A special thanks to all the former PhD students that helped provide information for this blog: Hannah Gough, Hannah Bloomfield, Samantha Buzzard and Leo Saffin.
When modelling urban areas, vegetation is often ignored in an attempt to simplify an already complex problem. However, vegetation is present in all urban environments and it is not going anywhere… For reasons ranging from sustainability to improvements in human well-being, green spaces are increasingly becoming part of urban planning agendas. Incorporating vegetation is therefore a key part of modelling urban climates. Vegetation provides numerous (dis)services in the urban environment, each of which requires individual attention (Salmond et al. 2016). One of my research interests is how vegetation influences the aerodynamic properties of urban areas.
Two aerodynamic parameters can be used to represent the aerodynamic properties of a surface: the zero-plane displacement (zd) and aerodynamic roughness length (z0). The zero-plane displacement is the vertical displacement of the wind-speed profile due to the presence of surface roughness elements. The aerodynamic roughness length is a length scale which describes the magnitude of surface roughness. Together they help define the shape and form of the wind-speed profile which is expected above a surface (Fig. 1).
Figure 1: Representation of the wind-speed profile above a group of roughness elements. The black dots represent an idealised logarithmic wind-speed profile which is determined using the zero-plane displacement (zd) and aerodynamic roughness length (z0) (lines) of the surface.
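As a concrete illustration of the role these two parameters play, the idealised profile in Fig. 1 can be written as u(z) = (u*/κ) ln((z − zd)/z0). A short sketch, with all numerical values assumed for illustration, including the common rules of thumb zd ≈ 0.7H and z0 ≈ 0.1H for a surface with mean roughness-element height H (cf. Grimmond and Oke 1999):

```python
import math

def log_wind_profile(z, u_star, z_d, z_0, kappa=0.4):
    """Logarithmic wind speed u(z) = (u*/kappa) * ln((z - z_d) / z_0).

    Heights in metres, u_star (friction velocity) in m/s.
    Valid only well above the roughness elements, i.e. for z - z_d >> z_0.
    """
    return (u_star / kappa) * math.log((z - z_d) / z_0)

# Illustrative values (assumed, not from the study): mean building height
# H ~ 10 m, so z_d ~ 0.7*H and z_0 ~ 0.1*H, with u* = 0.3 m/s.
H, u_star = 10.0, 0.3
z_d, z_0 = 0.7 * H, 0.1 * H
for z in (15, 25, 50, 100):
    print(f"z = {z:3d} m: u ~ {log_wind_profile(z, u_star, z_d, z_0):.1f} m/s")
```

Increasing zd shifts the whole profile upwards, while increasing z0 slows the wind at every height, which is why estimated wind speeds are so sensitive to the choice of method for determining these parameters.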
For an urban site, zd and z0 may be determined using three categories of methods: reference-based, morphometric and anemometric. Reference-based methods compare the site to previously published pictures or look-up tables (e.g. Grimmond and Oke 1999); morphometric methods describe zd and z0 as a function of roughness-element geometry; and anemometric methods use in-situ observations. The aerodynamic parameters of a site may vary considerably depending upon which of these methods is used, but efforts are being made to understand which parameters are most appropriate for accurate wind-speed estimations (Kent et al. 2017a).
Within the morphometric category (i.e. using roughness-element geometry) sophisticated methods have been developed for buildings or vegetation only. However, until recently no method existed to describe the effects of both buildings and vegetation in combination. A recent development overcomes this, whereby the heights of all roughness elements are considered alongside a porosity correction for vegetation (Kent et al. 2017b). Specifically, the porosity correction is applied to the space occupied and drag exerted by vegetation.
The development was assessed across several areas typical of a European city, ranging from a densely-built city centre to an urban park. The results demonstrate that where buildings are the dominant roughness elements (i.e. taller and occupying more space), vegetation does not obviously influence the calculated geometry of the surface, the aerodynamic parameters or the estimated wind speed. However, as vegetation begins to occupy more space and becomes as tall as (or taller than) the buildings, its influence is obvious. As expected, the implications are greatest in an urban park, where overlooking vegetation means that wind speeds may be slowed by up to a factor of three.
Up to now, experiments such as those in the wind tunnel have focused upon buildings or trees in isolation. Future experiments which consider both buildings and vegetation together will certainly be valuable for continuing to understand the interactions within and between these roughness elements, in addition to assessing the parameterisation.
Grimmond CSB, Oke TR (1999) Aerodynamic properties of urban areas derived from analysis of surface form. J Appl Meteorol and Clim 38:1262-1292.
Kent CW, Grimmond CSB, Barlow J, Gatey D, Kotthaus S, Lindberg F, Halios CH (2017a) Evaluation of Urban Local-Scale Aerodynamic Parameters: Implications for the Vertical Profile of Wind Speed and for Source Areas. Boundary-Layer Meteorology 164: 183-213.
Kent CW, Grimmond CSB, Gatey D (2017b) Aerodynamic roughness parameters in cities: Inclusion of vegetation. Journal of Wind Engineering and Industrial Aerodynamics 169: 168-176.
Salmond JA, Tadaki M, Vardoulakis S, Arbuthnott K, Coutts A, Demuzere M, Dirks KN, Heaviside C, Lim S, Macintyre H (2016) Health and climate related ecosystem services provided by street trees in the urban environment. Environ Health 15:95.