Of all the research agendas dominating the intersection of scientific research, business and technology, gathering useful weather and climate data sits near the top. This past year alone has brought big news such as the UK Met Office’s procurement of the most powerful supercomputer dedicated to weather and climate forecasting. But, at the risk of sounding like a true defeatist to some, perhaps sometimes solving the problem isn’t about size but about smarts? The paradox, though, is that to get smarter you need a bigger feed of information. And so it goes…
The less information you have about something, the harder it is to predict its behaviour and, thus, its future. Herein lies the key issue with predicting extreme weather events: there just isn’t enough past information about them to make a judgement about when the next one will occur. In AI-speak, there are not enough historical data points to create an accurate model of the extreme weather event being forecast. Extreme weather events are, in every sense, extremely rare.
To understand just how few data points we are working with here, we go to Fabio Porto, Founder of the DEXL Lab in Brazil.
“For Rio de Janeiro city, from 2006 to now we have probably 100 events, and extreme weather events are those where rainfall is beyond like 30 mm/hour”. It’s not an awfully low number, but it has led to Fabio taking the lead on an interesting approach, which we’re going to walk through below to break down just how he is tackling this scientific problem – in the true informal style of Digital Anthroconomy.
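To make that definition concrete, here is a minimal sketch in Python of flagging “extreme” hours against a 30 mm/hour cutoff. The rainfall record here is simulated (an exponential toy distribution, not Rio’s actual gauge data), but it gives a feel for how few hours survive the threshold.

```python
import numpy as np

# Hypothetical hourly rainfall record (mm/hour): illustrative simulated values,
# not real Rio de Janeiro observations.
rng = np.random.default_rng(seed=42)
hours = 24 * 365 * 18  # roughly 2006 to now
hourly_rainfall_mm = rng.exponential(scale=4.0, size=hours)

EXTREME_THRESHOLD = 30.0  # mm/hour, the cutoff Fabio mentions

# Flag the hours that count as "extreme" events.
is_extreme = hourly_rainfall_mm > EXTREME_THRESHOLD

print(f"Hours in the record:        {hours}")
print(f"Hours above the threshold:  {int(is_extreme.sum())}")
print(f"Fraction that is 'extreme': {is_extreme.mean():.5f}")
```

With these made-up numbers you end up with a few dozen “extreme” hours out of roughly 160,000 – the same order of scarcity Fabio describes.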
So at the most basic level, Fabio is an expert at answering 3 specific questions –
1. When is it going to rain heavily?
2. Where is the rain going to fall?
3. What volume of rain will fall?
And he needs to be able to answer these questions within a maximum window of 6 hours, which complicates things slightly.
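Put in programming terms, the answer Fabio’s system has to produce looks roughly like the little structure below. This is my own sketch of the target, not his actual schema; the region label and the numbers are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

MAX_LEAD_TIME = timedelta(hours=6)  # the forecast has to land within this window

@dataclass
class RainfallNowcast:
    issued_at: datetime          # when the forecast was produced
    event_start: datetime        # 1. when is it going to rain heavily?
    region: str                  # 2. where is the rain going to fall?
    expected_mm_per_hour: float  # 3. what volume of rain will fall?

    def is_within_window(self) -> bool:
        """The nowcast is only useful if the event falls inside the 6-hour window."""
        return timedelta(0) <= (self.event_start - self.issued_at) <= MAX_LEAD_TIME

forecast = RainfallNowcast(
    issued_at=datetime(2024, 1, 10, 12, 0),
    event_start=datetime(2024, 1, 10, 16, 30),
    region="Zona Sul",  # hypothetical sub-region label
    expected_mm_per_hour=42.0,
)
print(forecast.is_within_window())  # True: 4.5 hours ahead, inside the window
```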
Most current weather forecasting comes from the domain of meteorology, where numerical models / simulations of the global climate are pretty good for long-term predictions. To assemble these numerical simulations of the climate, we need to gather data on the “initial conditions” of the environment, such as the atmospheric conditions and other related information. Gathering all this data and feeding it into the models takes a lot of time (and energy).
During extreme weather conditions, these “initial conditions” change often. On top of this, when it comes to extreme weather, as we’ve established, time is not a resource we have a lot of. In fact, there is so little time that we need to be able to process this volatile data in real-time (as and when it arrives). And so, we’re faced with a very basic problem: “how do we set up a continuously updated feed of these initial atmospheric conditions, such that short-term models and weather predictions can be accurate?”.
Here’s where the AI comes in – because in contrast to numerical simulations, AI models are very fast at inference (whilst training the models is the more costly and energy-intensive part).
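A toy illustration of that asymmetry, in plain NumPy and with nothing weather-specific about it: “training” below grinds through two thousand passes over a synthetic dataset, while “inference” is a single forward pass on one new observation.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))          # 10k synthetic samples, 50 features
true_w = rng.normal(size=50)
y = X @ true_w + rng.normal(scale=0.1, size=10_000)

# --- Training: thousands of gradient steps over the whole dataset (the slow part) ---
w = np.zeros(50)
start = time.perf_counter()
for _ in range(2_000):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.01 * grad
train_seconds = time.perf_counter() - start

# --- Inference: one forward pass on a new observation (the fast part) ---
x_new = rng.normal(size=50)
start = time.perf_counter()
prediction = x_new @ w
infer_seconds = time.perf_counter() - start

print(f"Training took   {train_seconds:.3f} s")
print(f"Inference took  {infer_seconds * 1e6:.1f} microseconds")
```

Real weather models are vastly bigger, but the shape of the trade-off is the same: pay the cost once up front, then predict quickly when every minute counts.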
We can use neural net-based AI models in 2 ways here –
1. We can use AI models to process the real-time weather data, and then feed that into the original numerical simulation. OR
2. We (or I should say Fabio) can switch entirely to a deep learning approach. (Both options are sketched just below.)
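Very loosely, and with every function name below invented purely for illustration (none of this is Fabio’s code), the two options differ in where the neural network sits in the pipeline:

```python
import numpy as np

def neural_preprocessor(raw_observations: np.ndarray) -> np.ndarray:
    """Stand-in for an AI model that turns real-time observations into the
    'initial conditions' a numerical simulation expects. (Hypothetical.)"""
    return raw_observations.mean(axis=0)  # placeholder computation

def numerical_simulation(initial_conditions: np.ndarray) -> float:
    """Stand-in for a classical physics-based forecast model. (Hypothetical.)"""
    return float(initial_conditions.sum())  # placeholder computation

def end_to_end_deep_model(raw_observations: np.ndarray) -> float:
    """Stand-in for a deep learning model that maps observations straight to a
    forecast, skipping the numerical simulation entirely. (Hypothetical.)"""
    return float(raw_observations.max())  # placeholder computation

observations = np.random.default_rng(1).normal(size=(100, 8))

# Option 1: AI prepares the initial conditions, physics does the forecasting.
forecast_hybrid = numerical_simulation(neural_preprocessor(observations))

# Option 2: deep learning all the way through.
forecast_deep = end_to_end_deep_model(observations)

print(forecast_hybrid, forecast_deep)
```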
The reason Fabio is trying to switch entirely to a deep learning approach is that he understands the core problem: “there is not enough extreme weather data on initial atmospheric conditions to accurately forecast extreme events based on just the data that comes from rain gauges and weather stations”. And so, bearing in mind that AI computations are very good at running in parallel, Fabio has decided that it’s better to process lots of different data from different sources in parallel, rather than the limited data from just one source. In particular, he is interested in enriching his current data with data from other sources, such as radar data, ocean temperature data, and satellite lightning data – “We’re still in the previous phase, which is trying to understand how to identify sub-regions of the data that will lead to the models that I want to build”. From an AI-models perspective, I’ll refer to this as the “pre-training phase”.
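As a rough picture of what “enriching with other sources” can mean in practice (the grid size, units and variable names here are my own illustration, not DEXL’s actual pipeline), the different feeds end up as channels of a single input tensor that a model can chew through in parallel:

```python
import numpy as np

rng = np.random.default_rng(7)
GRID = (64, 64)  # a hypothetical spatial grid over the city

# Each source arrives on its own grid; here they are all synthetic.
rain_gauge_field   = rng.gamma(2.0, 1.5, size=GRID)            # interpolated station rainfall (mm/h)
radar_reflectivity = rng.normal(20.0, 5.0, size=GRID)           # radar echoes (dBZ)
sea_surface_temp   = rng.normal(26.0, 0.5, size=GRID)           # ocean temperature (deg C)
lightning_density  = rng.poisson(0.3, size=GRID).astype(float)  # satellite strikes per cell

def normalise(field: np.ndarray) -> np.ndarray:
    """Scale each channel so no single source dominates the others."""
    return (field - field.mean()) / (field.std() + 1e-8)

channels = [rain_gauge_field, radar_reflectivity, sea_surface_temp, lightning_density]
model_input = np.stack([normalise(c) for c in channels], axis=0)

print(model_input.shape)  # (4, 64, 64): four sources processed side by side
```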
I won’t go into the further science of the 3D graphical analysis, because the aim of this brand is to minimise and make understandable the first principles. But the approach he is taking follows the current paradigm shift characteristic of the age of AI, which is about processing a breadth of information from an array of sources in parallel, and creating a specialised model for each specialised sub-area. In Fabio’s work, we characterised this as “multi-model analysis”. This took us on to the broader conversation that summarises much of Fabio’s recent interest in this field.
“So do you think right now, scientifically, we’re in the era of specialising or generalising?”
In essence, we are in an era of model specialisation in the global AI roadmap. But the real answer is a bit deeper than this.
“Learning is a process that optimises to the average”, Fabio says. What this means is that if you collect lots of data with a very large distribution spread (i.e. lots and lots of general data about a broad question), you will never reach the optimal information for the specific question you are asking. So, in essence, to answer a more specific question, or build a more specific model, you need to optimise your data collection to be tailored around that specific question. The challenge, however, is that as you collect more of the specialised data, you need to ensure that the original data distribution does not change, so that you don’t change reality. You can think of this challenge in the same frame of mind as “getting so specialised in something that you lose perspective of why you are doing it in the first place” (a challenge that, to me, seems to summarise a social paradigm within which we exist today).
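“Optimises to the average” can be shown with almost no machinery: under a squared-error loss, the single best constant prediction is simply the mean of your data, and on a broad dataset that mean says almost nothing about the rare extremes in the tail. The numbers below are synthetic, just to make the point.

```python
import numpy as np

rng = np.random.default_rng(3)

# A broad, mostly-ordinary dataset with a handful of extreme values mixed in.
ordinary = rng.exponential(scale=2.0, size=10_000)  # typical rainfall-like values
extreme  = rng.uniform(30.0, 80.0, size=10)         # the rare events we actually care about
data = np.concatenate([ordinary, extreme])

# Under mean-squared-error, the optimal constant prediction is just the mean.
best_constant = data.mean()

print(f"Optimal 'average' prediction:      {best_constant:.2f}")
print(f"Typical extreme value in the data: {extreme.mean():.2f}")
print(f"Error on an extreme event:         {extreme.mean() - best_constant:.2f}")
```

The average prediction sits around 2 mm/hour while the events that matter sit above 30 – exactly the gap that a model specialised on extreme-weather data is meant to close.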
In Fabio’s work, he is trying to answer these specific questions about the weather by building a cloud data lake (essentially a huge store of information in the cloud) to collect all the information relevant to his questions from sources such as the Navy and Aeronautics. And, whilst this is still at the early stage of identifying the data required to build the training models, weather prediction is a field that implicates nearly every major industry.
The smart should very much be paying attention to innovations in this space, whilst the wise should very much be paying attention to the broader battle between generalisation and specialisation.