Downscaling Global Economic Activity Data: A Spatial and Sectorial Perspective Using Bayesian Hierarchical Modelling

Oxford Programme for Sustainable Infrastructure Systems (OPSIS)

Introduction

Glossary

BHM - Bayesian Hierarchical Model

MCMC - Markov Chain Monte Carlo

PDF - probability density function

RV - random variable

Marginal Probability - PDF of a single RV, obtained by integrating out the others

Pycnophylactic - mass preserving

Where we left off

  • Pycnophylactic condition Tobler (1979)
  • Rescaling, using Overture POIs and their categories as source for density distribution of activity on the ground.

Pros

  • (too ?) Simple
  • Efficient
  • Global

Cons

  • Did not use additional available information: sector-level data, regional accounts from local sources, other global layers such as population, non-residential built infrastructure, etc.
  • Univariate model
  • Hard constraint
  • No flexibility

Can we do better?

YES

But at a bigger cost.

The methodology was significantly expanded to use Bayesian Hierarchical Modelling (BHM) (Schoot et al. 2021).

Bayesian inference goes back to Thomas Bayes (18th century). It was long avoided due to technical complications, but has made a comeback since the 1980s and is now used across a wide range of disciplines, from physics and engineering to economics, social sciences and medicine.

Pros

  • Can handle complex data
  • Embeds uncertainty about parameters and predictions
  • Embeds prior (expert) knowledge
  • Flexibility

Cons

  • Computationally demanding
  • Requires careful prior selection
  • Sensitive to model specification

Bayesian modelling is generally complex: theory and formalism (Bayes' rule, Markov Chain Monte Carlo, information theory, statistics…), a software ecosystem that is a world of its own, PPLs (Probabilistic Programming Languages)…

As the name suggests, at the core of this method lies the familiar Bayes rule. Let us remind ourselves: if we have two phenomena (expressed as RVs), we might ask the following kind of question:

What is the probability of a joint event, or of a conditional event, having observed the outcome of a single one?

BHM

In practice, this takes the form of a joint density \(p(x,z)\), and in such cases Bayes' rule tells us that

\[ p(x,z) = p(z) \cdot p(x|z) \\ p(x,z) = p(x) \cdot p(z|x) \]

Which can be combined into:

\[ p(x) = \frac{p(z) \cdot p(x|z)}{p(z|x)} = \frac{p(x|z)}{p(z|x)}\cdot p(z) \]

The term \(p(x|z)\) is called the likelihood. If we can estimate the RHS using \(z\), then we can estimate the marginal of \(x\).
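As a sanity check, both factorisations and the rearranged rule can be verified numerically on a small discrete joint distribution (the 2x2 table below is hypothetical, purely for illustration):

```python
import numpy as np

# Hypothetical joint distribution p(x, z) over two binary RVs.
p_xz = np.array([[0.1, 0.3],
                 [0.2, 0.4]])       # rows index x, columns index z

p_x = p_xz.sum(axis=1)              # marginal p(x)
p_z = p_xz.sum(axis=0)              # marginal p(z)
p_x_given_z = p_xz / p_z            # p(x|z): each column normalised
p_z_given_x = p_xz / p_x[:, None]   # p(z|x): each row normalised

# Both factorisations recover the joint: p(x,z) = p(z) p(x|z) = p(x) p(z|x).
assert np.allclose(p_z * p_x_given_z, p_xz)
assert np.allclose(p_x[:, None] * p_z_given_x, p_xz)

# Rearranged rule: p(x) = p(x|z) / p(z|x) * p(z), for any fixed z.
x, z = 0, 1
print(p_x_given_z[x, z] / p_z_given_x[x, z] * p_z[z])  # prints 0.4 = p_x[0]
```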

Now the most important point in BHM is that we apply this reasoning to the parameters of our model. In other words, we are asking the question:

What is the probability of observing certain data from my model under some parameters \(\mathbf{\theta}\)?

And we refine the parameters if we have some observations \(\{X_i\}\).

Simple example

Let’s say we have some data \(\{ X_i \}\) and we model it as \(X \sim \mathcal{N}(\mu,\sigma)\). Usually, we would use the data to estimate the values of \((\mu,\sigma)\), with both parameters fixed and computed from the data. But what if we let one parameter vary? In this case, the model becomes parametrised as \(p_X \equiv p_X(\cdot|\mu)\). Formally, let us model the parameters as RVs as well: the values we draw from our original model are conditioned on the parameter, which is itself sampled. This is where Bayesian thinking comes in. We define an additional probability density encoding our belief about the value of the mean, \(\mu \sim \mathcal{N}(\tau,\gamma)\). We now have a model defined not only for the data, but also for the parameters of our model. The distribution of \(\mu\), together with the one for \(X\), forms our prior, encompassing the best of our knowledge about the observed phenomenon. The parameters \((\tau,\gamma)\) are referred to as hyper-parameters and can be used to fine-tune our belief about the mean of the data we observe and sample down the line.
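The two-stage sampling scheme can be sketched as a prior predictive draw; the hyper-parameter values below are illustrative (loosely evoking human height in cm), not taken from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyper-parameters (illustrative values): our belief about the mean mu.
tau, gamma = 170.0, 10.0   # e.g. prior on mean human height in cm
sigma = 7.0                # fixed observation scale

# Hierarchical prior predictive draw: sample the parameter first,
# then sample data conditioned on it.
mu = rng.normal(tau, gamma)          # mu ~ N(tau, gamma)
x = rng.normal(mu, sigma, 10_000)    # X | mu ~ N(mu, sigma)

print(mu, x.mean())  # the sample mean tracks the drawn mu, not tau
```

Note that the marginal spread of \(X\) combines both scales (\(\gamma^2 + \sigma^2\)), which is exactly how the hierarchy propagates uncertainty.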

What about observed data ?

Let’s say now we have some observed sample data \(\{X_i\}\). Having laid out a general behaviour for the model, we can now turn to it and ask the question:

“What is the probability of observing the sample \(\{X_i\}\) in our model, conditioned on the parameter \(\mu\)?”

In other words, we are looking at \(p(\{X_i\}|\mu)\); recall Bayes' rule from an earlier slide.

Ex : Height distribution in human population

Learning from the data

From this point, the model has to learn the best possible parameters, given the observed data and the prior beliefs we communicated to it. This step relies on MCMC: sampling from the prior distribution and updating it based on the likelihood. At the end of this iterative process, we obtain the posterior distribution, which, under the given priors and for the observed samples, represents our best belief about the parameters of the data-generating model.

Posterior distribution

The posterior distribution emerges once we have adapted the prior using the likelihood we measure with respect to observed data. This step is similar to a learning epoch in the training process of Deep Learning.

MCMC

The method allows us to refine the posterior distribution by sampling synthetic data from the prior and adapting it to be more similar to the observed data at each iteration. Markov Chain Monte Carlo is a tool that allows us to do this.
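A minimal random-walk Metropolis sampler for the toy model above gives the flavour; this is a textbook sketch with illustrative values, not the sampler used in the project:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: X ~ N(mu, sigma) with prior mu ~ N(tau, gamma).
sigma, tau, gamma = 7.0, 170.0, 10.0
data = rng.normal(175.0, sigma, 200)   # synthetic observations

def log_post(mu):
    # Unnormalised log-posterior: log-likelihood plus log-prior.
    ll = -0.5 * np.sum((data - mu) ** 2) / sigma**2
    lp = -0.5 * (mu - tau) ** 2 / gamma**2
    return ll + lp

# Random-walk Metropolis: propose a move, accept with probability min(1, ratio).
mu, chain = tau, []
for _ in range(20_000):
    proposal = mu + rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
        mu = proposal
    chain.append(mu)

posterior = np.array(chain[5_000:])    # discard burn-in
print(posterior.mean())                # concentrates near the data mean
```

In practice a PPL handles this loop (and far better samplers) automatically, but the accept/reject mechanics are the same.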

Summary BHM

            Data                                   Parameters
Prior       \(f(.|\theta)\)                        \(\pi(\theta)\)
Posterior   \(f(.|\theta)\,\pi(\theta|\{X_i\})\)   \(\pi(\theta|\{X_i\}) \propto f(X|\theta) \cdot \pi(\theta)\)

Where the family of \(f\) is a choice of the user that can have a huge impact on the modelling.
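The proportionality in the table can be checked directly by grid approximation on the toy model from the earlier example (values illustrative): multiply likelihood by prior pointwise and normalise.

```python
import numpy as np
from scipy.stats import norm

# Toy model: X ~ N(mu, sigma) with prior mu ~ N(tau, gamma).
sigma, tau, gamma = 7.0, 170.0, 10.0
data = np.array([168.0, 174.0, 171.0, 169.0])

mu_grid = np.linspace(140.0, 200.0, 2001)
prior = norm.pdf(mu_grid, tau, gamma)                                  # pi(theta)
likelihood = np.prod(norm.pdf(data[:, None], mu_grid, sigma), axis=0)  # f(X|theta)

# Posterior proportional to likelihood * prior; normalise over the grid.
posterior = likelihood * prior
posterior /= posterior.sum()

post_mean = (mu_grid * posterior).sum()
print(post_mean)  # between the data mean (170.5) and the prior mean (170.0)
```

Grid approximation only works in one or two dimensions, which is precisely why MCMC is needed for realistic models.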

Spatial Economic Activity

How do we develop a downscaling model with this ?

After this rushed summary, we get into the specific details of our problem. The idea is to embed an econometric model predicting the fine-scale values into this Bayesian framework: on the one hand informing the behaviour at the fine spatial scale, dictated by the econometric model, and on the other hand checking that this behaviour aligns with our prior knowledge and constraints at the coarse spatial and sector scale through the BHM.

Spatial Hierarchy

Economic Hierarchy

 

Econometrics modelling

We use a linear model, inspired by (spatial) econometrics (Anselin 1988; Redding 2024; Zellner 1985), to inform the Bayesian method on how we expect our predictor variables to be linked to industry-level output. We apply this model at the fine resolution to every location that has some non-zero predictor variable.

\[ \mu_{S_i} = \sum_m \omega_m \cdot x_m \]

where \(\omega_m\) are learned parameters and \(x_m\) are the proxy variables that we assign to the relevant sectors. The result is a set of linear equations, one for each sector, with chosen variables for each sector. We select only a subset of the available variables for each output to reduce the dimensionality of the problem and simplify our prior.
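A minimal sketch of this per-sector predictor follows; the proxy names, sector names and values are hypothetical, purely to show the sector-specific variable subsets:

```python
import numpy as np

# Hypothetical proxy layers at fine resolution (5 locations); the names
# and values are illustrative, not the project's actual variables.
proxies = {
    "pop": np.array([1.0, 2.0, 0.5, 3.0, 1.5]),
    "nres_area": np.array([0.2, 0.8, 0.1, 1.2, 0.4]),
    "poi_count": np.array([3.0, 7.0, 1.0, 9.0, 4.0]),
}

# Each sector uses only its own subset of proxies, reducing dimensionality.
sector_vars = {
    "manufacturing": ["nres_area", "poi_count"],
    "services": ["pop", "poi_count"],
}

def linear_predictor(sector, omega):
    # mu_{S_i} = sum_m omega_m * x_m over the sector's chosen proxies.
    return sum(w * proxies[v] for v, w in zip(sector_vars[sector], omega))

mu_manu = linear_predictor("manufacturing", omega=[2.0, 0.5])
print(mu_manu)  # prints [1.9 5.1 0.7 6.9 2.8], per-location sector means
```

In the BHM, the \(\omega_m\) would themselves be RVs with priors, and this function would be evaluated inside the sampler.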

The linear model yields a value \(\mu_{S_i}\) for a specific sector, which the Bayesian method interprets as a mean value for that sector in a location, combined with an uncertainty metric \(\sigma\) measured beforehand as the average availability of data in the system. The tuple \((\mu_{S_i}, \sigma)\), with \(\sigma\) fixed and \(\mu_{S_i}\) obtained from the sampled linear combination of the \(\omega\) parameters, is then used as the parameter pair of a \(LogNormal\) model predicting the economic activity of a location.

\[Y_{S_i} \sim LogNormal(\mu_{S_i}, \sigma)\]

We integrate our coarse spatial and sectorial output constraints by adding aggregation layers, which sum up the high-resolution values and check them against the observed totals at whatever resolution those are available.

Further, we introduce a normal noise term \(\mathcal{N}(\mathcal{S_k}, \frac{\mathcal{S_k}}{10})\) on the reported outputs, which avoids setting hard constraints and helps down the line when dealing with uncertain reports.
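The generative step and the soft constraint can be sketched as follows; the fine-scale means, the fixed \(\sigma\) and the reported total are all hypothetical values, not the project's data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical fine-scale means from the linear model, with a fixed sigma.
mu_fine = np.array([1.9, 5.1, 0.7, 6.9, 2.8])
sigma = 0.3

# Y_{S_i} ~ LogNormal(mu_{S_i}, sigma): one draw of fine-scale sector activity.
y = rng.lognormal(mean=mu_fine, sigma=sigma)

# Soft constraint: the aggregated total should match the reported coarse
# output S_k up to the noise N(S_k, S_k / 10), rather than exactly.
S_k = 1.05 * y.sum()   # hypothetical reported total, 5% off the aggregate
log_soft = norm.logpdf(y.sum(), loc=S_k, scale=S_k / 10)
print(log_soft)        # finite log-density: a near-matching total stays plausible
```

A hard pycnophylactic constraint would assign probability zero to any mismatch; the normal noise instead penalises discrepancies smoothly, in proportion to the size of the report.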

All together

Data

Input1

Current

POIs, GHSL NRES, DOSE-WDI, Copernicus, GHSL pop, Bureau of Economic Analysis (BEA), ILOSTAT, UK Business Value Added, EU IO tables

Validation

  • Kummu
  • BEA
  • EU IO
  • Null model (OLD)

Subsectors

GEM, CGFI, Climatrace, Edgar, MAPSPAM

Catalogue

The catalogue is up on the cluster, with access and manipulation facilitated by the scalenav2 package.

Let’s see some results!

Selected Regions

FRA.13_1

Japan

Modelling Diagnostics

The BHM side of things

Validation 1

We use existing downscaled datasets in which one of the two dimensions (spatial, sectorial) is coarse, and reduce our data so that it can be validated at the resolution of the available data. An example is (Kummu, Taka, and Guillaume 2018b), where the data is total GDP at a fine spatial scale: we reduce the BHM output by aggregating each location's output across sectors.
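The reduction step is a plain group-and-sum; a minimal sketch with pandas, where the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical fine-scale BHM output: one row per (location, sector).
bhm = pd.DataFrame({
    "cell_id": ["a", "a", "b", "b", "b"],
    "sector": ["agri", "manu", "agri", "manu", "serv"],
    "output": [1.0, 2.5, 0.5, 3.0, 1.5],
})

# Aggregate each location's output across sectors to obtain a total per
# cell, directly comparable with gridded total-GDP products such as Kummu's.
total = bhm.groupby("cell_id")["output"].sum()
print(total.to_dict())  # prints {'a': 3.5, 'b': 5.0}
```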

FRA.13_1

JPN

Validation 2

Florida

Using national reports on per-sector productivity at coarse geographic resolution.

Reversing the question

The flexibility of the BHM allows us to reverse the use of the finer-resolution economic output data and use it as prior knowledge. This feature is work in progress and almost implemented.

Current challenges, limitations and next steps

Challenges

We use a mix of methods, benefiting from their strengths, but we also inherit their limitations and challenges.

  • Dimensionality

  • Econ model

    • Collinearity of proxies
    • Sensitive to spatial and sectorial resolution
  • BHM

    • Sampling in very high dimensions
    • Intricate Diagnostics
    • Complex formalism
    • Different software ecosystem and programming paradigm
    • Sensitive to spatial and sectorial resolution
  • Data

    • We are constantly looking for data that could be either used to inform our prior, proxy layers, or validation.
  • Predictive modelling

    • Requires further specifying what we are predicting

Next Steps

  • Running all regions on the cluster
  • Validation routines using incoming regional/national datasets of different resolutions and dimensions
  • Methods and data publication
  • Refinement of the model to handle the specific data available for some areas

Questions ?

References

Anselin, L. 1988. Spatial Econometrics. Studies in Operational Regional Science Ser v.4. Dordrecht: Springer Netherlands.
European Commission. Joint Research Centre. 2023. GHSL data package 2023. LU: Publications Office. https://data.europa.eu/doi/10.2760/098587.
Gaulier, Guillaume, and Soledad Zignago. 2010. “BACI: International Trade Database at the Product-Level. The 1994-2007 Version.” http://www.cepii.fr/CEPII/fr/publications/wp/abstract.asp?NoDoc=2726.
Janssens-Maenhout, Greet, Monica Crippa, Diego Guizzardi, Marilena Muntean, Edwin Schaaf, Frank Dentener, Peter Bergamaschi, et al. 2019. “EDGAR V4.3.2 Global Atlas of the Three Major Greenhouse Gas Emissions for the Period 1970–2012.” Earth System Science Data 11 (3): 959–1002. https://doi.org/10.5194/essd-11-959-2019.
Kummu, Matti, Maija Taka, and Joseph H. A. Guillaume. 2018b. “Gridded Global Datasets for Gross Domestic Product and Human Development Index over 1990–2015.” Scientific Data 5 (1): 180004. https://doi.org/10.1038/sdata.2018.4.
Redding, Stephen J. 2024. “Spatial Economics,” Working paper series, November. https://doi.org/10.3386/w33125.
Schoot, Rens van de, Sarah Depaoli, Ruth King, Bianca Kramer, Kaspar Märtens, Mahlet G. Tadesse, Marina Vannucci, et al. 2021. “Bayesian Statistics and Modelling.” Nature Reviews Methods Primers 1 (1): 1–26. https://doi.org/10.1038/s43586-020-00001-2.
Tobler, Waldo R. 1979. “Smooth Pycnophylactic Interpolation for Geographical Regions.” Journal of the American Statistical Association 74 (367): 519–30. https://doi.org/10.1080/01621459.1979.10481647.
Wenz, Leonie, Sven Norman Willner, Alexander Radebach, Robert Bierkandt, Jan Christoph Steckel, and Anders Levermann. 2015. “Regional and Sectoral Disaggregation of Multi-Regional Input–Output Tables: A Flexible Algorithm.” Economic Systems Research 27 (2): 194–212. https://doi.org/10.1080/09535314.2014.987731.
You, Liangzhi, Stanley Wood, Ulrike Wood-Sichra, and Wenbin Wu. 2014. “Generating Global Crop Distribution Maps: From Census to Grid.” Agricultural Systems 127 (May): 53–60. https://doi.org/10.1016/j.agsy.2014.01.002.
Zellner, Arnold. 1985. “Bayesian Econometrics.” Econometrica 53 (2): 253. https://doi.org/10.2307/1911235.