Solving Spatial Optimization Problems via Lagrangian Relaxation and Automatic Gradient Computation

Bibliographic Details
Main Authors: Zhen Lei, Ting L. Lei
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: ISPRS International Journal of Geo-Information
Online Access: https://www.mdpi.com/2220-9964/14/1/15
Description
Summary: Spatial optimization is an integral part of GIS and spatial analysis. It involves making various decisions in space, ranging from the location of public facilities to vehicle routing and political districting. While useful, such problems (especially large problem instances) are often difficult to solve using general-purpose mathematical programming solvers, precisely because of those solvers' generality. A traditional alternative is Lagrangian relaxation, which, if well designed, can be fast and optimal. In this approach, one derives the Lagrangian dual problem and its (sub)gradients and moves toward the optimal solution via a search process such as gradient descent. Despite its merits, Lagrangian relaxation requires the (sub)gradients to be derived manually, which is error-prone and makes the solution algorithm difficult to develop and highly dependent on the model at hand. This paper aims to ease the development of Lagrangian relaxation algorithms for GIS practitioners by employing the automatic (sub)gradient (autograd) computation capabilities originally developed for modern deep learning. Using the classic <i>p</i>-median problem as an example, we demonstrate how a Lagrangian relaxation algorithm can be developed with paper and pencil, and how the (sub)gradient derivation can be automated using autograd. The human expert then only needs to implement the Lagrangian problem in a scientific computing language (such as Python), and the system can find the (sub)gradients of this code, even if it contains complex loops and conditional statements. We verify that the autograd version of the algorithm is equivalent to the original version with manually derived gradients. By automating the (sub)gradient computation, we significantly lower the cost of developing a Lagrangian algorithm for the <i>p</i>-median, and such automation can be applied to numerous other optimization problems.
ISSN:2220-9964