Learning neural operators on Riemannian manifolds

Bibliographic Details
Main Authors: Chen Gengxiang, Liu Xu, Meng Qinglu, Chen Lu, Liu Changqing, Li Yingguang
Format: Article
Language:English
Published: Science Press 2024-04-01
Series:National Science Open
Subjects:
Online Access:https://www.sciengine.com/doi/10.1360/nso/20240001
Description
Summary: Learning mappings between functions (operators) defined on complex computational domains is a common theoretical challenge in machine learning. Existing operator learning methods mainly focus on regular computational domains, and many of their components rely on Euclidean structure in the data. However, many real-life operator learning problems involve complex computational domains such as surfaces and solids, which are non-Euclidean and are widely referred to as Riemannian manifolds. Here, we report a new concept, the neural operator on Riemannian manifolds (NORM), which generalises neural operators from Euclidean spaces to Riemannian manifolds and can learn operators defined on complex geometries while preserving a discretisation-independent model structure. NORM reduces the function-to-function mapping to a finite-dimensional mapping in the subspace spanned by the Laplacian eigenfunctions of the geometry, and retains the universal approximation property even with only one fundamental block. Theoretical and experimental analyses demonstrate the strong performance of NORM in operator learning and show its potential for many scientific discoveries and engineering applications.
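The core idea described in the summary — encoding a function on a manifold as coefficients over Laplacian eigenfunctions, applying a learned finite-dimensional mapping, and decoding back — can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a cycle graph as a discrete stand-in for a closed manifold, the graph Laplacian's eigenvectors as stand-ins for Laplace-Beltrami eigenfunctions, and an identity matrix where NORM would use a trained neural network block; the choices of `n`, `k`, and the test function are illustrative assumptions.

```python
import numpy as np

# Discrete stand-in for a closed 1-D manifold: a cycle graph with n nodes.
n, k = 64, 8  # n grid points; k = size of the truncated spectral subspace

# Graph Laplacian of the cycle: L = D - A
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors of L play the role of Laplacian eigenfunctions of the geometry.
eigvals, eigvecs = np.linalg.eigh(L)
phi = eigvecs[:, :k]  # first k eigenfunctions, shape (n, k)

# An input function sampled at the nodes (low-frequency, so it lies in span(phi)).
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(x) + 0.5 * np.cos(2 * x)

# Encode: function -> k spectral coefficients (the finite-dimensional space).
coeffs = phi.T @ f

# Stand-in for the learned finite-dimensional mapping; in NORM this would be
# a trained neural network block acting on the coefficients.
W = np.eye(k)
mapped = W @ coeffs

# Decode: coefficients -> function on the manifold.
g = phi @ mapped

# Since f lies in the span of the first k eigenfunctions, the encode/decode
# round trip is (numerically) lossless.
err = float(np.linalg.norm(g - f))
print(err)
```

Note that the same encode-map-decode structure is what makes the model discretisation-independent: the learned mapping acts on the k spectral coefficients, not on the n sample points, so the mesh resolution can change without changing the mapping.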
ISSN:2097-1168