Authors:
(1) Cody Rucker, Department of Computer Science, University of Oregon (corresponding author);
(2) Brittany A. Erickson, Department of Computer Science, University of Oregon and Department of Earth Sciences, University of Oregon.
Table of Links
- Abstract and 1. Context and Motivation
- 2. Physics-Informed Deep Learning Framework
- 3. Learning Problems for Earthquakes on Rate-and-State Faults
- 4. 2D Verification, Validation and Applications
- 5. Summary and Future Work, and References
Abstract
Direct observations of earthquake nucleation and propagation are few, and yet the next decade will likely see an unprecedented increase in indirect, surface observations that must be integrated into modeling efforts. Machine learning (ML) excels in the presence of large data and is an actively growing field in seismology. However, not all ML methods incorporate rigorous physics, and purely data-driven models can predict physically unrealistic outcomes due to observational bias or extrapolation. Our work focuses on the recently emergent Physics-Informed Neural Network (PINN), which seamlessly integrates data while ensuring that model outcomes satisfy rigorous physical constraints. In this work we develop a multi-network PINN for both the forward problem as well as for direct inversion of nonlinear fault friction parameters, constrained by the physics of motion in the solid Earth, which have direct implications for assessing seismic hazard. We present the computational PINN framework for strike-slip faults in 1D and 2D subject to rate-and-state friction. Initial and boundary conditions define the data on which the PINN is trained. While the PINN is capable of approximating the solution to the governing equations with low error, our primary interest lies in the network’s capacity to infer friction parameters during the training loop. We find that the network for the parameter inversion at the fault performs much better than the network for material displacements to which it is coupled. Additional training iterations and model tuning resolve this discrepancy, enabling a robust surrogate model for solving both forward and inverse problems relevant to seismic faulting.
Keywords: physics-informed neural network, rate-and-state friction, earthquake, inverse problem, fully dynamic
1. Context and Motivation
Faults are home to a vast spectrum of event types, from slow aseismic creep and slow-slip events, to megathrust earthquakes followed by postseismic afterslip. The Cascadia subduction zone in the Pacific Northwest, for example, hosts several types of slow earthquake processes including low (and very low) frequency earthquakes, non-volcanic tremor (NVT) and slow-slip events (SSE) [22], but also large, fast earthquakes, the last of which was a magnitude ∼9 in the year 1700 [1]. Understanding the physical mechanisms for such diversity of slip styles is crucial for mitigating the associated hazards, but major uncertainties remain in the depth-dependency of frictional properties at fault zones, which affect fault locking and therefore rupture potential [3, 41]. Direct observations of earthquake nucleation and propagation are few, and yet the next decade will likely see an unprecedented increase in indirect, surface observations that could be integrated into modeling efforts [2].
Traditional numerical approaches for solving the partial differential equations (PDEs) governing earthquake processes (e.g. finite difference methods) have seen incredible growth in the past century, in particular in terms of convergence theory and high-performance computing. Traditional methods employ a mesh (either a finite number of grid points/nodes or elements) and a range of time-integration schemes in order to obtain an approximate solution whose accuracy depends directly on the mesh size (with error decreasing with decreasing node spacing or element size). This mesh dependency introduces limitations when high resolution is needed, and while traditional methods give rise to a forward problem, solving inverse problems requires additional machinery and can be prohibitively expensive [27, 16]. In addition, noisy or sparse data cannot be seamlessly integrated into the computational framework of traditional methods.
Machine learning (ML), on the other hand, excels in the presence of large data and is an actively growing field in seismology, with applications ranging from earthquake early warning (EEW) to ground-motion prediction [31]. However, not all ML methods incorporate rigorous physics, and purely data-driven models can predict physically unrealistic outcomes due to observational bias or extrapolation [53]. A new Deep Learning technique has recently emerged called the Physics-Informed Neural Network (PINN), which seamlessly integrates sparse and/or noisy data while ensuring that model outcomes satisfy rigorous physical constraints. PINNs do not outperform traditional numerical methods for forward problems (except in high-dimensional settings) [26], but they offer advantages over traditional numerical methods in that both forward and inverse problems can be solved in the same computational framework. However, the majority of PINN applications are currently limited to simple mechanical models, forward problems and/or do not incorporate real-world observations [15, 42, 25]. Here we introduce a new, physically-rigorous modeling framework for both forward and inverse problems that can be integrated with observational data in order to better understand earthquake fault processes. This Deep Learning approach will vastly expand our computational abilities to explore and infer relevant parameter spaces responsible for slip complexity, and the conditions that enable the world’s largest earthquakes.
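To make the composite nature of PINN training concrete, the following is a minimal sketch of a PINN-style loss: a physics term penalizing the PDE residual at collocation points plus a data term penalizing misfit to observed or boundary values. The toy ODE u'' + u = 0, the candidate functions, and the finite-difference derivatives are illustrative assumptions, not the paper's elastodynamic problem; in an actual PINN, u would be a neural network and the derivatives would come from automatic differentiation.

```python
import numpy as np

def pinn_style_loss(u, x_interior, x_data, u_data, h=1e-4):
    """Composite PINN-style loss for the toy ODE u''(x) + u(x) = 0.

    The physics term penalizes the PDE residual at collocation points;
    the data term penalizes misfit to "observed" values. Here u is a
    plain function and u'' is approximated by central differences,
    purely for illustration.
    """
    # Physics residual: u'' + u, approximated by central differences.
    u_xx = (u(x_interior + h) - 2.0 * u(x_interior) + u(x_interior - h)) / h**2
    physics_loss = np.mean((u_xx + u(x_interior)) ** 2)
    # Data misfit where the solution is observed (here, boundary data).
    data_loss = np.mean((u(x_data) - u_data) ** 2)
    return physics_loss + data_loss

# Collocation points in the interior and observations consistent with
# the exact solution sin(x).
x_col = np.linspace(0.1, 3.0, 50)
x_obs = np.array([0.0, np.pi / 2])
u_obs = np.array([0.0, 1.0])

loss_exact = pinn_style_loss(np.sin, x_col, x_obs, u_obs)       # near zero
loss_wrong = pinn_style_loss(lambda x: x, x_col, x_obs, u_obs)  # much larger
```

Minimizing this single scalar over network parameters is what lets one framework handle both forward problems (data term = initial/boundary conditions) and inverse problems (unknown physical parameters appear in the residual and are trained alongside the network weights).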
Though the PINN framework lacks the robust error analysis that comes with traditional methods, a large number of publications have emerged since ∼2017 which aim to customize PINNs through the use of different activation functions, gradient optimization techniques, neural network architecture, and loss function structure [7]. Careful formulations of the loss function using the weak form of the PDE have been proposed for constructing deep learning analogues of Ritz [52] and Galerkin [28, 29, 23] methods, which use numerical quadrature to reduce the order of the PDE, resulting in a simpler learning problem [6, 14]. In tandem, statistical learning theory has been used to deduce global error bounds for PINNs in terms of optimization error, generalization error, and approximation error [32]. For wide but shallow networks utilizing hyperbolic tangent activation functions, the approximation error has been shown to be bounded over Sobolev spaces [9]. Bounds on PINN generalization error have been derived for linear second-order PDE [47] (later extended to all linear problems [48]) and some specific cases like Navier-Stokes [8]. Moreover, the abstract framework for PINNs can leverage stability of a PDE to provide conditions under which generalization error is small whenever training error is small for both forward and inverse problems [39, 38]. More recently, a PINN-specific optimization algorithm has achieved markedly improved accuracy over other optimization algorithms by incorporating a PDE energy into the backpropagation step [40]. In addition to this rapid framework development, PINNs have been shown to perform well on a variety of physical problems like Navier-Stokes [51, 49, 24], convection heat transfer [5], solid mechanics [19, 18], and the Euler equations [35].
In this work we focus specifically on rate-and-state friction [e.g. 46], an experimentally-motivated, nonlinear friction law that is capable of reproducing a wide range of observed earthquake behaviors and is used in nearly all modern dynamic rupture and earthquake cycle simulations [20, 12]. A better understanding of the depth-dependency of rate-and-state parameters, which correlate directly with fault locking and seismic rupture potential, is a fundamental task [3, 41]. To address this task we develop a multi-network PINN for modeling a vertical, strike-slip frictional fault embedded in an elastic half-space, and consider deformation in both 1D and 2D. The paper is organized as follows: In section 2 we first provide an overview of the physics-informed deep learning framework and PINN architecture for general initial-boundary-value problems, in order to best describe the implementation to our application problem. In section 3 we provide specific details of the PINN framework applied to our problems, first illustrated in 1D with an example forward problem, then further developed to include inverse problems in 2D. In section 4 we report details of our optimal network architecture and training methods, verifying our methods with a manufactured solution to ensure accuracy of our inversions. We conclude with a summary and discussion of future work in section 5.
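As background for readers unfamiliar with the friction law, a minimal sketch of the standard Dieterich-Ruina form of rate-and-state friction with the aging law for state evolution follows. The parameter values (a, b, f0, V0, Dc) are illustrative placeholders, not values from this paper; the sketch shows why the sign of (a - b) controls velocity-weakening versus velocity-strengthening behavior, the depth-dependent property discussed above.

```python
import numpy as np

def rate_state_friction(V, theta, a=0.010, b=0.015, f0=0.6, V0=1e-6, Dc=0.01):
    """Dieterich-Ruina rate-and-state friction coefficient.

    f = f0 + a*ln(V/V0) + b*ln(V0*theta/Dc), with slip rate V (m/s),
    state variable theta (s), reference velocity V0 and critical slip
    distance Dc. Parameter values here are illustrative only.
    """
    return f0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def aging_law(V, theta, Dc=0.01):
    """State evolution d(theta)/dt = 1 - V*theta/Dc (aging law)."""
    return 1.0 - V * theta / Dc

def steady_state_friction(V, a=0.010, b=0.015, f0=0.6, V0=1e-6):
    """At steady state theta_ss = Dc/V, so f_ss = f0 + (a-b)*ln(V/V0)."""
    return f0 + (a - b) * np.log(V / V0)

# With b > a the fault is velocity-weakening: steady-state friction
# decreases as slip rate increases, a prerequisite for stick-slip events.
f_slow = steady_state_friction(1e-9)  # slow creep rate
f_fast = steady_state_friction(1e-3)  # fast slip rate
```

The logarithmic dependence on both slip rate and state, together with the nonlinear coupling to the elastodynamic equations at the fault, is what makes inverting for a and b a nontrivial learning problem.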
This paper is available on arxiv under CC BY 4.0 DEED license.