A Python 3 implementation of the Weak-form Sparse Identification of Nonlinear Dynamics (WSINDy) algorithm for partial differential equations, based on the JCP paper by D. A. Messenger and D. M. Bortz (2021).

- See the original authors' MATLAB code repository.
- Also see the PyWSINDy for ODEs code repository.
- For other existing implementations, see the PySINDy documentation.
The core functionality of this codebase is contained within two files:

- `wsindy.py`: the fundamental WSINDy class definition.
- `helper_fcns.py`: a collection of utilities and helper functions.
The `wsindy_examples.ipynb` notebook illustrates the Weak SINDy algorithm being applied to various spatiotemporal systems:

- **KURAMOTO-SIVASHINSKY**: numerical simulation of the $(1+1)$-dimensional Kuramoto-Sivashinsky equation (pictured above). See the `KS.txt` file (1.3 MB). The data were sourced from this GitHub repository.
- **SWIFT-HOHENBERG**: numerical simulation of the $(2+1)$-dimensional Swift-Hohenberg (SH23) equation. Simulated data were obtained using MATLAB's Chebfun package; see `sh23_simulation.m`.
- **MHD EQUATIONS**: numerical simulation of forced turbulence in the $(3+1)$-dimensional incompressible MHD equations, sourced from the Johns Hopkins Turbulence Database.
Note: to access a dataset stored in Google Drive (e.g., `/content/drive/My Drive/WSINDy/dataset_name.txt`) while using Google Colab, use the following commands to mount the drive and change directories.

```python
from google.colab import drive
drive.mount('/content/drive')

# Create the directory if necessary
!mkdir -p "/content/drive/My Drive/WSINDy"
%cd /content/drive/My Drive/WSINDy

# Download the source files
!wget -q https://raw.githubusercontent.com/SethMinor/PyWSINDy-for-PDEs/main/wsindy.py
!wget -q https://raw.githubusercontent.com/SethMinor/PyWSINDy-for-PDEs/main/helper_fcns.py
```

See `environment.yml`. This codebase uses the following modules and naming conventions:
```python
import torch
import scipy
import numpy as np
import itertools
import symengine as sp
import torch.linalg as la
from scipy.signal import convolve
from scipy.special import factorial
import matplotlib.pyplot as plt
from tqdm import tqdm
from IPython.display import display, Math

from wsindy import *
from helper_fcns import *
```

For a dataset `U`, derivative library `alpha`, function library `beta`, and coordinate axes `X`, the syntax to create a class instance is:
```python
model = wsindy(U, alpha, beta, X, **params)
```

For example, in one spatial dimension:

```python
# Coordinate axes (x,y,...,t)
X = [x, t]

# Candidate derivatives (dx,dy,...,dt)
alpha = [[0,1],  # d/dt
         [0,0],  # 1
         [1,0],  # d/dx
         [2,0],  # d^2/dx^2
         [3,0],  # d^3/dx^3
         [4,0]]  # d^4/dx^4

# Candidate monomial powers for (u1,...,ud) := u1
beta = [[0],  # u1^0 = 1
        [1],  # u1^1
        [2]]  # u1^2

# Candidate monomial powers for (u1,...,ud) := (u1,u2)
beta = [[0,0],  # u1^0 * u2^0 = 1
        [1,0],  # u1^1 * u2^0
        [0,1]]  # u1^0 * u2^1
```

```python
params = {
    'V': [],                    # Extra variables [u2,...,ud]
    'names': None,              # Variable names ['u_1',...,'u_d']
    'm': None,                  # Test fcn support [mx,...,mt]
    'p': None,                  # Test fcn degrees [px,...,pt]
    's': None,                  # Subsampling [sx,...,st]
    'jacobian': 1.,             # Volume element
    'tau': 1e-10,               # Test fcn tolerance
    'tau_hat': 2,               # Fourier test fcn tolerance
    'verbosity': True,          # Print out details
    'rescale': True,            # Use preconditioner for LS solves
    'init_guess': [10,1,10,0],  # [x0, y0, m1, m2], for (kx,kt) curve fit
}
```
```python
# Create standard library terms
[G, powers, derivs, rhs_names] = model.create_default_library()

# Set the lhs
lhs_name = 'u' + model.derivative_names[0]
model.build_lhs(lhs_name)

# Set the library
model.set_library(G, powers, derivs, rhs_names)

# Find sparse weights
w = model.MSTLS()
model.print_report()
```

Specify a custom sequence of terms by applying a derivative $\mathcal{D}^i$ to a function $f_j(u_1, \dots, u_d)$:
```python
G = [tensor1, ..., tensorN]
powers = [list1, ..., listN]
derivs = [int1, ..., intN]
rhs_names = [string1, ..., stringN]

model.set_library(G, powers, derivs, rhs_names)
display(Math(r'\Theta=' + r'\{' + r', \, '.join(rhs_names) + r'\}'))
```

For non-homogeneous functions, set the corresponding `powers[j] = [0, ..., 0]`.
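Once a library is set, the sparse weights solve the regression $Gw \approx b$ by thresholded least squares. For reference, here is a simplified sequentially-thresholded loop with a fixed, hypothetical threshold `lam`; this is an illustrative sketch, not the class's actual `MSTLS` implementation (which also selects its threshold automatically):

```python
import numpy as np

def stls(G, b, lam=0.1, iters=10):
    """Sequentially thresholded least squares: zero out |w| < lam, re-solve."""
    w = np.linalg.lstsq(G, b, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(w) < lam
        w[small] = 0.0
        if (~small).any():
            w[~small] = np.linalg.lstsq(G[:, ~small], b, rcond=None)[0]
    return w

# Synthetic check: b depends only on columns 0 and 2 of G
rng = np.random.default_rng(0)
G = rng.normal(size=(100, 5))
b = 2.0 * G[:, 0] - 1.5 * G[:, 2]
w = stls(G, b)
print(np.round(w, 2))  # recovers [2, 0, -1.5, 0, 0]
```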
Also see the following helper functions:

- `compute_weak_poly(...)`
- `compute_weak_multipoly(...)`
- `compute_weak_trig(...)`

Convolved tensors should be evaluated over query points $\{(\boldsymbol{x}_k, t_k)\}$ using `tensor[model.conv_mask]`. For example:
```python
term = compute_weak_poly(model.U, kernels, model.spacing, yu=model.yu, yxyt=yxyt)
evaluated_term = term[model.conv_mask]
```

```python
# Trim out the redundant u_x term
remove_cols = [6]
print(f'Removing: {rhs_names[6]}\n')

for column in sorted(remove_cols, reverse=True):
    G.pop(column)
    powers.pop(column)
    derivs.pop(column)
    rhs_names.pop(column)

display(Math(r'\Theta=' + r'\{' + r',\,'.join(rhs_names) + r'\}'))
del remove_cols, column
```

Augment the library to include an advection term, $(\boldsymbol{u} \cdot \nabla)\zeta$:
```python
print(f'Combining: {rhs_names[14]}, {rhs_names[31]}')

columns = [14, 31]
coeffs = [1, 1]
name = '(𝘂·∇)ζ'
info = [G, powers, derivs, rhs_names]
[G, powers, derivs, rhs_names] = composite_term(columns, coeffs, name, model, info)
```
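Conceptually, combining columns amounts to replacing the listed library columns with their weighted sum. A minimal NumPy analogue of that column arithmetic (a hypothetical helper for illustration, not the actual `composite_term` signature):

```python
import numpy as np

def combine_columns(G, columns, coeffs):
    """Drop the listed columns of library G and append their weighted sum."""
    new_col = sum(c * G[j] for j, c in zip(columns, coeffs))
    kept = [g for j, g in enumerate(G) if j not in columns]
    return kept + [new_col]

# Toy library of three constant columns
G = [np.ones(3), 2.0 * np.ones(3), 3.0 * np.ones(3)]
G_new = combine_columns(G, columns=[0, 2], coeffs=[1, 1])
print(len(G_new), G_new[-1])  # → 2 [4. 4. 4.]
```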