Convolutional Cell Average-based Neural Network
The cell-average based neural network (CANN) method advances the local solution to the next time step using only the local solution at the current time step. This means that at each time step, the network must be applied repeatedly, once for each point in the domain. This limits the speed of the approximation, especially if one wishes to lift the method to higher dimensions.
The goal of extending CANN to a convolutional structure is to enable efficient lifting of solutions to higher dimensions and longer time scales. If this proves successful, we will also apply the method to the advection equation.
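The convolutional extension rests on the observation that applying the same local update network to every cell is exactly a sliding-window (convolution-style) operation. A minimal NumPy sketch of this equivalence, with a hypothetical linear map \texttt{local\_update} standing in for a trained network (the weights and 3-cell stencil are illustrative assumptions):

```python
import numpy as np

# Hypothetical local map standing in for a trained CANN network: it takes
# a stencil of 3 neighboring cell averages and returns the update for the
# center cell. Here it is just a fixed linear map.
WEIGHTS = np.array([0.25, -0.5, 0.25])

def local_update(stencil):
    return stencil @ WEIGHTS

def step_pointwise(u):
    """Apply the local map cell by cell (the original CANN loop)."""
    out = np.empty(len(u) - 2)
    for j in range(1, len(u) - 1):
        out[j - 1] = u[j] + local_update(u[j - 1 : j + 2])
    return out

def step_convolutional(u):
    """Same update expressed as one vectorized sliding-window pass."""
    windows = np.lib.stride_tricks.sliding_window_view(u, 3)
    return u[1:-1] + windows @ WEIGHTS

u = np.sin(np.linspace(0, 2 * np.pi, 64))
assert np.allclose(step_pointwise(u), step_convolutional(u))
```

The per-cell loop and the vectorized sliding-window pass compute identical updates; the convolutional form is what makes higher-dimensional and longer-time rollouts cheap.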
In traditional numerical methods for approximating solutions of partial differential equations (finite difference, finite volume, finite element, etc.), the time step size is restricted by a CFL condition, which is a necessary condition for convergence. For example, consider the 1D heat equation
\begin{equation}
    u_t = u_{xx}.
\end{equation}
For an explicit scheme, small time steps are needed to maintain stability, i.e.
\begin{equation}
    \Delta t \le \frac{\Delta x^2}{2}.
\end{equation}
An alternative is to use implicit methods, which can take larger time steps, but require solving a large linear system of equations at each time step.
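The stability restriction can be seen directly in a small numerical experiment (a sketch assuming $u_t = u_{xx}$ discretized with forward Euler and centered differences on a periodic grid; the grid size and step counts are illustrative choices): one run just below the bound $\Delta t = \Delta x^2/2$ decays smoothly, while one just above it blows up.

```python
import numpy as np

def heat_explicit(u0, dx, dt, steps):
    """Forward Euler + centered second difference, periodic boundaries."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u

n = 64
x = np.arange(n) / n
dx = 1.0 / n
# Smooth profile plus a tiny high-frequency perturbation, which is the
# component an unstable scheme amplifies fastest.
u0 = np.sin(2 * np.pi * x) + 1e-6 * (-1.0) ** np.arange(n)

stable = heat_explicit(u0, dx, dt=0.4 * dx**2, steps=200)    # dt < dx^2/2
unstable = heat_explicit(u0, dx, dt=0.6 * dx**2, steps=200)  # dt > dx^2/2

assert np.max(np.abs(stable)) < 1.0    # solution decays, stays bounded
assert np.max(np.abs(unstable)) > 1e3  # high-frequency mode blows up
```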
The model learns a map from the cell averages at the current time step to the cell averages at the next time step.
The labels are not labels in the traditional sense. Instead, they are the cell averages at the \textit{next} time step, since this is what the model is designed to predict.
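Concretely, such training pairs can be built by sliding over a precomputed trajectory of cell averages: each input is a local stencil at time level $n$, and its label is the center cell average at level $n+1$. A sketch assuming a trajectory array of shape (num\_times, num\_cells) and a 3-cell stencil (both illustrative assumptions, not fixed by the method):

```python
import numpy as np

def make_training_pairs(traj, stencil=3):
    """Build (input, label) pairs from a trajectory of cell averages.

    traj: array of shape (num_times, num_cells), where traj[n, j] is the
    cell average of cell j at time level n. Inputs are stencils at level
    n; labels are the center cell averages at level n + 1.
    """
    half = stencil // 2
    inputs, labels = [], []
    for n in range(traj.shape[0] - 1):
        for j in range(half, traj.shape[1] - half):
            inputs.append(traj[n, j - half : j + half + 1])
            labels.append(traj[n + 1, j])
    return np.array(inputs), np.array(labels)

# Synthetic trajectory: 5 time levels, 10 cells.
traj = np.random.default_rng(0).normal(size=(5, 10))
X, y = make_training_pairs(traj)
assert X.shape == (4 * 8, 3) and y.shape == (4 * 8,)
```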
Qiu and Yan introduced the original cell-average based neural network method. This method takes a finite volume approach to solving PDEs of the form
\begin{equation}
    u_t + f(u)_x = 0.
\end{equation}
Integrating over the cell $I_j = [x_{j-1/2}, x_{j+1/2}]$ and dividing by the cell size $\Delta x$ gives an evolution equation for the cell average $\bar{u}_j(t) = \frac{1}{\Delta x}\int_{I_j} u(x,t)\,dx$:
\begin{equation}
    \frac{d\bar{u}_j(t)}{dt} = -\frac{1}{\Delta x}\left( f\big(u(x_{j+1/2},t)\big) - f\big(u(x_{j-1/2},t)\big) \right).
\end{equation}
A neural network is used to approximate the right hand side integrated over one time step, leading to the update scheme
\begin{equation}
    \bar{u}_j^{n+1} = \bar{u}_j^n + \mathcal{N}\big(\vec{u}_j^{\,n};\, \Theta\big),
\end{equation}
where $\vec{u}_j^{\,n}$ is a local stencil of cell averages at time level $n$ centered on cell $j$, and $\Theta$ denotes the network parameters.
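Once trained, the network advances the solution by adding its output to each current cell average, so rolling the scheme forward is a simple loop. A sketch with a hypothetical stand-in \texttt{network} for the trained model (here it mimics one step of explicit diffusion; the 3-cell stencil and periodic boundary are illustrative assumptions):

```python
import numpy as np

def network(stencil):
    # Hypothetical stand-in for a trained CANN network: maps a 3-cell
    # stencil of current cell averages to the update for the center
    # cell. Here it mimics one stable step of explicit diffusion.
    return 0.1 * (stencil[0] - 2 * stencil[1] + stencil[2])

def cann_rollout(u0, num_steps):
    """Advance cell averages num_steps time levels (periodic domain)."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(num_steps):
        padded = np.concatenate(([u[-1]], u, [u[0]]))  # periodic halo
        u = np.array([u[j] + network(padded[j : j + 3])
                      for j in range(len(u))])
    return u

u0 = np.sin(2 * np.pi * np.arange(32) / 32)
u5 = cann_rollout(u0, 5)
assert u5.shape == u0.shape
assert np.max(np.abs(u5)) < np.max(np.abs(u0))  # diffusion-like decay
```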
The main advantage of the method is its ability to take larger time steps than a traditional numerical method.