
Physics-Inspired AI

Led by Bethany Lusch from ALCF ([email protected]) and Shawn Rosofsky ([email protected]) from the University of Illinois at Urbana-Champaign.

Outline:

  1. Overview of approaches (by Bethany, see below)
  2. Invariances in CNNs (by Bethany, Rotated-Mnist)
  3. Physics-Informed Neural Networks (PINNs) (by Shawn)
  4. Physics-Informed DeepONets (by Shawn)

Overview

There are many ways to incorporate domain knowledge, such as known physics, into AI, and this is a rapidly growing area of research. Incorporating prior knowledge into AI can have many benefits, such as:

  • A more accurate or simpler model, or less need for training data, because the model doesn't have to "rediscover" patterns we already know
  • More interpretable models
  • Models that are more robust or trustworthy because, for example, they do not violate conservation of energy
  • Leveraging the generalizability of physical laws or equations to make AI models more generalizable

For these reasons and more, incorporating domain knowledge in ML and AI was listed as a grand challenge in the Department of Energy's AI for Science Report.

Some examples of existing approaches to incorporating physics in AI are:

  1. Embedding symmetries or invariances in a network, as we will see in the Rotated-Mnist notebook (a minimal sketch follows this list).
  2. Relatedly: applying physical constraints, such as conservation laws.
  3. Creating custom loss functions.
  4. Carefully choosing input representations so that all relevant physical information is provided.
  5. Constraining the network to learn a solution to a known differential equation, as we will see in the Physics-Informed Neural Networks (PINNs) section (a minimal sketch appears at the end of this overview).
  6. Learning an operator network to generalize PDE solutions across changes such as different initial conditions, as we will see in the Physics-Informed DeepONets section.
  7. Learning governing differential equations from data, under the assumption that only certain types of terms are allowed and/or that the equation has only a few terms.
  8. Learning a hybrid model, such as having a neural network learn only the behavior not covered by given differential equations. (Related terms: gray-box modeling, closure modeling, and discrepancy modeling.)
  9. Training an ML model as a surrogate for only part of a simulation, then feeding the prediction back into the simulation. (See this previous ALCF tutorial on coupling simulations and ML at our SDL workshop.)
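
As a concrete illustration of item 1, here is a minimal PyTorch sketch (our own, not code from the notebooks) of one simple way to build in rotation invariance: average a classifier's outputs over the four 90-degree rotations of its input, which makes the wrapped model exactly invariant to those rotations.

    import torch
    import torch.nn as nn

    class RotationAveraged(nn.Module):
        """Wraps a classifier so its output is averaged over the four
        90-degree rotations of the input, making predictions exactly
        invariant under those rotations."""

        def __init__(self, base_model: nn.Module):
            super().__init__()
            self.base_model = base_model

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x has shape (batch, channels, height, width)
            outputs = [self.base_model(torch.rot90(x, k, dims=(2, 3)))
                       for k in range(4)]
            return torch.stack(outputs).mean(dim=0)

More sophisticated approaches, such as group-equivariant convolutions, build the symmetry into the layers themselves rather than averaging over it.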

For more, check out the "ML for Physics and Physics for ML" tutorial from NeurIPS 2021 by Shirley Ho and Miles Cranmer. They cover many ways to use "physics-informed inductive biases" across three categories: energy, geometry, and differential equations.
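
To make items 3 and 5 concrete before moving on: a PINN turns the differential equation itself into a loss term, using automatic differentiation to compute the derivatives the equation needs. Below is a minimal PyTorch sketch on a toy problem of our own choosing (not one of the tutorial's examples): solving u'(x) = -u(x) with u(0) = 1 on [0, 2], whose exact solution is exp(-x).

    import torch
    import torch.nn as nn

    # A small fully-connected network mapping x -> u(x)
    net = nn.Sequential(
        nn.Linear(1, 32), nn.Tanh(),
        nn.Linear(32, 32), nn.Tanh(),
        nn.Linear(32, 1),
    )
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(5000):
        # Random collocation points where the ODE residual is enforced
        x = (2.0 * torch.rand(128, 1)).requires_grad_(True)
        u = net(x)
        # du/dx via automatic differentiation -- the "physics-informed" part
        du_dx = torch.autograd.grad(
            u, x, grad_outputs=torch.ones_like(u), create_graph=True
        )[0]
        residual_loss = ((du_dx + u) ** 2).mean()               # u' + u = 0
        bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
        loss = residual_loss + bc_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The same recipe extends to PDEs: sample collocation points in space and time, compute the needed partial derivatives with autograd, and penalize the PDE residual alongside the initial and boundary conditions.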

Conda Environment

For these notebooks, we use a custom conda environment.

  1. Install the Jupyter kernel for the environment. There are two ways to do this: from the Jupyter notebook or from the terminal via ssh.

    • from the Jupyter notebook: copy the following code, paste it into a new cell in the notebook, and run it:

      # Activate the shared environment and register it as a Jupyter kernel
      !source activate /lus/grand/projects/ALCFAITP/physics-inspiredAI/env;\
      python -m ipykernel install --user --name physics-inspiredAI
      
    • from the terminal via ssh:

      # Log in to Theta
      ssh [email protected]
      
      # Log in to a ThetaGPU service node
      ssh thetagpusn1
      
      # Load Anaconda
      module load conda/2021-09-22
      
      # Use Anaconda to activate the environment we've prepared for you
      conda activate /lus/grand/projects/ALCFAITP/physics-inspiredAI/env
      
      # Install the new Jupyter kernel to use
      python -m ipykernel install --user --name physics-inspiredAI
  2. Change the notebook's kernel to physics-inspiredAI (you will need to change the kernel the first time you open each notebook):

    1. select Kernel in the menu bar
    2. select Change kernel...
    3. select physics-inspiredAI from the drop-down menu
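
To confirm the notebook picked up the new kernel, a quick optional check (our suggestion, not part of the original instructions) is to print the interpreter path, which should point inside the environment activated above:

    # Run in a notebook cell after selecting the physics-inspiredAI kernel
    import sys
    print(sys.executable)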