Indexed by EI and Scopus
A Chinese core journal

Buckling Analysis of Thin-Walled Structures Based on Physics-Informed Neural Networks

Feng Tangsijie, Liang Wei

Citation: Feng Tangsijie, Liang Wei. The buckling analysis of thin-walled structures based on physics-informed neural networks. Chinese Journal of Theoretical and Applied Mechanics, 2023, 55(11): 2539-2553. doi: 10.6052/0459-1879-23-277

doi: 10.6052/0459-1879-23-277
Funding: supported by the National Natural Science Foundation of China (U1837207)
Corresponding author: Liang Wei, Associate Professor. Research interests: composite material structures, dynamics and control. E-mail: liangwei@buaa.edu.cn

  • CLC number: TU311.2

THE BUCKLING ANALYSIS OF THIN-WALLED STRUCTURES BASED ON PHYSICS-INFORMED NEURAL NETWORKS

  • Abstract: A method for solving the nonlinear governing equations of thin-walled structure buckling is established based on physics-informed neural networks (PINN). The nonlinear governing equations of thin-walled structures can be written, in terms of the deflection and the stress function, as a complicated system of fourth-order nonlinear partial differential equations; solving them with a PINN removes the dependence of traditional numerical methods on a mesh over the solution domain. The neural network model updates its parameters according to a loss function built from weighted mean squared errors, and an outer iteration loop in the spirit of the arc-length method handles the iterative character of buckling problems. The arc-length method, hard boundary conditions, a pre-training-based weight-adjustment strategy, and an adaptive activation function strategy are integrated into the network optimization process, enabling the PINN to solve linear and nonlinear buckling problems more efficiently. Buckling modes and nonlinear post-buckling problems with imperfections are solved for two typical thin-walled structures, and the neural network solutions are compared with finite element results. The analysis shows that the physics-informed neural network method can effectively analyze the buckling of thin-walled structures without any labeled data, and that supplying additional labeled data improves the solution efficiency. Although the method converges more slowly than mature finite element solvers, it requires no manual preprocessing of the solution domain and is therefore feasible for some engineering applications.
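The hard boundary conditions mentioned in the abstract can be imposed by construction rather than by penalty. A minimal sketch, assuming a rectangular $a \times b$ plate with $w = 0$ on all four edges; the distance-function form below is a common choice for such constraints, not necessarily the paper's exact construction, and `net` is a hypothetical stand-in for the trained deflection network:

```python
import numpy as np

def hard_bc_deflection(net, x, y, a, b):
    """Multiply the raw network output by a distance-like function that
    vanishes on the boundary of the a x b plate, so w = 0 holds exactly
    on every edge for any network parameters."""
    g = x * (a - x) * y * (b - y)  # zero on x = 0, x = a, y = 0, y = b
    return g * net(x, y)

# hypothetical stand-in for the trained deflection network
net = lambda x, y: np.ones_like(x)
w = hard_bc_deflection(net, np.array([0.0, 0.5, 1.0]), np.full(3, 0.5), 1.0, 1.0)
```

Because the edge condition then holds identically, the corresponding part of the boundary loss term $\boldsymbol{MSE}_B$ no longer needs to be minimized, which is what makes the "hard" formulation attractive.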

  • Figure 1. Framework of the PINN network as a PDE solver

    Figure 2. Framework of the PINN PDE solver for buckling/post-buckling problems, with the arc-length method as the outer control loop

    Figure 3. Comparison of nonlinear buckling displacement contours of a homogeneous thin plate with deficiency (top: PINN results, bottom: FEM results)

    Figure 4. Thin-walled cylindrical shell and its coordinate system

    Figure 5. Comparison of nonlinear buckling displacement contours of a homogeneous cylindrical shell with deficiency at $LP{F_{\max }}$ (continued)

    Figure 6. End-shortening of the cylindrical shell against the post-buckling load factor LPF

    Figure 7. Comparison of deficiency sensitivity of cylindrical shells

    Figure 8. Influence of different weighting schemes on loss-function convergence

    Figure 9. Influence of different network parameters on loss-function convergence

    Figure 10. Influence of optimizers and solvers on computation time to convergence, with the Adam-optimized PINN as the baseline (averaged over the three boundary-condition cases)

    Figure 11. Influence of different optimizers on the loss function

    Figure 12. Local optimum that L-BFGS gets stuck in

    Figure 13. Influence of the activation function on the loss function in the homogeneous plate example

    Figure 14. A solution obtained with the relu activation function that violates the governing equation

    Figure 15. Influence of labeled data on loss-function convergence

    Figure 16. Computation time of the numerical examples and its composition

    Algorithm 1: PINN training with pre-training-based loss-function weights
    Input: maximum pre-training iterations $ite{r_{{\rm{pre}}}}$, maximum main-training iterations $ ite{r_{\max }} $, order-of-magnitude hyperparameter $ {\boldsymbol{\alpha }} $
    1: initialize the network weights and biases $\left[{{\boldsymbol{W}}}^{i},{{\boldsymbol{b}}}^{i}\right], i\in La$
    2: ${\left( {\tilde W,\tilde F} \right)_{ite{r_{{\rm{pre}}}}}} = \Im (x,y,\varTheta )$ (pre-train the network for ${N_{{\rm{pre}}}}$ epochs)
    3: $\begin{gathered}Los{s_{ite{r_{{\rm{pre}}}}}} = {\left\| {{\boldsymbol{MS}}{{\boldsymbol{E}}_P}} \right\|_1} + {\left\| {{\boldsymbol{MS}}{{\boldsymbol{E}}_B}} \right\|_1} + {\left\| {{\boldsymbol{MS}}{{\boldsymbol{E}}_I}} \right\|_1} \\ \lg \left( {{{\boldsymbol{w}}_P}} \right) = \left[ {\boldsymbol{\alpha }} \right] - \left[ {\lg {{\left( {{\boldsymbol{MS}}{{\boldsymbol{E}}_P}} \right)}_{{N_{{\rm{pre}}}}}}} \right] \\ \lg \left( {{{\boldsymbol{w}}_B}} \right) = \left[ {\boldsymbol{\alpha }} \right] - \left[ {\lg {{\left( {{\boldsymbol{MS}}{{\boldsymbol{E}}_B}} \right)}_{{N_{{\rm{pre}}}}}}} \right] \\ \lg \left( {{{\boldsymbol{w}}_I}} \right) = \left[ {\boldsymbol{\alpha }} \right] - \left[ {\lg {{\left( {{\boldsymbol{MS}}{{\boldsymbol{E}}_I}} \right)}_{{N_{{\rm{pre}}}}}}} \right] \end{gathered}$
    4: for $ epoch = 0 $ to $ epoch = ite{r_{\max }} $ do: (main training task)
    {
     ${\left( {\tilde W,\tilde F} \right)_{{\rm{epoch}}}} = \Im (x,y,\varTheta )$ (forward pass)
     $Los{s_{{\rm{epoch}}}} = {{\boldsymbol{w}}_{{P}}}{\boldsymbol{MS}}{{\boldsymbol{E}}_P} + {{\boldsymbol{w}}_{{B}}}{\boldsymbol{MS}}{{\boldsymbol{E}}_B} + {{\boldsymbol{w}}_{{I}}}{\boldsymbol{MS}}{{\boldsymbol{E}}_I}$
     Update $ \varTheta $ (backward pass)
    }
    End for
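Step 3 of Algorithm 1 can be read as: after pre-training, each loss term's weight is chosen so that the weighted term lands near the target magnitude $10^{\alpha}$. A small sketch, under the assumption that the bracket $[\cdot]$ denotes the nearest integer order of magnitude (the algorithm only writes the bracket notation, so this interpretation is ours):

```python
import math

def pretrain_weights(mse_terms, alpha=-4):
    """Pick each loss weight so that lg(w_k) = [alpha] - [lg(MSE_k)],
    i.e. the weighted term w_k * MSE_k starts near 10**alpha."""
    weights = []
    for mse in mse_terms:
        order = round(math.log10(mse))  # [lg(MSE_k)]: integer order of magnitude
        weights.append(10.0 ** (alpha - order))
    return weights

# pre-training MSE terms of very different magnitude (illustrative values)
w = pretrain_weights([1e-6, 3e-2], alpha=-4)
```

With these weights every term contributes at roughly the same order of magnitude, which is why the main training task in step 4 is no longer dominated by whichever residual happens to start largest.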
    Algorithm 2: Incremental arc-length-based solution for the PINN approximation
    Input: sampled point set $ \left( {{x_i},{y_i}} \right) $ on the plate/shell, initial load factor $ LP{F_0} $, error tolerance $ \varepsilon $, iteration limit $ ite{r_{\max }} $
    Output: approximate deflection $ \tilde W\left( {x,y} \right) $ of the buckling governing equations and its maximum $ {W_{\max }} $, with the corresponding load factor $ LP{F_{\max }} $
    1: construct a feedforward neural network $ \tilde W = \Im (x,y,LPF,\varTheta ) $ with $ La $ hidden layers, $ {N_u} $ neurons per layer, and $ ite{r_{\max }} $ training epochs
    2: sample $ {N_P} $ points inside the PDE solution domain and $ {N_B} $ points on its boundary, $ {N_S} = {N_P} + {N_B} $ points in total
    3: $ Loss = {{\boldsymbol{w}}_P}{\boldsymbol{MS}}{{\boldsymbol{E}}_P} + {{\boldsymbol{w}}_B}{\boldsymbol{MS}}{{\boldsymbol{E}}_B} + {{\boldsymbol{w}}_I}{\boldsymbol{MS}}{{\boldsymbol{E}}_I} $
    4: $ {\tilde W_0}\left( {{x_i},{y_i}} \right) = \Im \left( {{x_i},{y_i},LP{F_0},\varTheta } \right) $
    5: while $ \left| {LP{F_j} - LP{F_{j - 1}}} \right| > \varepsilon \;\&\; j \leqslant ite{r_{\max }} $ do:
    {
    $\Delta LP{F_{j + 1}} = \dfrac{{ - \displaystyle\sum\limits_{s = 1}^{{N_S}} {\left[ {{{\tilde W}_j}\left( {LP{F_j},{x_s},{y_s}} \right)\Delta {{\tilde W}_j}\left( {LP{F_{j + 1}},{x_s},{y_s}} \right)} \right]} }}{{{\omega ^2}LP{F_j} - \displaystyle\sum\limits_{s = 1}^{{N_S}} {\left[ {{{\tilde W}_j}\left( {LP{F_j},{x_s},{y_s}} \right)} \right]\dfrac{{\partial F}}{{\partial w}}\Delta LP{F_j}} }}$
    $ LP{F_{j + 1}} = LP{F_j} + \Delta LP{F_{j + 1}} $
    $ {\tilde W_{j + 1}}\left( {{x_i},{y_i}} \right) = \Im \left( {{x_i},{y_i},LP{F_{j + 1}},\varTheta } \right) $
    }
    6: return the best network parameters $ {\varTheta ^*} $
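Stripped of the PINN internals, the outer loop of Algorithm 2 is a simple fixed-point iteration on the load factor. In the sketch below, `step_fn` is a hypothetical stand-in for one retraining of the network plus the arc-length increment $\Delta LP{F_{j+1}}$ computed in step 5:

```python
def arc_length_outer_loop(step_fn, lpf0, eps=1e-6, iter_max=100):
    """Outer iteration of Algorithm 2: repeat until the load-factor
    increment falls below the tolerance or the limit is reached."""
    lpf, j = lpf0, 0
    delta = float("inf")
    while abs(delta) > eps and j < iter_max:
        delta = step_fn(lpf)  # retrain the PINN, return Delta LPF_{j+1}
        lpf += delta
        j += 1
    return lpf, j

# toy increment rule contracting toward LPF = 1.0 (illustrative only)
lpf, j = arc_length_outer_loop(lambda lpf: 0.5 * (1.0 - lpf), 0.0)
```

The convergence test $\left| LP{F_j} - LP{F_{j-1}} \right| > \varepsilon$ of step 5 corresponds to the `abs(delta) > eps` check here.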

    Table 1. The buckling load of the first buckling mode

    Boundary conditions | PINN results/(N·m−1) | FEM results/(N·m−1) | Relative error
    SSSS | 1466.2 | 1537.6 | 4.86%
    SCSC | 4092.1 | 4048.5 | 1.07%
    SCSF | 893.2 | 868.7 | 2.82%

    Table 2. The $ LP{F_{\max }} $ and $ {w_{\max }} $ of shell buckling with initial deficiency and their relative error

    Network parameters | $ LP{F_{\max }} $ | $ {w_{\max }} $ | Relative error ($ LP{F_{\max }} $) | Relative error ($ {w_{\max }} $)
    L = 3, Nu = 40, Adam | 0.522 | 3.3 | 5.66% | 4.76%
    L = 3, Nu = 40, L-BFGS | 0.455 | 3.64 | 8.57% | 15.5%
    L = 4, Nu = 40, Adam | 0.480 | 2.97 | 2.92% | 6.06%
    L = 4, Nu = 40, L-BFGS | 0.432 | 2.88 | 14.35% | 9.38%
    L = 4, Nu = 50, Adam | 0.531 | 2.77 | 7.48% | 13.7%
    L = 4, Nu = 50, L-BFGS | 0.399 | 2.99 | 23.8% | 5.35%

    Table 3. The buckling load of the first buckling mode trained with labeled data

    Boundary conditions | PINN results/(N·m−1) | FEM results/(N·m−1) | Relative error/%
    SSSS | 1721.8 | 1583.1 | 8.76
    SCSC | 4388.2 | 4169.4 | 5.24
    SCSF | 1009.5 | 894.7 | 12.77
    Nomenclature

    Variable name | Definition
    $ a $ | shell structure dimension
    $ h $ | shell structure thickness
    $ E $ | modulus of elasticity
    $ \nu $ | Poisson's ratio
    $ P $ | shell edge load
    $ U,V,W $ | displacement components
    $ \tilde W $ | fitted displacement generated by NN
    $ Q $ | middle surface stress
    $ F $ | Airy stress function
    $ D $ | bending stiffness
    $ {\varepsilon ^0} $ | middle surface strain
    $ La $ | number of NN hidden layers
    $ {N_u} $ | neurons per hidden layer
    $ {\boldsymbol{W}} $ | NN weight matrix
    $ {\boldsymbol{b}} $ | NN bias
    $ {\sigma _i} $ | NN activation function
    $ {\boldsymbol{MSE}} $ | mean squared error vector
    $ \varTheta $ | NN hyperparameters
    $ {{\boldsymbol{w}}_{P}},\;{{\boldsymbol{w}}_{B}},\;{{\boldsymbol{w}}_{I}} $ | weight vectors for the loss function
    $ {N_P} $ | sample points inside the PINN domain
    $ {N_B} $ | sample points on the PINN domain boundary
    $ {N_S} $ | total PINN sample points
    $ {N_T} $ | PINN test points
    $ LPF $ | load proportional factor
    $ lr $ | PINN learning rate
    $ ite{r_{{\rm{pre}}}} $ | maximum iterations for PINN pre-training
    $ ite{r_{\max }} $ | maximum iterations for PINN training
    $ {a_{mn}} $ | deficiency coefficient
    $ R $ | cylindrical shell radius
    $ L $ | cylindrical shell height
    $ {\bar Z _B} $ | Batdorf parameter
    Loss-function weight settings for each numerical example

    Numerical example | $ \alpha $ | $ {{\boldsymbol{w}}_{P}} $ | $ {{\boldsymbol{w}}_{B}} $
    plate, SSSS | −4 | [1, 1] | [$10^4$, $10^4$, $10^4$, $10^4$]
    plate, SCSC | −4 | [1, 1] | [$10^4$, $10^3$, $10^4$, $10^3$]
    plate, SCSF | −4 | [1, 1] | [$10^4$, $10^3$, $10^4$, $10^2$]
    shell | −4 | [$10^4$, $10^{-2}$] | [$10^4$, $10^2$, $10^2$, $10^2$, $10^5$]
    Variance of lg(Loss) under different hyperparameter setups (10 random seeds each)

    Hyperparameter setup | Variance of lg(Loss)
    $L = 3,\;{N}_{u} = 60$, Adam, $\left[{w}_{P},{w}_{B}\right] = \left[1,1\right]$ | 1.9006
    $L = 3,\;{N}_{u} = 60$, Adam, $\left[{w}_{P},{w}_{B}\right] = {\left[{w}_{P},{w}_{B}\right]}_{{\rm{epoch}}}$ | 0.8806
    $L = 3,\;{N}_{u} = 40$, Adam | 0.9310
    $L = 3,\;{N}_{u} = 50$, Adam | 0.7267
    $L = 3,\;{N}_{u} = 70$, Adam | 0.3725
    $L = 3,\;{N}_{u} = 80$, Adam | 0.8830
    $L = 4,\;{N}_{u} = 60$, Adam | 0.5907
    $L = 5,\;{N}_{u} = 60$, Adam | 0.9745
    $L = 3,\;{N}_{u} = 60$, Adagrad | 0.9333
    $L = 3,\;{N}_{u} = 60$, Adadelta | 0.7941
    $L = 3,\;{N}_{u} = 60$, RMSprop | 0.4909
    $L = 3,\;{N}_{u} = 60$, L-BFGS | 1.0419
    $L = 3,\;{N}_{u} = 60$, LAAF = relu | 1.4518
    $L = 3,\;{N}_{u} = 60$, LAAF = tanh | 0.6529
Publication history
  • Received: 2023-06-29
  • Accepted: 2023-09-22
  • Published online: 2023-09-23
