diff --git a/docs/modules/ROOT/images/fin.png b/docs/modules/ROOT/images/fin.png
new file mode 100644
index 0000000..cd7e42d
Binary files /dev/null and b/docs/modules/ROOT/images/fin.png differ
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 3b71977..4fad702 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -1,6 +1,10 @@
 * xref:index.adoc[Introduction]
 * xref:quickstart.adoc[]
 * xref:overview.adoc[]
+* Homework
+** xref:homework/problem-set-1.adoc[Homework 1]
+** xref:homework/problem-set-2.adoc[Homework 2]
+** xref:homework/problem-set-3.adoc[Homework 3]
 * Setting up the development Environment
 ** xref:env/cmake.adoc[]
 ** xref:env/antora.adoc[]
diff --git a/docs/modules/ROOT/pages/homework/problem-set-1.adoc b/docs/modules/ROOT/pages/homework/problem-set-1.adoc
new file mode 100644
index 0000000..f42bdd0
--- /dev/null
+++ b/docs/modules/ROOT/pages/homework/problem-set-1.adoc
@@ -0,0 +1,293 @@
= Problem Set 1: RB for Linear Affine Elliptic Problems
C. Prud’homme
2023-11-27
:page-jupyter: true
:stem: latexmath

We consider the problem of designing a thermal fin to effectively remove heat from a surface. The two-dimensional fin, shown in <<fig:1>>, consists of a vertical central "`post`" and four horizontal "`subfins`"; the fin conducts heat from a prescribed uniform flux "`source`" at the root, latexmath:[\Gamma_{\mathrm{root}}], through the large-surface-area subfins to the surrounding flowing air. The fin is characterized by a five-component parameter vector, or "`input,`" latexmath:[\mu = (\mu_1, \mu_2, \ldots, \mu_5)], where latexmath:[\mu_i = k^i, i = 1, \ldots, 4], and latexmath:[\mu_5 = \mathrm{Bi}]; latexmath:[\mu] may take on any value in a specified design set latexmath:[D \subset \mathbb{R}^5].

.Thermal fin
[#fig:1]
image::fin.png[image]

Here latexmath:[k^i] is the thermal conductivity of the ith subfin (normalized relative to the post conductivity latexmath:[k^0 \equiv 1]); and latexmath:[\mathrm{Bi}] is the Biot number, a nondimensional heat transfer coefficient reflecting convective transport to the air at the fin surfaces (larger latexmath:[\mathrm{Bi}] means better heat transfer). For example, suppose we choose a thermal fin with latexmath:[k^1 = 0.4, k^2 = 0.6, k^3 = 0.8, k^4 = 1.2], and latexmath:[\mathrm{Bi} = 0.1]; this particular configuration corresponds to latexmath:[\mu = \{0.4, 0.6, 0.8, 1.2, 0.1\}], a single point in the set of all possible configurations latexmath:[D] (the parameter or design set). The post is of width unity and height four; the subfins are of fixed thickness latexmath:[t = 0.25] and length latexmath:[L = 2.5].

We are interested in the design of this thermal fin, and we thus need to look at certain outputs or cost functionals of the temperature as a function of latexmath:[\mu]. We choose for our output latexmath:[T_{\mathrm{root}}], the average steady-state temperature of the fin root normalized by the prescribed heat flux into the fin root. The particular output chosen relates directly to the cooling efficiency of the fin — lower values of latexmath:[T_{\mathrm{root}}] imply better thermal performance. The steady-state temperature distribution within the fin, latexmath:[u(\mu)], is governed by the elliptic partial differential equation

[latexmath#eq:1]
++++
-k^i \Delta u^i = 0 \text{ in } \Omega^i, \quad i = 0, \ldots, 4,
++++

where latexmath:[\Delta] is the Laplacian operator, and latexmath:[u^i] denotes the restriction of latexmath:[u] to latexmath:[\Omega^i].
Here latexmath:[\Omega^i] is the region of the fin with conductivity latexmath:[k^i], latexmath:[i = 0,\ldots, 4]: latexmath:[\Omega^0] is thus the central post, and latexmath:[\Omega^i, i = 1,\ldots, 4], corresponds to the four subfins. The entire fin domain is denoted latexmath:[\Omega] (latexmath:[\bar{\Omega} = \cup_{i=0}^4 \bar{\Omega}^i]); the boundary of latexmath:[\Omega] is denoted latexmath:[\Gamma]. We must also ensure continuity of temperature and heat flux at the conductivity-discontinuity interfaces latexmath:[\Gamma^i_{int} \equiv \partial\Omega^0 \cap \partial\Omega^i, i = 1,\ldots, 4], where latexmath:[\partial\Omega^i] denotes the boundary of latexmath:[\Omega^i]; we have on latexmath:[\Gamma^i_{int}, i = 1,\ldots, 4]:

[latexmath]
++++
\begin{aligned}
 u^0 &= u^i \\
 -(\nabla u^0 \cdot n^i) &= -k^i (\nabla u^i \cdot n^i)
\end{aligned}
++++

Here latexmath:[n^i] is the outward normal on latexmath:[\partial\Omega^i]. Finally, we introduce a Neumann flux boundary condition on the fin root,

[latexmath]
++++
-(\nabla u^0 \cdot n^0) = -1 \text{ on } \Gamma_{\mathrm{root}},
++++

which models the heat source; and a Robin boundary condition,

[latexmath]
++++
-k^i (\nabla u^i \cdot n^i) = \mathrm{Bi}\, u^i \text{ on } \Gamma^i_{ext}, \quad i = 0,\ldots, 4,
++++

which models the convective heat losses. Here latexmath:[\Gamma^i_{ext}] is that part of the boundary of latexmath:[\Omega^i] exposed to the flowing fluid; note that latexmath:[\cup_{i=0}^4 \Gamma^i_{ext} = \Gamma\backslash\Gamma_{\mathrm{root}}]. The average temperature at the root, latexmath:[T_{\mathrm{root}}(\mu)], can then be expressed as latexmath:[\ell^O(u(\mu))], where

[latexmath]
++++
\ell^O(v) = \int_{\Gamma_{\mathrm{root}}} v
++++

(recall latexmath:[\Gamma_{\mathrm{root}}] is of length unity). Note that latexmath:[\ell(v) = \ell^O(v)] for this problem.

[[sec:part-1-finite]]
== Part 1 - Finite Element Approximation

We saw in class that the reduced basis approximation is based on a "`truth`" finite element approximation of the exact (or analytic) problem statement. To begin, we have to show that the exact problem described above does indeed satisfy the affine parameter dependence and thus fits into the framework shown in class.

{empty}a) Show that latexmath:[u^e(\mu) \in X^e \equiv H^1(\Omega)] satisfies the weak form

[latexmath]
++++
a(u^e(\mu), v; \mu) = \ell(v), \quad \forall v \in X^e,
++++

with

[latexmath]
++++
\begin{aligned}
a(w, v; \mu) &= \sum_{i=0}^4 k^i \int_{\Omega^i} \nabla w \cdot \nabla v \, dA
+ \mathrm{Bi} \int_{\Gamma\backslash\Gamma_{\mathrm{root}}} w v \, dS,\\
\ell(v) &= \int_{\Gamma_{\mathrm{root}}} v \, dS.
\end{aligned}
++++

{empty}b) Show that latexmath:[u^e(\mu)] is the argument that minimizes

[latexmath#eq:2]
++++
J(w) = \frac{1}{2}\sum_{i=0}^4 k^i \int_{\Omega^i} \nabla w \cdot
\nabla w \, dA + \frac{\mathrm{Bi}}{2}
\int_{\Gamma\backslash\Gamma_{\mathrm{root}}} w^2 \, dS -
\int_{\Gamma_{\mathrm{root}}} w \, dS
++++

over all functions latexmath:[w] in latexmath:[X^e].
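As a hint for b) (one possible route, not a prescribed proof), note that latexmath:[J(w) = \frac{1}{2} a(w, w; \mu) - \ell(w)], so that by symmetry of latexmath:[a], for any latexmath:[v \in X^e] and latexmath:[\varepsilon \in \mathbb{R}],

[latexmath]
++++
J(w + \varepsilon v) = J(w) + \varepsilon \left( a(w, v; \mu) - \ell(v) \right) + \frac{\varepsilon^2}{2} a(v, v; \mu).
++++

Setting the first-order term to zero for all latexmath:[v] recovers the weak form of a), while coercivity of latexmath:[a] shows that the stationary point is indeed a strict minimum.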
We now consider the linear finite element space

[latexmath]
++++
X^{\mathcal{N}} = \{v \in H^1(\Omega) \,|\, v|_{T_h} \in \mathbb{P}^1(T_h), \forall T_h \in \mathcal{T}_h \},
++++

and look for latexmath:[u^{\mathcal{N}}(\mu) \in X^{\mathcal{N}}] such that

[latexmath]
++++
a(u^{\mathcal{N}}(\mu), v; \mu) = \ell(v), \quad \forall v \in X^{\mathcal{N}};
++++

our output of interest is then given by

[latexmath]
++++
T_{\mathrm{root}}^\mathcal{N}(\mu) = \ell^O(u^{\mathcal{N}}(\mu)).
++++

Applying our usual nodal basis, we arrive at the matrix equations

[latexmath]
++++
\begin{aligned}
A^{\mathcal{N}}(\mu) u^{\mathcal{N}}(\mu) &= F^{\mathcal{N}},\\
T_{\mathrm{root}}^\mathcal{N}(\mu) &= (L^{\mathcal{N}})^T u^{\mathcal{N}}(\mu),
\end{aligned}
++++

where latexmath:[A^{\mathcal{N}} \in \mathbb{R}^{\mathcal{N}\times\mathcal{N}}], latexmath:[u^{\mathcal{N}} \in \mathbb{R}^{\mathcal{N}}], latexmath:[F^{\mathcal{N}} \in \mathbb{R}^{\mathcal{N}}], and latexmath:[L^{\mathcal{N}} \in \mathbb{R}^{\mathcal{N}}]; here latexmath:[\mathcal{N}] is the dimension of the finite element space latexmath:[X^{\mathcal{N}}], which (given our natural boundary conditions) is equal to the number of nodes in latexmath:[\mathcal{T}_h].

[[sec:part-2-reduced]]
== Part 2 - Reduced-Basis Approximation

In general, the dimension of the finite element space, latexmath:[\mathop{\mathrm{dim}} X = \mathcal{N}], will be quite large (in particular if we were to treat the more realistic three-dimensional fin problem), and thus the solution of latexmath:[A^{\mathcal{N}} u^{\mathcal{N}}(\mu) = F^\mathcal{N}] can be quite expensive. We thus investigate the reduced-basis methods that allow us to accurately and very rapidly predict latexmath:[T_{\mathrm{root}}(\mu)] in the limit of many evaluations — that is, at many different values of latexmath:[\mu] — which is precisely the "`limit of interest`" in design and optimization studies. To derive the reduced-basis approximation we shall exploit the energy principle,

[latexmath]
++++
u(\mu) = \mathop{\mathrm{argmin}}_{w \in X} J(w),
++++

where latexmath:[J(w)] is given by <<eq:2>>.

To begin, we introduce a sample in parameter space, latexmath:[S_N = \{\mu_1, \mu_2,\ldots, \mu_N\}] with latexmath:[N \ll \mathcal{N}]. Each latexmath:[\mu_i, i = 1,\ldots, N], belongs to the parameter set latexmath:[\mathcal{D}]. For our parameter set we choose latexmath:[\mathcal{D} = [0.1, 10.0\]^4 \times [0.01, 1.0\]], that is, latexmath:[0.1 \leq k^i \leq 10.0, i = 1,\ldots, 4] for the conductivities, and latexmath:[0.01 \leq \mathrm{Bi} \leq 1.0] for the Biot number. We then introduce the reduced-basis space as

[latexmath]
++++
W_N = \mathop{\mathrm{span}}\{u^\mathcal{N}(\mu_1), u^\mathcal{N}(\mu_2),\ldots, u^\mathcal{N}(\mu_N)\},
++++

where latexmath:[u^\mathcal{N}(\mu_i)] is the finite-element solution for latexmath:[\mu = \mu_i].

To simplify the notation, we define latexmath:[\xi^i \in X] as latexmath:[\xi^i = u^\mathcal{N}(\mu_i), i = 1,\ldots, N]; we can then write latexmath:[W_N = \mathop{\mathrm{span}}\{\xi^i, i = 1,\ldots, N\}].
Recall that latexmath:[W_N = \mathop{\mathrm{span}}\{\xi^i, i = 1,\ldots, N\}] means that latexmath:[W_N] consists of all functions in latexmath:[X] that can be expressed as a linear combination of the latexmath:[\xi^i]; that is, any member latexmath:[v_N] of latexmath:[W_N] can be represented as

[latexmath]
++++
v_N = \sum_{j=1}^N \beta^j \xi^j,
++++

for some unique choice of latexmath:[\beta^j \in \mathbb{R}, j = 1,\ldots, N]. (We implicitly assume that the latexmath:[\xi^i, i = 1,\ldots, N], are linearly independent; it follows that latexmath:[W_N] is an latexmath:[N]-dimensional subspace of latexmath:[X^\mathcal{N}].) In the reduced-basis approach we look for an approximation latexmath:[u_N(\mu)] to latexmath:[u^\mathcal{N}(\mu)] (which for our purposes here we presume is arbitrarily close to latexmath:[u^e(\mu)]) in latexmath:[W_N]; in particular, we express latexmath:[u_N(\mu)] as

[latexmath]
++++
u_N(\mu) = \sum_{j=1}^N u^j_N \xi^j;
++++

we denote by latexmath:[u_N(\mu) \in \mathbb{R}^N] the coefficient vector latexmath:[(u^1_N,\ldots, u^N_N)^T]. The premise — or hope — is that we should be able to accurately represent the solution at some new point in parameter space, latexmath:[\mu], as an appropriate linear combination of solutions previously computed at a small number of points in parameter space (the latexmath:[\mu_i, i = 1,\ldots, N]). But how do we find this appropriate linear combination? And how good is it? And how do we compute our approximation efficiently? The energy principle is crucial here (though more generally the weak form would suffice). To wit, we apply the classical Rayleigh-Ritz procedure to define

[latexmath#eq:3]
++++
u_N(\mu) = \mathop{\mathrm{argmin}}_{w_N \in W_N} J(w_N);
++++

alternatively we can apply Galerkin projection to obtain the equivalent statement

[latexmath#eq:4]
++++
a(u_N(\mu), v; \mu) = \ell(v), \quad \forall v \in W_N.
++++

The output can then be calculated from

[latexmath#eq:5]
++++
{T_{\mathrm{root}}}_N(\mu) = \ell^O(u_N(\mu)).
++++

We now study this approximation in more detail.

{empty}a) Prove that, in the energy norm latexmath:[|||v||| \equiv (a(v, v; \mu))^{1/2}],

[latexmath]
++++
|||u(\mu) - u_N(\mu)||| \leq |||u(\mu) - w_N|||, \quad \forall w_N \in W_N.
++++

This inequality indicates that, out of all the possible choices of latexmath:[w_N] in the space latexmath:[W_N], the reduced basis method defined above will choose the "`best one`" (in the energy norm). Equivalently, we can say that even if we knew the solution latexmath:[u(\mu)], we would not be able to find a better approximation to latexmath:[u(\mu)] in latexmath:[W_N] — in the energy norm — than latexmath:[u_N(\mu)].

{empty}b) Prove that

[latexmath]
++++
T_{\mathrm{root}}(\mu) - {T_{\mathrm{root}}}_N(\mu) = |||u(\mu) - u_N(\mu)|||^2.
++++

{empty}c) Show that latexmath:[u_N(\mu)] as defined in <<eq:3>>-<<eq:5>> satisfies a system of latexmath:[N] linear equations in latexmath:[N] unknowns,

[latexmath]
++++
A_N(\mu) u_N(\mu) = F_N;
++++

and that

[latexmath]
++++
{T_{\mathrm{root}}}_N(\mu) = L^T_N u_N(\mu).
++++

Give expressions for latexmath:[A_N(\mu) \in \mathbb{R}^{N\times N}] in terms of latexmath:[A^\mathcal{N}(\mu)] and latexmath:[Z], latexmath:[F_N \in \mathbb{R}^N] in terms of latexmath:[F^\mathcal{N}] and latexmath:[Z], and latexmath:[L_N \in \mathbb{R}^N] in terms of latexmath:[L^\mathcal{N}] and latexmath:[Z]; here latexmath:[Z] is an latexmath:[\mathcal{N} \times N] matrix, the latexmath:[j]th column of which is the vector of nodal values of latexmath:[u^\mathcal{N}(\mu^j)].
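As a hint for c): substitute the expansion latexmath:[u_N(\mu) = \sum_{j=1}^N u^j_N \xi^j] into <<eq:4>> and test against each basis function latexmath:[v = \xi^i], which yields

[latexmath]
++++
\sum_{j=1}^N a(\xi^j, \xi^i; \mu)\, u^j_N(\mu) = \ell(\xi^i), \quad i = 1, \ldots, N,
++++

so that latexmath:[(A_N(\mu))_{ij} = a(\xi^j, \xi^i; \mu)] and latexmath:[(F_N)_i = \ell(\xi^i)]; it remains to express these quantities in terms of latexmath:[A^\mathcal{N}(\mu)], latexmath:[F^\mathcal{N}], and latexmath:[Z].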
{empty}d) Show that the bilinear form latexmath:[a(w, v; \mu)] can be decomposed as

[latexmath]
++++
a(w, v; \mu) = \sum_{q=1}^Q \theta^q(\mu) a^q(w, v), \quad \forall w, v \in X, \forall \mu \in \mathcal{D},
++++

for latexmath:[Q = 6], and give expressions for the latexmath:[\theta^q(\mu)] and the latexmath:[a^q(w, v)]. Notice that the latexmath:[a^q(w, v)] do not depend on latexmath:[\mu]; the parameter dependence enters only through the functions latexmath:[\theta^q(\mu), q = 1,\ldots, Q]. Further show that

[latexmath]
++++
A^\mathcal{N}(\mu) = \sum_{q=1}^Q \theta^q(\mu) {A^\mathcal{N}}^q,
++++

and

[latexmath]
++++
A_N(\mu) = \sum_{q=1}^Q \theta^q(\mu) A^q_N.
++++

Give an expression for the latexmath:[{A^\mathcal{N}}^q] in terms of the nodal basis functions; and develop a formula for the latexmath:[A^q_N] in terms of the latexmath:[{A^\mathcal{N}}^q] and latexmath:[Z].

{empty}e) The coercivity and continuity constants of the bilinear form for the continuous problem are denoted by latexmath:[\alpha^e(\mu)] and latexmath:[\gamma^e(\mu)], respectively. We now assume that the basis functions latexmath:[\xi^i, i = 1,\ldots, N], are orthonormalized. Show that the condition number of latexmath:[A_N(\mu)] is then bounded from above by latexmath:[\gamma^e(\mu)/\alpha^e(\mu)].

{empty}f) Treat latexmath:[L] and latexmath:[t] as additional parameters: the geometric transformation must then be taken into account in the affine decomposition procedure.

_acknowledgment:_ This problem set is based on a problem set in the class "`16.920 Numerical Methods for Partial Differential Equations`" at MIT. We would like to thank Prof. A.T. Patera and Debbie Blanchard for providing the necessary material and the permission to use it.
diff --git a/docs/modules/ROOT/pages/homework/problem-set-2.adoc b/docs/modules/ROOT/pages/homework/problem-set-2.adoc
new file mode 100644
index 0000000..b0bc6b6
--- /dev/null
+++ b/docs/modules/ROOT/pages/homework/problem-set-2.adoc
@@ -0,0 +1,106 @@
= Problem Set 2: RB for Linear Affine Elliptic Problems
Christophe Prud’homme
:page-jupyter: true
:page-plotly: true
:stem: latexmath
:eqnums: all

== Problem Statement — Design of a Thermal Fin

We consider the problem of designing the thermal fin described in Problem Set 1. In PS1 we looked at some theoretical issues (weak formulation and optimization formulation, convergence of the reduced basis approximation) and derived the necessary reduced basis quantities, i.e., expressions for latexmath:[A_N(\mu)], latexmath:[F_N], and latexmath:[L_N]. This problem set is devoted to implementing the reduced basis approximation and solving a simple design problem.

=== Finite element approximation

[source,python]
----
import feelpp
----

=== Part 1 – Reduced Basis Approximation

The point of departure for the reduced basis approximation is a high-dimensional finite element "`truth`" discretization. In the offline stage we require the finite element solution to build the reduced basis, and we thus also need the FE matrices. In this problem set we skip the FE assembly step and provide all of the necessary data for use in Python (see Appendix 1).
We saw in class that the reduced basis solution latexmath:[u_N(\mu) \in \mathbb{R}^N] satisfies the latexmath:[N \times N] linear system

[latexmath#eq:1.1]
++++
A_N(\mu) u_N(\mu) = F_N;
++++

and that the output is given by

[latexmath#eq:1.2]
++++
{T_{root}}_N(\mu) = L^T_N u_N(\mu).
++++

We derived expressions for latexmath:[A_N(\mu) \in \mathbb{R}^{N\times N}] in terms of latexmath:[A^\mathcal{N}(\mu)] and latexmath:[Z], latexmath:[F_N \in \mathbb{R}^N] in terms of latexmath:[F^\mathcal{N}] and latexmath:[Z], and latexmath:[L_N \in \mathbb{R}^N] in terms of latexmath:[L^\mathcal{N}] and latexmath:[Z]; here latexmath:[Z] is an latexmath:[\mathcal{N} \times N] matrix, the latexmath:[j]th column of which is the vector of nodal values of latexmath:[u^\mathcal{N}(\mu_j)]. Finally, it follows from affine parameter dependence that latexmath:[A_N(\mu)] can be expressed as

[latexmath#eq:1.3]
++++
A_N(\mu) = \sum_{q=1}^Q \Theta^q(\mu) A^q_N.
++++

The goal is to implement an offline/online version of the reduced-basis method following the computational decomposition indicated below.

* Offline
. Choose latexmath:[N].
. Choose the sample latexmath:[S_N].
. Construct latexmath:[Z].
. Construct latexmath:[A^q_N, q = 1,\ldots,Q], latexmath:[F_N], and latexmath:[L_N].
* Online
. Form latexmath:[A_N(\mu)] from <<eq:1.3>>.
. Solve latexmath:[A_N(\mu) u_N(\mu) = F_N].
. Evaluate the output latexmath:[{T_{root}}_N(\mu)] from <<eq:1.2>>.

The idea is that the offline stage is done only once, generating a small datafile with the latexmath:[A^q_N, q = 1,\ldots,Q], latexmath:[F_N], and latexmath:[L_N]; the online stage then accesses this datafile to provide real-time response to new latexmath:[\mu] queries. For the required offline finite element calculations in this and the following questions, you should first use the coarse triangulation latexmath:[\mathcal{T}_{h,\mathrm{coarse}}]. A minimal code sketch of this offline/online split is given below.
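The sketch assumes the truth quantities have already been assembled and loaded; the names `Aq_fe` (a list of the latexmath:[Q] truth matrices latexmath:[{A^\mathcal{N}}^q]), `F_fe`, `L_fe`, and the basis matrix `Z` are hypothetical placeholders for the data provided with this problem set.

[source,python]
----
import numpy as np

# ---- Offline: done once; the only part that touches the FE dimension N_fe
def offline_project(Aq_fe, F_fe, L_fe, Z):
    """Project the truth quantities onto the reduced basis Z (N_fe x N)."""
    AqN = [Z.T @ (Aq @ Z) for Aq in Aq_fe]   # Q small N x N matrices A^q_N
    FN = Z.T @ F_fe                          # reduced load vector F_N
    LN = Z.T @ L_fe                          # reduced output vector L_N
    return AqN, FN, LN

# ---- Online: cost depends only on N and Q, never on the FE dimension
def online_solve(AqN, FN, LN, theta):
    """Form A_N(mu) = sum_q theta_q(mu) A^q_N, solve, and evaluate the output."""
    AN = sum(t * Aq for t, Aq in zip(theta, AqN))
    uN = np.linalg.solve(AN, FN)
    return float(LN @ uN), uN
----

Counting the floating-point operations in `online_solve` (forming latexmath:[A_N(\mu)] costs latexmath:[O(QN^2)], the dense solve latexmath:[O(N^3)], the output evaluation latexmath:[O(N)]) is one way to approach question a) below.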
[loweralpha]
. Show that the operation count for the on-line stage of your code is independent of latexmath:[\mathcal{N}]. In particular, show that the operation count (number of floating-point operations) for the on-line stage, for each new latexmath:[\mu] of interest, can be expressed as
+
[latexmath#eq:4]
++++
c_1 N^{\gamma_1} + c_2 N^{\gamma_2} + c_3 N^{\gamma_3},
++++
+
for latexmath:[c_1, c_2, c_3, \gamma_1, \gamma_2,] and latexmath:[\gamma_3] independent of latexmath:[N]. Give values for the constants latexmath:[c_1, c_2, c_3, \gamma_1, \gamma_2,] and latexmath:[\gamma_3].

. We first consider a one parameter (latexmath:[P = 1]) problem. To this end, we keep the Biot number fixed at latexmath:[Bi = 0.1] and assume that the conductivities of all fins are equivalent, i.e., latexmath:[k_1 = k_2 = k_3 = k_4], but are allowed to vary between latexmath:[0.1] and latexmath:[10] – we thus have latexmath:[\mu \in D = [0.1, 10\]]. The sample set latexmath:[S_N] for latexmath:[N_{max} = 8] is given in the datafile `+RB_sample.sample1+`.

. Generate the reduced basis "`matrix`" latexmath:[Z] and all necessary reduced basis quantities. You have two options: you can use the solution "`snapshots`" directly in latexmath:[Z], or perform a Gram-Schmidt orthonormalization to construct latexmath:[Z] (note that you require the latexmath:[X]-inner product to perform Gram-Schmidt; here, we use latexmath:[(\cdot, \cdot)_X = a(\cdot, \cdot; \mu)], where latexmath:[\mu = 1] – all conductivities are latexmath:[1] and the Biot number is latexmath:[0.1]). Calculate the condition number of latexmath:[A_N(\mu)] for latexmath:[N = 8] and for latexmath:[\mu = 1] and latexmath:[\mu = 10] with and without Gram-Schmidt orthonormalization. What do you observe? Solve the reduced basis approximation (where you use the snapshots directly in latexmath:[Z]) for latexmath:[\mu_1 = 0.1] and latexmath:[N = 8]. What is latexmath:[u_N(\mu_1)]? What do you expect latexmath:[u_N(\mu_2)] to look like for latexmath:[\mu_2 = 10.0]? What about latexmath:[\mu_3 = 1.0975]? Solve the Gram-Schmidt orthonormalized reduced basis approximation for latexmath:[\mu_1 = 0.1] and latexmath:[\mu_2 = 10] for latexmath:[N = 8]. What do you observe? Can you justify the result? For the remaining questions you should use the Gram-Schmidt orthonormalized reduced basis approximation (a sketch of the latexmath:[X]-orthonormalization follows this item).
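+
A minimal NumPy sketch of the Gram-Schmidt step with respect to the latexmath:[X]-inner product; the snapshot matrix `S` (snapshots as columns) and the sparse SPD matrix `X` are hypothetical names for the provided data:
+
[source,python]
----
import numpy as np

def gram_schmidt_X(S, X):
    """Orthonormalize the columns of S in the (v, w)_X = v @ (X @ w) inner product."""
    Z = np.array(S, dtype=float, copy=True)
    for n in range(Z.shape[1]):
        for m in range(n):                                 # subtract projections
            Z[:, n] -= (Z[:, m] @ (X @ Z[:, n])) * Z[:, m]
        Z[:, n] /= np.sqrt(Z[:, n] @ (X @ Z[:, n]))        # X-normalize
    return Z
----
+
The condition-number comparison can then use `np.linalg.cond` applied to latexmath:[A_N(\mu)] assembled from the orthonormalized basis and from the raw snapshot basis.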
.. Verify that, for latexmath:[\mu = 1.5] (recall that the Biot number is still fixed at latexmath:[0.1]) and latexmath:[N = 8], the value of the output is latexmath:[{T_{root}}_N(\mu) = 1.53107].
.. We next introduce a regular test sample, latexmath:[\Xi_{test} \subset D], of size latexmath:[n_{test} = 100] (in Python you can simply use `+linspace(0.1, 10, 100)+` to generate latexmath:[\Xi_{test}]). Plot the convergence of the maximum relative error in the energy norm latexmath:[\max_{\mu \in\Xi_{test}} |||u(\mu) - u_N(\mu)|||_\mu / |||u(\mu)|||_\mu] and the maximum relative output error latexmath:[\max_{\mu \in\Xi_{test}} |{T_{root}}(\mu) - {T_{root}}_N(\mu)| / {T_{root}}(\mu)] as a function of latexmath:[N] (use the Python command `+semilogy+` for plotting).
.. Compare the average CPU time over the test sample required to solve the reduced basis online stage with direct solution of the FE approximation as a function of latexmath:[N].
.. What value of latexmath:[N] do you require to achieve a relative accuracy in the output of 1%? What savings in terms of CPU time does this correspond to?
.. Solve problems b) 3. to 5. using the medium and fine FE triangulations. Is the dependence on latexmath:[\mathcal{N}] as you would anticipate?

. We now consider another one parameter latexmath:[(P = 1)] problem. This time, we assume that the conductivities are fixed at latexmath:[\{k_1, k_2, k_3, k_4\} = \{0.4, 0.6, 0.8, 1.2\}], and that only the Biot number, latexmath:[Bi], is allowed to vary from latexmath:[0.01] to latexmath:[1]. The sample set latexmath:[S_N] for latexmath:[N_{max} = 11] is given in the datafile `+RB_sample.sample2+`. Generate an orthonormal latexmath:[Z] from the sample set using the medium triangulation.

.. Verify that, for latexmath:[\mu_0 = \{0.4, 0.6, 0.8, 1.2, 0.15\}], i.e. latexmath:[Bi = 0.15], the value of the output is latexmath:[{T_{root}}_N(\mu_0) = 1.51561].
.. We next introduce a regular test sample, latexmath:[\Xi_{test} \subset D], of size latexmath:[n_{test} = 100] (in Python you can simply use `+linspace(0.01, 1, 100)+` to generate latexmath:[\Xi_{test}]). Plot the convergence of the maximum relative error in the energy norm latexmath:[\max_{\mu \in\Xi_{test}} |||u(\mu) - u_N(\mu)|||_\mu / |||u(\mu)|||_\mu] and the maximum relative output error latexmath:[\max_{\mu \in\Xi_{test}} |{T_{root}}(\mu) - {T_{root}}_N(\mu)| / {T_{root}}(\mu)] as a function of latexmath:[N] (use the Python command `+semilogy+` for plotting).
.. The Biot number is directly related to the cooling method; higher cooling rates (higher latexmath:[Bi]) imply lower (better) latexmath:[{T_{root}}] but also higher (worse) initial and operational costs. We can thus define (say) a total cost function as
+
[latexmath#eq:CBi]
++++
C(Bi) = Bi + {T_{root}}(Bi),
++++
+
minimization of which yields an optimal solution. Apply your (online) reduced-basis approximation for latexmath:[{T_{root}}_N] (that is, replace latexmath:[{T_{root}}(Bi)] in <<eq:CBi>> with latexmath:[{T_{root}}_N(Bi)]) to find the optimal latexmath:[Bi]. Any (simple) optimization procedure suffices for the minimization.

. We consider now a two parameter latexmath:[(P = 2)] problem where the conductivities are assumed to be equivalent, i.e., latexmath:[k_1 = k_2 = k_3 = k_4], but are allowed to vary between latexmath:[0.1] and latexmath:[10]; and the Biot number, latexmath:[Bi], is allowed to vary from latexmath:[0.01] to latexmath:[1]. The sample set latexmath:[S_N] for latexmath:[N_{max} = 46] is given in the datafile `+RB_sample.sample3+`. Generate an orthonormal latexmath:[Z] from the sample set using the coarse triangulation.

. We next introduce a regular grid, latexmath:[\Xi_{test} \subset D], of size latexmath:[n_{test} = 400] (a regular latexmath:[20 \times 20] grid). Plot the convergence of the maximum relative error in the energy norm latexmath:[\max_{\mu \in\Xi_{test}} |||u(\mu) - u_N(\mu)|||_\mu / |||u(\mu)|||_\mu] and the maximum relative output error latexmath:[\max_{\mu \in \Xi_{test}} |{T_{root}}(\mu) - {T_{root}}_N(\mu)| / {T_{root}}(\mu)] as a function of latexmath:[N].

. We now consider the POD method, which we wish to compare with the greedy approximation. To this end, we sample the parameter space (latexmath:[P=2]) log-randomly and take latexmath:[n_{\mathrm{train}} = 100] samples. Build the POD approximation using these samples as the training set and compare the results with the greedy approximation. Compute the RIC (relative information content) and the dimension latexmath:[N] of the POD space such that the RIC reaches latexmath:[99\%] of the total energy (a sketch of this construction follows this item). Plot the POD and greedy convergence of the maximum relative error in the energy norm latexmath:[\max_{\mu \in\Xi_{test}} |||u(\mu) - u_N(\mu)|||_\mu / |||u(\mu)|||_\mu] and the maximum relative output error latexmath:[\max_{\mu \in \Xi_{test}} |{T_{root}}(\mu) - {T_{root}}_N(\mu)| / {T_{root}}(\mu)] as a function of latexmath:[N].
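+
A minimal sketch of the POD construction via the snapshot correlation matrix; the training snapshot matrix `S` (snapshots as columns) and the latexmath:[X]-inner-product matrix `X` are hypothetical names for your own data:
+
[source,python]
----
import numpy as np

def pod(S, X, ric_target=0.99):
    """POD basis from the correlation matrix C_mn = (u_m, u_n)_X; N is chosen
    as the smallest dimension whose RIC reaches ric_target."""
    C = S.T @ (X @ S)                              # n_train x n_train
    lam, V = np.linalg.eigh(C)                     # eigenvalues, ascending order
    lam, V = lam[::-1], V[:, ::-1]                 # reorder: largest first
    ric = np.cumsum(lam) / np.sum(lam)             # relative information content
    N = int(np.searchsorted(ric, ric_target)) + 1
    Z = S @ V[:, :N] / np.sqrt(lam[:N])            # X-orthonormal POD basis
    return Z, N, ric
----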
== Appendix 1 – Finite Element Method Implementation

We use Feel++ to implement the finite element matrices.
diff --git a/docs/modules/ROOT/pages/homework/problem-set-3.adoc b/docs/modules/ROOT/pages/homework/problem-set-3.adoc
new file mode 100644
index 0000000..e0caa5b
--- /dev/null
+++ b/docs/modules/ROOT/pages/homework/problem-set-3.adoc
@@ -0,0 +1,145 @@
= Problem Set 3: A Posteriori Error Bounds, Greedy Sampling Procedure
Christophe Prud’homme
:page-jupyter: true
:page-plotly: true
:stem: latexmath

We consider again the problem of designing the thermal fin of Problem Sets 1 and 2. Given the reduced basis approximation implemented in PS2, we turn to implementing the associated a posteriori error estimation procedures developed in the lecture. The second half of this problem set is devoted to implementing the greedy sampling procedure. We will consider the following two cases:

* Case I (latexmath:[P=1]): We keep the Biot number fixed at latexmath:[Bi = 0.1] and assume that the conductivities of all fins are equivalent, i.e., latexmath:[k = k_1 = k_2 = k_3 = k_4], but are allowed to vary between 0.1 and 10 — we thus have latexmath:[\mu \in D = [0.1, 10\]]. For this latexmath:[P = 1] case we define the latexmath:[X]-inner product latexmath:[(\cdot, \cdot)_X = a(\cdot, \cdot; \bar{\mu})], where latexmath:[\bar{\mu} = 1].
* Case II (latexmath:[P = 2]): We again assume that the conductivities of all fins are equivalent, i.e., latexmath:[k = k_1 = k_2 = k_3 = k_4], but are allowed to vary between 0.1 and 10; furthermore, we assume that the Biot number satisfies latexmath:[0.01 \leq Bi \leq 1]. We thus have latexmath:[\mu = (k, Bi) \in \mathcal{D} = [0.1, 10\] \times [0.01, 1\]]. For this latexmath:[P = 2] case we define the latexmath:[X]-inner product latexmath:[(\cdot, \cdot)_X = a(\cdot, \cdot; \bar{\mu})], where latexmath:[\bar{\mu} = (1, 0.1)] (the two inner products are in fact the same since latexmath:[Bi = 0.1] here).

We also define the parameter grids latexmath:[G^{\mathrm{lin}}_{[\mu_{min}, \mu_{max}; 10\]}] and latexmath:[G^{\mathrm{ln}}_{[\mu_{min}, \mu_{max}; 10\]}]. The former grid is equi-spaced in latexmath:[\mu], the latter grid is equi-spaced in latexmath:[\ln(\mu)] — often advantageous within the reduced basis context. More generally, the "`log`" spacing represents equal relative increments, and thus represents better coverage for parameters that vary over a large range. For the latexmath:[P = 2] case we can then define tensor grids over latexmath:[\mathcal{D}], latexmath:[\Xi^{\mathrm{log}}_M \subset \mathcal{D} \subset \mathbb{R}^2], as

[latexmath]
++++
\Xi^{log}_M = G^{log}_{[\mu_{min}, \mu_{max}; M]} \times G^{log}_{[\mu_{min}, \mu_{max}; M]};
++++

note latexmath:[\Xi^{log}_M] contains latexmath:[M^2] points; a similar definition applies to latexmath:[\Xi^{lin}_M]. We also define a particular test grid (biased neither towards "`log`" nor "`lin`")

[latexmath]
++++
\Xi^{test}_M = \Xi^{lin}_M \cup \Xi^{log}_M;
++++

note latexmath:[\Xi^{test}_M] contains latexmath:[2M^2] points.

[[sec:1]]
== Part 1 - Coercivity Lower Bound and X-Norm Error Bound

We first consider the construction of the lower bound for the coercivity constant.

=== Q1.

Since our problem is parametrically coercive, the simple latexmath:[\min\theta]-approach suffices for the construction of the coercivity lower bound, latexmath:[\alpha_{LB}(\mu)]. However, we have to slightly adapt the lower bound to Cases I and II.

[loweralpha]
. Derive an explicit expression for latexmath:[\alpha_{LB}(\mu)] for Case I and Case II.
. What is the largest effectivity for the energy norm error bound and the output error bound we should anticipate for Case I and Case II?

=== Q2.

Prove the bounding properties of the latexmath:[X]-norm error bound, i.e., the effectivity latexmath:[\eta_N(\mu)] satisfies

[latexmath]
++++
1 \leq \eta_N(\mu) \leq \frac{\gamma_{UB}(\mu)}{\alpha_{LB}(\mu)}, \quad \forall \mu \in \mathcal{D}.
++++
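For reference in Q1, a minimal sketch of the generic latexmath:[\min\theta] lower bound from the lecture, assuming latexmath:[\theta^q(\mu) > 0] and positive-semidefinite latexmath:[a^q]; the case-specific expressions asked for above still have to be derived by hand:

[source,python]
----
import numpy as np

def alpha_LB(theta_mu, theta_mubar, alpha_mubar=1.0):
    """Generic min-theta coercivity lower bound:
    alpha_LB(mu) = alpha(mu_bar) * min_q theta^q(mu) / theta^q(mu_bar).
    With (.,.)_X = a(.,.;mu_bar) we have alpha(mu_bar) = 1."""
    t = np.asarray(theta_mu, dtype=float)
    tb = np.asarray(theta_mubar, dtype=float)
    return alpha_mubar * np.min(t / tb)
----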
== Part 2 - A Posteriori Error Estimation

Given the coercivity lower bound, we can now turn to implementing the a posteriori error bounds. Note that, in principle, there is an online-inefficient and an online-efficient way to evaluate these error bounds. We first consider the former: From the lecture we know that the energy norm a posteriori error bound is given by

[latexmath#eq:2.1]
++++
\Delta^{en}_N(\mu) = \frac{\|\hat{e}(\mu)\|_X}{\sqrt{\alpha_{LB}(\mu)}},
++++

where latexmath:[\hat{e}(\mu) \in X] satisfies

[latexmath#eq:2.2]
++++
(\hat{e}(\mu), v)_X = r(v; \mu), \quad \forall v \in X,
++++

and the residual is given by

[latexmath#eq:2.3]
++++
r(v; \mu) = f(v; \mu) - a(u_N(\mu), v; \mu), \quad \forall v \in X.
++++

For any new latexmath:[\mu] and associated reduced basis solution, latexmath:[u_N(\mu)], we can now directly calculate latexmath:[\hat{e}(\mu)] from <<eq:2.2>> and <<eq:2.3>>, evaluate the norm latexmath:[\|\hat{e}(\mu)\|_X], and — given latexmath:[\alpha_{LB}(\mu)] — obtain latexmath:[\Delta^{en}_N(\mu)] from <<eq:2.1>>. Although this approach is online-inefficient because its computational cost scales with latexmath:[\mathcal{N}], it is nevertheless useful as a means to test your offline-online computational decomposition. We will consider Case I and Case II in the sequel. Note that you should only require one code to handle both cases, i.e., Case I is a specialization of Case II obtained by keeping one of the parameters fixed. Also, in your implementation you should try to replace loops by matrix-vector products as much as possible (e.g., try to write the nested loop over latexmath:[N] when summing up the contributions of the latexmath:[\|\hat{e}(\mu)\|_X] norm as a vector-matrix-vector product — the nested loop over latexmath:[Q_a] is less critical).

=== Q3.

We first consider Case I. To answer this question you should use the sample set latexmath:[S_N] provided for PS2 (`+RB_sample.sample1+`), orthonormalize the basis functions, and use the medium grid.

[loweralpha]
. Implement an offline/online version of the a posteriori error bound calculation following the computational decomposition shown in the lecture (a sketch of the decomposition is given after this question). Show that the direct calculation and the offline-online decomposition deliver the same results for the error bound, latexmath:[\Delta^{en}_N(\mu)], for all latexmath:[N] latexmath:[(1 \leq N \leq 8)] and (say) 5 parameter values randomly distributed in latexmath:[\mathcal{D}].
. Calculate latexmath:[\eta^{en}_{\min,N}, \eta^{en}_{\max,N}], and latexmath:[\eta^{en}_{ave,N}], the minimum, maximum, and average effectivity latexmath:[\eta^{en}_N(\mu)] over latexmath:[\Xi^{test} = G^{lin}_{[\mu_{min}, \mu_{max}; 50\]} \cup G^{ln}_{[\mu_{min}, \mu_{max}; 50\]}], respectively (note that latexmath:[\Xi^{test}] is of size 100 since latexmath:[P = 1]). Present the results in a table for all latexmath:[N]. Is the minimum effectivity greater than unity? How does the maximum effectivity compare with your theoretical upper bound for the effectivity? (Note you should exclude from the min/max/mean-operation all points in latexmath:[\Xi^{test}] for which latexmath:[\|u(\mu) - u_N(\mu)\|_X] is less than (say) latexmath:[10^{-11}].)
. Evaluate the effectivity for latexmath:[\mu = 1] for latexmath:[N = 1,\ldots, 8]. What do you observe? Justify your answer.
. Evaluate the exact error, latexmath:[\|u(\mu) - u_N(\mu)\|_X], and the error bound for latexmath:[\mu = 0.1]. What do you observe? Justify your answer.
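A sketch of this offline/online decomposition for the residual norm follows. The names `X` (sparse SPD latexmath:[X]-inner-product matrix), `Aq_fe`, `F_fe`, and `Z` are hypothetical placeholders for your own data; `factorized` performs one sparse factorization that is reused for all Riesz solves.

[source,python]
----
import numpy as np
from scipy.sparse.linalg import factorized

def offline_residual(X, Aq_fe, F_fe, Z):
    """Precompute parameter-independent pieces of ||e_hat(mu)||_X^2."""
    solve_X = factorized(X.tocsc())                       # factor X once
    Cf = solve_X(F_fe)                                    # X^{-1} F
    AZ = [np.asarray(Aq @ Z) for Aq in Aq_fe]             # Q dense N_fe x N blocks
    W = [np.column_stack([solve_X(az[:, n]) for n in range(az.shape[1])])
         for az in AZ]                                    # X^{-1} (A_q Z)
    c = float(F_fe @ Cf)                                  # F^T X^{-1} F
    d = np.array([az.T @ Cf for az in AZ])                # Q x N array
    E = np.array([[AZ[q].T @ W[p] for p in range(len(AZ))]
                  for q in range(len(AZ))])               # Q x Q x N x N array
    return c, d, E

def online_error_bound(c, d, E, theta, uN, alphaLB):
    """Online evaluation of Delta^en_N(mu), O(Q^2 N^2) per mu."""
    theta = np.asarray(theta, dtype=float)
    r2 = c - 2.0 * (theta @ (d @ uN))
    r2 += np.einsum('q,p,i,qpij,j->', theta, theta, uN, E, uN)
    r2 = max(r2, 0.0)                                     # guard against round-off
    return np.sqrt(r2) / np.sqrt(alphaLB)
----

The online cost is independent of latexmath:[\mathcal{N}], which is precisely what Q3a asks you to verify against the direct (online-inefficient) calculation.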
=== Q4.

We consider Case II. To answer this question you should use the sample set latexmath:[S_N] provided for PS2 (`+RB_sample.sample3+`), orthonormalize the basis functions, and use the medium grid.

[loweralpha]
. Implement an offline/online version of the a posteriori error bound calculation following the computational decomposition shown in the lecture. Show that the direct calculation and the offline-online decomposition deliver the same results for the error bound, latexmath:[\Delta^{en}_N(\mu)], for all latexmath:[N] latexmath:[(1 \leq N \leq 46)] and (say) 5 parameter values randomly distributed in latexmath:[\mathcal{D}].
. Calculate latexmath:[\eta^{en}_{\min,N}, \eta^{en}_{\max,N}], and latexmath:[\eta^{en}_{ave,N}], the minimum, maximum, and average effectivity latexmath:[\eta^{en}_N(\mu)] over latexmath:[\Xi^{test}_M = \Xi^{lin}_{M=10} \cup \Xi^{log}_{M=10}], respectively. Present the results in a table for latexmath:[N = 5, 10, 20, 30, 40]. Is the minimum effectivity greater than unity? How does the maximum effectivity compare with your theoretical upper bound for the effectivity? (Note you should again exclude from the min/max/mean-operation all points in latexmath:[\Xi^{test}_M] for which latexmath:[\|u(\mu) - u_N(\mu)\|_X] is less than (say) latexmath:[10^{-11}].)

== Part 3 - Reduced Basis Output Bound

Given the a posteriori error bound from Part 2 we can now directly evaluate the output error bound.

=== Q5.

We consider Case II. To answer this question you should use the sample set latexmath:[S_N] provided for PS2 (`+RB_sample.sample3+`), orthonormalize the basis functions, and use the medium grid.

[loweralpha]
. Extend your code to also calculate the output error bound.
. Calculate latexmath:[\eta^{s}_{\min,N}, \eta^{s}_{\max,N}], and latexmath:[\eta^{s}_{ave,N}], the minimum, maximum, and average effectivity latexmath:[\eta^{s}_N(\mu)] over latexmath:[\Xi^{test}_M = \Xi^{lin}_{M=10} \cup \Xi^{log}_{M=10}], respectively. Present the results in a table for latexmath:[N = 5, 10, 20, 30, 40]. How does the maximum effectivity compare with your theoretical upper bound for the effectivity? (Note you should exclude from the min/max/mean-operation all points in latexmath:[\Xi^{test}_M] for which latexmath:[|s(\mu) - s_N(\mu)|] is less than (say) latexmath:[10^{-11}].)
. What value of latexmath:[N] do you require to achieve a relative accuracy in the output bound of approximately 1%? What is the true error for this value of latexmath:[N]?
. How does the online computational cost to calculate latexmath:[\Delta^s_N(\mu)] compare to the online computational cost to calculate latexmath:[s_N(\mu)] as a function of latexmath:[N] (take the average over the test sample latexmath:[\Xi^{test}_M])?
. How does the computational cost to calculate the truth output latexmath:[s(\mu)] compare to the online computational cost to calculate latexmath:[s_N(\mu)] and latexmath:[\Delta^s_N(\mu)] as a function of latexmath:[N] (take the average over the test sample latexmath:[\Xi^{test}_M])?

== Part 4 - Greedy Sampling Procedure

Given your (now tested and - hopefully - functioning) offline-online computational decomposition for the reduced basis approximation and associated a posteriori error estimation, we turn to the Greedy Sampling Procedure. In PS2 you were given the sample sets latexmath:[S_N] — now you can construct these yourself.

For this problem set, you should use the algorithm with latexmath:[\omega(\mu) = |||u_N(\mu)|||_\mu] (note that we can calculate latexmath:[|||u_N(\mu)|||_\mu] online-efficiently in latexmath:[O(N^2)] operations — as opposed to latexmath:[|||u(\mu)|||_\mu], which would require latexmath:[O(\mathcal{N})] operations). We set the desired error tolerance to latexmath:[\varepsilon_{tol,\min} = 10^{-6}] and choose latexmath:[S_1 = \mu_{min}] and latexmath:[X_1 = \mathrm{span}\{u(\mu_{min})\}].

Note that there are many steps implicit in the greedy loop. In particular, after the update latexmath:[S_N = S_{N-1} \cup \mu^{*}_N], we must calculate latexmath:[u(\mu^{*}_N)] to construct (using Gram-Schmidt) the new contribution to our orthonormal basis set, latexmath:[\zeta_N], to "`form`" latexmath:[X_N], and finally calculate all the necessary online quantities for both our reduced basis approximation and associated a posteriori error estimation. We note here a practical point for our hierarchical space: as we proceed from latexmath:[N] to latexmath:[N + 1], we should only compute the necessary incremental quantities — the incremental contributions to the various online inner-product arrays required for the reduced basis approximation and a posteriori error estimators.
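In outline, the loop can be sketched as follows. The callables stand for the pieces you have already built (truth solver, latexmath:[X]-orthonormalization, offline/online update of the RB and error-bound arrays); they are assumptions about your own code organization, not a prescribed interface:

[source,python]
----
import numpy as np

def greedy(Xi_train, mu_1, eps_tol, truth_solve, orthonormalize, build_online,
           N_max=50):
    """Greedy sampling sketch: grow S_N until max_mu Delta_N(mu)/omega(mu) <= eps_tol.
    truth_solve(mu) -> FE snapshot vector; orthonormalize(Z) -> X-orthonormal basis;
    build_online(Z) -> object with solve(mu) and rel_error_bound(mu, uN)."""
    S = [mu_1]
    Z = orthonormalize(truth_solve(mu_1).reshape(-1, 1))
    Delta_max_history = []
    while Z.shape[1] <= N_max:
        online = build_online(Z)         # in practice: update incrementally
        Delta = [online.rel_error_bound(mu, online.solve(mu)) for mu in Xi_train]
        k = int(np.argmax(Delta))
        Delta_max_history.append(Delta[k])
        if Delta[k] <= eps_tol:
            break
        S.append(Xi_train[k])            # mu*_{N+1}
        Z = orthonormalize(np.hstack([Z, truth_solve(Xi_train[k]).reshape(-1, 1)]))
    return S, Z, Delta_max_history
----

The recorded `Delta_max_history` gives exactly the latexmath:[\Delta_N^{max}] curve requested in Q6 and Q7 below.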
=== Q6.

We consider Case I. Apply the greedy algorithm with latexmath:[\Xi^{train} = G^{ln}_{[\mu_{min}, \mu_{max}; 100\]}], latexmath:[S_1 = \mu_{min} = 0.1], and latexmath:[\varepsilon_{tol,min} = 10^{-6}].

[loweralpha]
. What is the value of latexmath:[N_{max}] required to achieve the desired accuracy? In a sequence of latexmath:[N_{max}] figures (or subplots), plot the relative exact error latexmath:[\|u(\mu) - u_N(\mu)\|_X / |||u_N(\mu)|||_\mu] and the relative energy error bound, latexmath:[\Delta^{en}_N(\mu)/|||u_N(\mu)|||_\mu], over latexmath:[\mu \in \Xi^{train}]. In each plot, mark the parameter value which is picked by the greedy procedure in this step.
. Plot latexmath:[\Delta_N^{max}] as a function of latexmath:[N].
. Generate a non-hierarchical reduced basis approximation for latexmath:[S^{lin}_N = G^{lin}_{[\mu_{min}, \mu_{max}; N\]}] and latexmath:[S^{ln}_N = G^{ln}_{[\mu_{min}, \mu_{max}; N\]}] with latexmath:[2 \leq N \leq N_{max}]. We would like to compare the convergence of the reduced basis approximation generated using the greedy algorithm with that of the reduced basis approximations built from the linear and logarithmic samples. Plot the convergence of the maximum relative error in the energy norm latexmath:[\max_{\mu \in \Xi^{test}} |||u(\mu) - u_N(\mu)|||_\mu / |||u(\mu)|||_\mu] as a function of latexmath:[N] for all three cases in one plot (a plotting sketch follows this question). Here, latexmath:[\Xi^{test} = G^{lin}_{[\mu_{min}, \mu_{max}; 50\]} \cup G^{ln}_{[\mu_{min}, \mu_{max}; 50\]}] is a test sample of size latexmath:[n_{test} = 100].
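A minimal matplotlib pattern for this comparison; the three error arrays are placeholders to be filled with your measured maxima, and the synthetic decay curves below are for illustration only:

[source,python]
----
import numpy as np
import matplotlib.pyplot as plt

N = np.arange(2, 16)
# placeholders: replace with the measured maximum relative energy-norm errors
err_greedy = np.exp(-1.0 * N)   # hypothetical values, for illustration only
err_lin = np.exp(-0.5 * N)
err_log = np.exp(-0.8 * N)

plt.semilogy(N, err_greedy, 'o-', label='greedy')
plt.semilogy(N, err_lin, 's-', label='linear sample')
plt.semilogy(N, err_log, 'd-', label='log sample')
plt.xlabel('N')
plt.ylabel('max relative energy-norm error')
plt.legend()
plt.show()
----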
=== Q7.

We consider Case II. Apply the greedy algorithm with latexmath:[\Xi^{train} = \Xi^{log}_M] (the log tensor product grid with latexmath:[M = 25]), latexmath:[S_1 = \mu_{min} = (0.1, 0.01)], and latexmath:[\varepsilon_{tol,min} = 10^{-6}].

[loweralpha]
. What is the value of latexmath:[N_{max}] required to achieve the desired accuracy?
. Plot latexmath:[\Delta_N^{max}] as a function of latexmath:[N].
. Plot your greedy samples latexmath:[S_N]; present your results as dots in the latexmath:[(\ln \mu_1, \ln \mu_2)] plane (a plotting sketch is given at the end of this problem set). Can you attribute the observed distribution of parameter points to any mathematical or physical causes?
. For the reduced basis approximation you just generated, plot the convergence of the maximum relative error in the energy norm latexmath:[\max_{\mu \in \Xi^{test}} |||u(\mu) - u_N(\mu)|||_\mu / |||u(\mu)|||_\mu] and the maximum relative output error latexmath:[\max_{\mu \in \Xi^{test}} |{T_{root}}(\mu) - {T_{root}}_N(\mu)| / {T_{root}}(\mu)] as a function of latexmath:[N]. Use latexmath:[\Xi^{test} = \Xi^{test}_M] with latexmath:[M = 10] (the combined linear and logarithmic tensor product grid).
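Finally, a small sketch for the sample plot in Q7c; the list `S` would be the output of your greedy loop, and the two points shown are placeholders only:

[source,python]
----
import numpy as np
import matplotlib.pyplot as plt

S = [(0.1, 0.01), (10.0, 1.0)]        # placeholder greedy samples (k, Bi)
k, Bi = np.asarray(S).T
plt.plot(np.log(k), np.log(Bi), 'o')
plt.xlabel(r'$\ln \mu_1$')
plt.ylabel(r'$\ln \mu_2$')
plt.title('Greedy samples $S_N$')
plt.show()
----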