Formal Solutions for Polarized Radiative Transfer. III. Stiffness and Instability

Published 2018 April 18. © 2018. The American Astronomical Society. All rights reserved.
Citation: Gioele Janett and Alberto Paganini 2018 ApJ 857 91. DOI: 10.3847/1538-4357/aab3d9


Abstract

Efficient numerical approximation of the polarized radiative transfer equation is challenging because this system of ordinary differential equations exhibits stiff behavior, which potentially results in numerical instability. This negatively impacts the accuracy of formal solvers, and small step-sizes are often necessary to retrieve physical solutions. This work presents stability analyses of formal solvers for the radiative transfer equation of polarized light, identifies instability issues, and suggests practical remedies. In particular, the assumptions and the limitations of the stability analysis of Runge–Kutta methods play a crucial role. On this basis, a suitable and pragmatic formal solver is outlined and tested. An insightful comparison to the scalar radiative transfer equation is also presented.


1. Introduction

The transfer of partially polarized light is described by the following linear system of first-order coupled inhomogeneous ODEs

$\displaystyle \frac{{\rm{d}}}{{\rm{d}}s}{\boldsymbol{I}}(s)=-{\bf{K}}(s)\,{\boldsymbol{I}}(s)+{\boldsymbol{\epsilon }}(s),\qquad$ Equation (1)

where s is the spatial coordinate measured along the ray under consideration, ${\boldsymbol{I}}$ is the Stokes vector, ${\bf{K}}$ is the propagation matrix, and ${\boldsymbol{\epsilon }}$ is the emission vector. For notational simplicity, the frequency dependence of these quantities is not explicitly indicated.

It is common practice to solve Equation (1) by means of numerical methods, because its analytical solution is known only for a few simple atmospheric models (which determine ${\bf{K}}$ and ${\boldsymbol{\epsilon }}$). However, Equation (1) exhibits stiff behavior, i.e., formal solvers may face instability issues. For instance, Murphy (1990) observed instability problems using the DELO-parabolic method. Thereafter, Bellot Rubio et al. (1998) encountered instability when using the cubic Hermitian method for the spectral synthesis of strong lines. De la Cruz Rodríguez & Piskunov (2013) underlined the importance of preserving stability when DELO methods are extended to high-order schemes in terms of quadratic and cubic Bézier interpolations. Štěpán & Trujillo Bueno (2013) used Bézier interpolants to control abrupt changes in the atmospheric quantities, which potentially lead to instabilities. Steiner et al. (2016) proposed a different approach to deal with strong gradients, using piecewise continuous reconstructions and slope limiters. Finally, Janett et al. (2017a, 2017b) provided a characterization of formal solvers in terms of their stability regions, paying particular attention to the eigenvalues of the propagation matrix.

The concept and the relevance of stability are ubiquitous in numerical analysis, and numerical methods for ODEs are no exception (e.g., Dahlquist 1963; Deuflhard & Bornemann 2002). In particular, stability is a necessary condition for convergence. Indeed, to ensure that a numerical solution of an ODE converges, it is first necessary to show that the numerical scheme employed is consistent, that is, that the local error introduced in one step decays superlinearly with respect to the step-size ${\rm{\Delta }}t$. Unfortunately, this consistency condition is not sufficient to ensure convergence, because the cumulative sum of local errors may grow exponentially. However, this exponential growth cannot happen if the numerical method is stable. In light of this, Hackbusch (2014) concludes that "whether consistency implies convergence depends on stability." Stability analysis is employed to impose additional requirements on numerical methods (e.g., a limited step-size). However, these stability requirements are problem-dependent and often difficult to determine.
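
These notions are easy to illustrate on the scalar test equation $x^{\prime} (t)=\lambda x(t)$: a consistent explicit scheme can still diverge when the step-size violates its stability requirement. A minimal sketch, assuming the explicit Euler method (whose stability requirement for real negative λ is ${\rm{\Delta }}t\lt 2/| \lambda | $; the numerical values are illustrative only):

```python
# Explicit Euler applied to x'(t) = lam * x(t), x(0) = 1: one step multiplies
# the iterate by (1 + lam * dt), so the iterates decay only if |1 + lam*dt| < 1.

def euler_final_value(lam, dt, n_steps, x0=1.0):
    """Return x_n after n_steps of explicit Euler for x' = lam * x."""
    x = x0
    for _ in range(n_steps):
        x = x + dt * lam * x
    return x

lam = -100.0  # stiff decay rate (illustrative)

# dt = 0.005: |1 + lam*dt| = 0.5 < 1, the numerical solution decays like the exact one.
stable = euler_final_value(lam, dt=0.005, n_steps=200)
assert abs(stable) < 1e-10

# dt = 0.05: |1 + lam*dt| = 4 > 1, the iterates blow up although the method is consistent.
unstable = euler_final_value(lam, dt=0.05, n_steps=200)
assert abs(unstable) > 1e10
```

The method is consistent in both runs; only the stability requirement on the step-size separates the physical from the unphysical outcome.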

This paper aims to provide a deeper analysis of stability conditions for the numerical integration of Equation (1). Section 2 focuses on the propagation matrix and on its eigenvalues. Section 3 presents the stability analysis of Runge–Kutta methods. Particular attention is paid to the assumptions and the limitations of this analysis, emphasizing their relevance in the formal solution for polarized light. Section 4 analyzes the effect of the conversion to optical depth on numerical stability, while Section 5 discusses the numerical approximation of this conversion. Section 6 describes the structure of a pragmatic numerical method for the numerical integration of Equation (1). Section 7 presents complementary considerations on this topic. Finally, Section 8 provides remarks and conclusions.

2. The Propagation Matrix

The propagation matrix ${\bf{K}}$ that appears in Equation (1) can be written in the form (Landi Degl'Innocenti & Landolfi 2004)

${\bf{K}}=\left(\begin{array}{cccc}{\eta }_{I} & {\eta }_{Q} & {\eta }_{U} & {\eta }_{V}\\ {\eta }_{Q} & {\eta }_{I} & {\rho }_{V} & -{\rho }_{U}\\ {\eta }_{U} & -{\rho }_{V} & {\eta }_{I} & {\rho }_{Q}\\ {\eta }_{V} & {\rho }_{U} & -{\rho }_{Q} & {\eta }_{I}\end{array}\right),\qquad$ Equation (2)

where the seven independent coefficients are, in general, functions of the frequency, propagation direction, and of a series of physical parameters describing the atmosphere. The matrix ${\bf{K}}$ can be decomposed into three different contributions, namely,

${\bf{K}}={\eta }_{I}\left(\begin{array}{cccc}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{array}\right)+\left(\begin{array}{cccc}0 & {\eta }_{Q} & {\eta }_{U} & {\eta }_{V}\\ {\eta }_{Q} & 0 & 0 & 0\\ {\eta }_{U} & 0 & 0 & 0\\ {\eta }_{V} & 0 & 0 & 0\end{array}\right)+\left(\begin{array}{cccc}0 & 0 & 0 & 0\\ 0 & 0 & {\rho }_{V} & -{\rho }_{U}\\ 0 & -{\rho }_{V} & 0 & {\rho }_{Q}\\ 0 & {\rho }_{U} & -{\rho }_{Q} & 0\end{array}\right).$
The first matrix is called the absorption matrix, it is diagonal, and it is responsible for the usual exponential decay of the whole Stokes vector. The second matrix is called the dichroism matrix, it is symmetric, and it is responsible for dichroism effects, i.e., the property of absorbing light to different extents depending on the polarization states. The third matrix is called the dispersion matrix, it is skew-symmetric, and it describes the coupling of the Stokes components due to anomalous dispersion effects.
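
As an illustration, the decomposition can be verified numerically. The sketch below assembles ${\bf{K}}$ from its seven coefficients (with arbitrary illustrative values, assuming the standard ordering of the Stokes parameters of Landi Degl'Innocenti & Landolfi 2004) and checks the symmetry properties of the three contributions:

```python
import numpy as np

def propagation_matrix(eta_I, eta, rho):
    """Assemble the 4x4 propagation matrix from eta_I,
    eta = (eta_Q, eta_U, eta_V), and rho = (rho_Q, rho_U, rho_V)."""
    eQ, eU, eV = eta
    rQ, rU, rV = rho
    return np.array([[eta_I,  eQ,    eU,    eV],
                     [eQ,     eta_I, rV,   -rU],
                     [eU,    -rV,    eta_I, rQ],
                     [eV,     rU,   -rQ,    eta_I]])

eta_I = 2.0                 # illustrative values only
eta = (0.3, -0.1, 0.5)
rho = (0.2, 0.4, -0.3)

K = propagation_matrix(eta_I, eta, rho)
absorption = eta_I * np.eye(4)                              # diagonal
dichroism = propagation_matrix(0.0, eta, (0.0, 0.0, 0.0))   # symmetric
dispersion = propagation_matrix(0.0, (0.0, 0.0, 0.0), rho)  # skew-symmetric

assert np.allclose(K, absorption + dichroism + dispersion)
assert np.allclose(dichroism, dichroism.T)
assert np.allclose(dispersion, -dispersion.T)
```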

The propagation matrix coefficients consist, in general, of two different kinds of contributions: continuum processes (due to bound–free and free–free transitions) and spectral lines (due to bound–bound transitions). In the solar context, continuum processes do not introduce dichroism or anomalous dispersion effects.

This section describes the propagation matrix coefficients for an isolated spectral line originating from the atomic transition between two levels with total angular momentum Ju (upper level) and ${J}_{{\ell }}$ (lower level), respectively. Each J-level is composed of $2J+1$ magnetic sublevels, which are degenerate in the absence of magnetic fields and are characterized by the magnetic quantum number M ($M=-J,-J+1,\,\ldots ,\,J$). The magnetic field removes the degeneracy among the various sublevels (Zeeman effect), inducing energy splitting, that is,

${\nu }_{JM}={\nu }_{J}+{\nu }_{{\rm{L}}}\,g\,M,

where ${\nu }_{{\rm{L}}}$ is the Larmor frequency and g is the Landé factor. The spectral line takes into account the contribution of all the allowed transitions connecting an upper sublevel $({J}_{u}{M}_{u})$ and a lower sublevel $({J}_{{\ell }}{M}_{{\ell }})$. Atomic polarization is neglected.

Coming back to the matrix ${\bf{K}}$, the total absorption coefficient ${\eta }_{I}$ can be written as

${\eta }_{I}={k}_{c}+{k}_{{\rm{L}}}\,{\phi }_{I},

where kc is the local continuum absorption coefficient, kL is the (frequency integrated) line absorption coefficient, and ${\phi }_{I}$ is the intensity absorption profile. Note that ${\eta }_{I}$ can always be assumed to be positive.4 The dichroism coefficients and the anomalous dispersion coefficients read

${\eta }_{i}={k}_{{\rm{L}}}\,{\phi }_{i}\qquad \mathrm{and}\qquad {\rho }_{i}={k}_{{\rm{L}}}\,{\psi }_{i},

respectively, where $i=Q,U,V$. When the orientation of the magnetic field ${\boldsymbol{B}}$ with respect to the line of sight is described with the inclination angle θ and the azimuth angle χ (as in Figure 1), one has

Equation (3)

Figure 1.

Figure 1. Angles θ and χ specify the direction of the magnetic field ${\boldsymbol{B}}$ with respect to the coordinate system of the line of sight s. The Stokes component Q is defined as the intensity difference of the linearly polarized light along the two orthogonal axes ${{\boldsymbol{e}}}_{1}$ and ${{\boldsymbol{e}}}_{2}$ in the plane perpendicular to the light beam.


In the observer's frame, the explicit expressions of the absorption profiles ${\phi }_{q}$ and the dispersion profiles ${\psi }_{q}$ (q = −1, 0, 1) read, respectively,

Equation (4)

Equation (5)

where ${S}_{q}^{{J}_{{\ell }}{J}_{u}}({M}_{{\ell }},{M}_{u})$ is the relative strength of the Zeeman component q connecting the upper sublevel $({J}_{u}{M}_{u})$ and the lower sublevel $({J}_{{\ell }}{M}_{{\ell }})$. Using Wigner 3-j symbols, its explicit expression is given by

The functions H and L appearing in Formulas (4) and (5) correspond to the Voigt and Faraday–Voigt profiles defined by

$H(v,a)=\frac{a}{\pi }{\int }_{-\infty }^{+\infty }\frac{{e}^{-{y}^{2}}}{{(v-y)}^{2}+{a}^{2}}\,{\rm{d}}y\qquad \mathrm{and}\qquad L(v,a)=\frac{1}{\pi }{\int }_{-\infty }^{+\infty }\frac{(v-y)\,{e}^{-{y}^{2}}}{{(v-y)}^{2}+{a}^{2}}\,{\rm{d}}y,

respectively. Denoting with gu and ${g}_{{\ell }}$ the Landé factors associated to the upper and lower levels, respectively, the quantity ω is defined as

where the reduced frequency v is defined by

$v=\frac{{\nu }_{0}-\nu }{{\rm{\Delta }}{\nu }_{{\rm{D}}}},

with ν and ${\nu }_{0}$ being the frequency under consideration and line-center frequency, respectively. The Doppler width of the line ${\rm{\Delta }}{\nu }_{{\rm{D}}}$ is given by

${\rm{\Delta }}{\nu }_{{\rm{D}}}=\frac{{\nu }_{0}}{c}\,{w}_{{\rm{T}}},

where ${w}_{{\rm{T}}}$ denotes the random velocity of the atoms due to thermal and microturbulent motions, and c is the speed of light. The quantity

${v}_{{\rm{A}}}=\frac{{\nu }_{0}}{c}\,\frac{{w}_{{\rm{A}}}}{{\rm{\Delta }}{\nu }_{{\rm{D}}}}

is the normalized frequency shift due to a bulk motion of velocity wA in the medium. The normalized Zeeman splitting vB is given by

${v}_{{\rm{B}}}=\frac{{\nu }_{{\rm{L}}}}{{\rm{\Delta }}{\nu }_{{\rm{D}}}}.

The damping constant a is given by

$a=\frac{{\rm{\Gamma }}}{4\pi \,{\rm{\Delta }}{\nu }_{{\rm{D}}}},

where Γ takes into account the natural width of the line ${{\rm{\Gamma }}}_{n}$ (due to the finite life-time of the upper and lower level) and the collisional width ${{\rm{\Gamma }}}_{c}$ (due to collisions of the atom under consideration with other atoms and ions in the plasma) and it reads

${\rm{\Gamma }}={{\rm{\Gamma }}}_{n}+{{\rm{\Gamma }}}_{c}.

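
In practice, the profiles H and L are conveniently evaluated through the Faddeeva function w(z). The sketch below assumes the convention $w(v+{\rm{i}}a)=H(v,a)+2{\rm{i}}\,L(v,a)$ (the factor of 2 is a common convention and should be checked against the definitions actually adopted) and uses scipy.special.wofz:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z) = exp(-z^2) * erfc(-iz)

def voigt_profiles(v, a):
    """Return (H, L) evaluated via the Faddeeva function, assuming the
    convention w(v + i*a) = H(v, a) + 2i * L(v, a)."""
    z = wofz(v + 1j * a)
    return z.real, 0.5 * z.imag

v = np.linspace(-5.0, 5.0, 201)
H, L = voigt_profiles(v, 0.05)

assert abs(voigt_profiles(0.0, 0.0)[0] - 1.0) < 1e-12           # H(0, 0) = 1
assert abs(voigt_profiles(1.0, 0.0)[0] - np.exp(-1.0)) < 1e-10  # H(v, 0) = exp(-v^2)
assert np.allclose(H, H[::-1])    # H is even in v
assert np.allclose(L, -L[::-1])   # L is odd in v
```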
2.1. Eigenvalues of the Propagation Matrix

Let

${\boldsymbol{\eta }}=({\eta }_{Q},{\eta }_{U},{\eta }_{V})\qquad \mathrm{and}\qquad {\boldsymbol{\rho }}=({\rho }_{Q},{\rho }_{U},{\rho }_{V})

denote the dichroism and the anomalous dispersion vectors, respectively. The four eigenvalues of the propagation matrix ${\bf{K}}$ read (Landi Degl'Innocenti & Landolfi 2004)

${\lambda }_{1,2}={\eta }_{I}\pm {{\rm{\Lambda }}}_{+}\qquad \mathrm{and}\qquad {\lambda }_{3,4}={\eta }_{I}\pm i\,{{\rm{\Lambda }}}_{-},\qquad$ Equation (6)

where

${{\rm{\Lambda }}}_{\pm }=\sqrt{\sqrt{{\left(\frac{{\eta }^{2}-{\rho }^{2}}{2}\right)}^{2}+{({\boldsymbol{\eta }}\cdot {\boldsymbol{\rho }})}^{2}}\pm \frac{{\eta }^{2}-{\rho }^{2}}{2}}$

and

$\eta =| {\boldsymbol{\eta }}| ,\qquad \rho =| {\boldsymbol{\rho }}| .$

The module of the dichroism vector satisfies

$\eta \leqslant {\eta }_{I},\qquad$ Equation (7)

but no similar relation holds for ρ. Table 1 facilitates the comprehension of these expressions by listing the factors ${{\rm{\Lambda }}}_{+}$ and ${{\rm{\Lambda }}}_{-}$ for certain special cases. Note that ${{\rm{\Lambda }}}_{+}$ and ${{\rm{\Lambda }}}_{-}$ do not depend on the azimuth angle χ of the magnetic field vector and always assume real nonnegative values bounded by

$0\leqslant {{\rm{\Lambda }}}_{+}\leqslant \eta \qquad \mathrm{and}\qquad 0\leqslant {{\rm{\Lambda }}}_{-}\leqslant \rho .\qquad$ Equation (8)

The combination of conditions (7) and (8) guarantees that the real part of the eigenvalues in Equation (6) is always positive. Therefore, the spectral radius $r({\bf{K}})$ of the propagation matrix ${\bf{K}}$ satisfies

$r({\bf{K}})\leqslant {\eta }_{I}+\max ({{\rm{\Lambda }}}_{+},{{\rm{\Lambda }}}_{-}).\qquad$ Equation (9)

Finally, knowing whether the propagation matrix ${\bf{K}}$ is diagonalizable is relevant information, because stability analysis is notably simpler in this case. If η = 0 or ρ = 0, the propagation matrix is normal (see Appendix A) and, consequently, diagonalizable in ${\mathbb{R}}$. If ${\boldsymbol{\eta }}\cdot {\boldsymbol{\rho }}\ne 0$, then both ${{\rm{\Lambda }}}_{+}\gt 0$ and ${{\rm{\Lambda }}}_{-}\gt 0$. This implies that ${\bf{K}}$ has four distinct eigenvalues and can thus be diagonalized in ${\mathbb{C}}$. On the other hand, if ${\boldsymbol{\eta }}\,\perp \,{\boldsymbol{\rho }}$ and neither η = 0 nor ρ = 0, ${\bf{K}}$ may not be diagonalizable because its eigenvalues are not distinct (see Table 1).

Table 1.  Factors ${{\rm{\Lambda }}}_{+}$ and ${{\rm{\Lambda }}}_{-}$ for Different Values of η and ρ

Special Cases ${{\rm{\Lambda }}}_{+}$ ${{\rm{\Lambda }}}_{-}$
$\eta =\rho =0$ 0 0
$\rho =0$ η 0
$\eta =0$ 0 ρ
${\boldsymbol{\eta }}\,\parallel \,{\boldsymbol{\rho }}$ η ρ
${\boldsymbol{\eta }}\,\perp \,{\boldsymbol{\rho }}$ and $\eta =\rho $ 0 0
${\boldsymbol{\eta }}\,\perp \,{\boldsymbol{\rho }}$ and $\eta \gt \rho $ $\sqrt{{\eta }^{2}-{\rho }^{2}}$ 0
${\boldsymbol{\eta }}\,\perp \,{\boldsymbol{\rho }}$ and $\rho \gt \eta $ 0 $\sqrt{{\rho }^{2}-{\eta }^{2}}$

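
The closed-form eigenvalues can be cross-checked against a direct numerical eigendecomposition. The sketch below assumes the factors ${{\rm{\Lambda }}}_{\pm }$ of Landi Degl'Innocenti & Landolfi (2004) and illustrative coefficient values, and reproduces the ${\boldsymbol{\eta }}\,\parallel \,{\boldsymbol{\rho }}$ row of Table 1:

```python
import numpy as np

def Lambda_pm(eta_vec, rho_vec):
    """Factors Lambda_+ and Lambda_- (Landi Degl'Innocenti & Landolfi 2004)."""
    a = 0.5 * (np.dot(eta_vec, eta_vec) - np.dot(rho_vec, rho_vec))
    b = np.dot(eta_vec, rho_vec)
    r = np.hypot(a, b)
    return np.sqrt(r + a), np.sqrt(r - a)

def propagation_matrix(eta_I, e, r):
    return np.array([[eta_I,  e[0],  e[1],  e[2]],
                     [e[0],  eta_I,  r[2], -r[1]],
                     [e[1], -r[2],  eta_I,  r[0]],
                     [e[2],  r[1], -r[0],  eta_I]])

eta_I = 3.0
eta_vec = np.array([0.4, 0.8, -0.2])   # illustrative values
rho_vec = 1.5 * eta_vec                # parallel case of Table 1

Lp, Lm = Lambda_pm(eta_vec, rho_vec)
assert np.isclose(Lp, np.linalg.norm(eta_vec))  # Lambda_+ = eta when eta || rho
assert np.isclose(Lm, np.linalg.norm(rho_vec))  # Lambda_- = rho when eta || rho

# Eigenvalues of K: eta_I +/- Lambda_+ (real) and eta_I +/- i*Lambda_- (complex pair).
K = propagation_matrix(eta_I, eta_vec, rho_vec)
key = lambda z: (round(z.real, 8), z.imag)
expected = sorted([eta_I + Lp, eta_I - Lp, complex(eta_I, Lm), complex(eta_I, -Lm)], key=key)
computed = sorted(np.linalg.eigvals(K), key=key)
assert np.allclose(expected, computed)
```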

3. Stability Analysis

Performing stability analysis of numerical methods for ODEs is often quite involved. A gentle introduction to stability analysis of numerical methods for ODEs can be found in Higham & Trefethen (1993).

This section is dedicated to the study of the stability properties of Runge–Kutta methods applied to Equation (1). In this equation, the Stokes vector ${\boldsymbol{I}}$ is the only quantity that can propagate or amplify errors introduced in previous steps. Consequently, the emission term ${\boldsymbol{\epsilon }}$ can be omitted in the stability analysis, because it does not explicitly depend on ${\boldsymbol{I}}$.

Moreover, Equation (1) is linear in the variable ${\boldsymbol{I}}$ and the propagation matrix ${\bf{K}}$ depends on the space variable s. In this case, it is common to analyze the dynamics of the system assuming that ${\bf{K}}$ is constant around each position s0 of interest. Denoting by ${\bf{A}}=-{\bf{K}}({s}_{0})$ the propagation matrix with "frozen" coefficients, one easily performs the stability analysis on the simpler initial value problem (IVP)

$\frac{{\rm{d}}}{{\rm{d}}t}{\boldsymbol{y}}(t)={\bf{A}}\,{\boldsymbol{y}}(t),\qquad {\boldsymbol{y}}({t}_{0})={{\boldsymbol{y}}}_{0}.\qquad$ Equation (10)

The remainder of this section is structured as follows: Section 3.1 presents the stability analysis further assuming that the "frozen" matrix ${\bf{A}}$ is diagonalizable. This particular case is notably simpler, because the linear system of ODEs is reduced to a set of scalar problems via diagonalization. Section 3.2 analyzes the case of a more general (non-diagonalizable) "frozen" matrix. Finally, Section 3.3 addresses the limits due to the "frozen" matrix assumption, by investigating how spatial variations in matrix ${\bf{A}}$ affect the stability of numerical methods.

3.1. Reduction to the Scalar Case

A matrix ${\bf{A}}$ is called diagonalizable if there is an invertible matrix ${\bf{U}}$ such that

${\bf{U}}{\bf{A}}{{\bf{U}}}^{-1}={\bf{D}},

where ${\bf{D}}$ is a diagonal matrix whose entries are the eigenvalues of ${\bf{A}}$. From Equation (10), it is easy to see that ${\boldsymbol{x}}={\bf{U}}{\boldsymbol{y}}$ satisfies

$\frac{{\rm{d}}}{{\rm{d}}t}{\boldsymbol{x}}(t)={\bf{D}}\,{\boldsymbol{x}}(t),\qquad {\boldsymbol{x}}({t}_{0})={\bf{U}}{{\boldsymbol{y}}}_{0}.\qquad$ Equation (11)

Runge–Kutta methods are affine covariant. This means that the very same approximation of ${\boldsymbol{y}}$ that is obtained by applying a Runge–Kutta method to the IVP (10) can be computed by applying the Runge–Kutta method to the IVP (11) first and multiplying the result with the matrix ${{\bf{U}}}^{-1}$ at the end. For this reason, the IVP (10) can be replaced by the IVP (11), and since the latter is a system of decoupled differential equations, it is sufficient to consider the scalar case

$\frac{{\rm{d}}}{{\rm{d}}t}x(t)=\lambda \,x(t),\qquad x({t}_{0})={x}_{0},\qquad$ Equation (12)

where λ represents any of the eigenvalues of ${\bf{A}}$.

The solution of the IVP (12) is given by

$x(t)={x}_{0}\,{e}^{\lambda (t-{t}_{0})}.

When $\mathrm{Re}(\lambda )\lt 0$, x(t) converges to zero as $t\to \infty $. The imaginary part of λ only introduces an oscillatory behavior of the solution.

Let $\{{t}_{k}\}$ be a discrete grid, and let ${x}_{k}\approx x({t}_{k})$ be a numerical solution computed with a Runge–Kutta method. Then, xk and ${x}_{k+1}$ satisfy

${x}_{k+1}=\phi (\lambda {\rm{\Delta }}t)\,{x}_{k},\qquad$ Equation (13)

where ${\rm{\Delta }}t={t}_{k+1}-{t}_{k}$, and ϕ is the stability function of the numerical method (Frank 2012).

A numerical solution of an IVP is said to be asymptotically stable if the sequence $\{{x}_{k}\}$ converges to zero for $k\to \infty $. Intuitively, this guarantees that any perturbation in the solution is attenuated with the recursive numerical integration. In light of Equation (13), asymptotic stability is equivalent to

$| \phi (\lambda {\rm{\Delta }}t)| \lt 1.\qquad$ Equation (14)

The stability of a numerical solution is therefore related to both the step-size ${\rm{\Delta }}t$ and the eigenvalue λ. More precisely, it depends on the product $\lambda {\rm{\Delta }}t$. The stability region S of a Runge–Kutta method is defined as the set of complex values $z=\lambda {\rm{\Delta }}t$ for which Equation (14) is satisfied, that is,

$S=\{z\in {\mathbb{C}}\,:\,| \phi (z)| \lt 1\}.

To give an example, the stability function of the explicit Runge–Kutta 4 method is given by

$\phi (z)=1+z+\frac{{z}^{2}}{2}+\frac{{z}^{3}}{6}+\frac{{z}^{4}}{24},

and it is displayed in yellow in Figure 2.
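
The stability function can also be read off directly from a single Runge–Kutta step applied to $x^{\prime} =\lambda x$ with ${x}_{0}=1$: the step returns exactly $\phi (\lambda {\rm{\Delta }}t)$. A sketch for the classical explicit Runge–Kutta 4 method (the values of λ and ${\rm{\Delta }}t$ are illustrative):

```python
def rk4_step(f, t, x, dt):
    """One classical explicit Runge-Kutta 4 step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def phi_rk4(z):
    """Stability function of explicit RK4: 1 + z + z^2/2 + z^3/6 + z^4/24."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# One RK4 step on x' = lam*x with x0 = 1 returns exactly phi(lam*dt).
lam, dt = -2.5, 1.0
assert abs(rk4_step(lambda t, x: lam * x, 0.0, 1.0, dt) - phi_rk4(lam * dt)) < 1e-13

# z = lam*dt belongs to the stability region S iff |phi(z)| < 1.
assert abs(phi_rk4(-2.5)) < 1  # inside S on the real axis
assert abs(phi_rk4(-3.0)) > 1  # outside S: the iterates grow although Re(lam) < 0
```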

Figure 2.

Figure 2. Pseudospectra for the matrices (a) ${{\bf{K}}}_{1}$, (b) ${{\bf{K}}}_{2}$, and (c) ${{\bf{K}}}_{3}$ given in Equation (16). The black circles (which are almost invisible in (c)) are the boundaries of the epsilon-pseudospectrum ${{\rm{\Lambda }}}_{\epsilon }$ for $\epsilon ={10}^{-1},{10}^{-2},{10}^{-3},\,...$, where the outermost curve corresponds to $\epsilon ={10}^{-1}$. The red dots represent the eigenvalues of the matrix, which are four times degenerate in (a) and (b) and twice degenerate in (c). In yellow, the stability region of the explicit Runge–Kutta 4 method.


If $\mathrm{Re}(\lambda )\lt 0$, asymptotic stability is guaranteed when $\lambda {\rm{\Delta }}t$ lies inside the stability region of the numerical method. For the particular case of Equation (1), Section 2.1 shows that the real part of the eigenvalues of the propagation operator $-{\bf{K}}$ is always nonpositive. Since its eigenvalues are known explicitly, Equation (9) can be used to derive a sharp upper bound on the step-size ${\rm{\Delta }}s$ that ensures asymptotic stability of the numerical solution.

If the Runge–Kutta method is consistent, complex numbers z with negative real part and sufficiently small absolute value lie in the stability region S. Therefore, for consistent methods, instabilities can be prevented by choosing a sufficiently small step-size ${\rm{\Delta }}t$. However, the downside of small step-sizes is that, for a fixed integration interval, the number of integration steps increases.

To avoid the need for very small step-sizes, the stability region of the numerical method employed should be as large as possible. In particular, to ensure that the numerical solution remains asymptotically stable independently of the choice of ${\rm{\Delta }}t$, the stability region should comprise the complex left half-plane ${{\mathbb{C}}}^{-}$. Runge–Kutta methods that satisfy this condition are called A-stable, and one of the simplest A-stable Runge–Kutta methods is the (implicit) trapezoidal method.

A-stability guarantees that the numerical solution is stable if $\mathrm{Re}(\lambda )\lt 0$. However, if $\mathrm{Re}(\lambda {\rm{\Delta }}t)$ is a large negative value, A-stability may not be sufficient to replicate the exponential decay of the sequence $\{x(k{\rm{\Delta }}t)\}$, because A-stability does not guarantee that

$\mathop{\mathrm{lim}}\limits_{\mathrm{Re}(z)\to -\infty }\phi (z)=0.\qquad$ Equation (15)

For instance, the stability function of the trapezoidal method ${\phi }_{{\rm{T}}}$ satisfies ${\mathrm{lim}}_{\mathrm{Re}(z)\to -\infty }{\phi }_{{\rm{T}}}(z)=-1$. In this case, the numerical solution $\{{x}_{k}\}$ will still converge to zero, but the decay becomes arbitrarily slow as $\mathrm{Re}(\lambda {\rm{\Delta }}t)\to -\infty $. A-stable Runge–Kutta methods that further satisfy condition (15) are called L-stable, and they correctly replicate exponential attenuations even when the step-size is large. The simplest L-stable Runge–Kutta method is the implicit (or backward) Euler scheme.

Runge–Kutta methods are also classified into explicit and implicit methods. A Runge–Kutta method that does not require solving a system of equations to update the solution is called explicit; otherwise, it is called implicit. Clearly, explicit methods are computationally less expensive. However, explicit Runge–Kutta methods cannot be A-stable, and therefore cannot be L-stable either. Diagonally implicit Runge–Kutta methods (e.g., Kennedy & Carpenter 2016) offer a good compromise between stability, order of accuracy, and computational complexity. For instance, second-order L-stable diagonally implicit Runge–Kutta methods are available. When applied to linear ODEs like Equation (1), their computational cost is particularly competitive because it grows only linearly with respect to the number of Runge–Kutta stages. However, the same computational cost may grow as ${d}^{3}$, where d is the dimension of the ODE. Note that d = 4 in Equation (1).
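
The practical difference between A- and L-stability is already visible in one step of the test equation. A minimal sketch comparing the stability functions of the trapezoidal and implicit Euler methods for a strongly attenuating step (the value of z is illustrative):

```python
def phi_trapezoidal(z):
    """Stability function of the A-stable (implicit) trapezoidal method."""
    return (1 + z / 2) / (1 - z / 2)

def phi_backward_euler(z):
    """Stability function of the L-stable implicit (backward) Euler method."""
    return 1 / (1 - z)

z = -1000.0  # strongly attenuating step: Re(lam * dt) << 0 (illustrative)

# Both methods are stable: the amplification factor has modulus below one ...
assert abs(phi_trapezoidal(z)) < 1 and abs(phi_backward_euler(z)) < 1

# ... but only backward Euler reproduces the strong exponential attenuation:
assert phi_trapezoidal(z) < -0.99          # close to -1: slowly decaying oscillation
assert abs(phi_backward_euler(z)) < 1e-2   # the perturbation is damped in one step
```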

3.2. Analysis of Pseudospectra

The stability analysis presented in the previous section hinges on assuming that the matrix ${\bf{A}}$ in Equation (10) is diagonalizable and only considers the scalar IVP (12) with λ representing any of the eigenvalues of ${\bf{A}}$. However, Section 2.1 shows that, in general, the propagation matrix ${\bf{K}}$ may not be diagonalizable.

Instead of using eigenvalues, Higham & Trefethen (1993) suggest performing stability analyses focusing on pseudospectra (more details on pseudospectra are given in Appendix B). They point out that the analysis based on eigenvalues can lead to too liberal conditions for the absence of stiffness, because eigenvalues describe the asymptotic behavior only, whereas instability and stiffness are transient phenomena that depend on how the effects compound over a few integration steps. For instance, if the spectrum $\sigma ({\bf{A}})\subset {{\mathbb{C}}}^{-}$, then the solution ${\boldsymbol{y}}(t)$ to the IVP (10) satisfies ${\mathrm{lim}}_{t\to \infty }\parallel {\boldsymbol{y}}(t)\parallel =0$. However, the decay of $\parallel {\boldsymbol{y}}(t)\parallel $ may not be monotone. Higham & Trefethen (1993) conclude that numerical instability around t in Equation (10) occurs when the pseudospectra of the frozen coefficient matrix ${\rm{\Delta }}t{\bf{A}}$ fail to fit within the stability region S of the numerical method. This alternative stability analysis is particularly insightful when the pseudospectra of ${\bf{A}}$ are highly dispersed. Unfortunately, computing pseudospectra is a computationally demanding task that is not affordable in real-time computations. Nevertheless, one can rely on two generic observations. First, if ${\bf{A}}$ is normal, its pseudospectrum is tightly clustered around the spectrum and the difference between transient and asymptotic stability behaviors is irrelevant. Second, pseudospectra tend to be particularly dispersed if ${\bf{A}}$ is both non-normal and "close" to a non-diagonalizable matrix.

Appendix A shows by direct calculation that the propagation matrix ${\bf{K}}$ is normal if and only if

${\boldsymbol{\eta }}\times {\boldsymbol{\rho }}={\bf{0}},$ i.e., if and only if ${\boldsymbol{\eta }}$ and ${\boldsymbol{\rho }}$ are parallel or at least one of them vanishes.

In particular, if ${\boldsymbol{\eta }}\,\perp \,{\boldsymbol{\rho }}$ and neither η = 0 nor ρ = 0, ${\bf{K}}$ could be both non-normal and non-diagonalizable. Moreover, numerical tests show that an additional requirement to produce largely dispersed pseudospectra is given by $\eta \approx \rho \gg {\eta }_{I}$. This empirical condition ensures that the four eigenvalues are degenerate (see Table 1).

However, the entries of the propagation matrix ${\bf{K}}$ satisfy the dichroism condition (7). This condition guarantees that the "empirical condition" above is never satisfied and it prevents the pseudospectra from being dispersed. For this reason, the diagonalization step performed in Section 3.1 does not pose relevant problems in the stability analysis.

Some numerical evidence is presented in Figure 2, which displays the pseudospectra (for ${\rm{\Delta }}s=1$) of the three matrices

Equation (16)

The matrices ${{\bf{K}}}_{1}$ and ${{\bf{K}}}_{2}$ satisfy ${\boldsymbol{\eta }}\,\perp \,{\boldsymbol{\rho }}$ and $\eta =\rho \ne 0$, showing dispersed pseudospectra in Figures 2(a) and (b), respectively. Due to its (unphysically) large η and ρ, ${{\bf{K}}}_{1}$ presents a remarkably scattered pseudospectrum, while ${{\bf{K}}}_{2}$, which satisfies condition (7), shows a tighter pseudospectrum. Figure 2(c) shows that the pseudospectrum of the matrix ${{\bf{K}}}_{3}$, which does not satisfy the condition $\eta \approx \rho $, is not dispersed. Experiments performed for different values of ${\rm{\Delta }}s$ lead to similar results.
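
For small matrices, the ε-pseudospectrum can be approximated pointwise as the region of the complex plane where the smallest singular value of $z{\mathbb{1}}-{\bf{A}}$ drops below ε (see Appendix B). A sketch, using an illustrative non-normal matrix rather than the matrices of Equation (16):

```python
import numpy as np

def sigma_min(A, z):
    """Smallest singular value of z*I - A; the point z belongs to the
    eps-pseudospectrum of A precisely when sigma_min(A, z) <= eps."""
    return np.linalg.svd(z * np.eye(A.shape[0]) - A, compute_uv=False)[-1]

# Illustrative non-normal matrix: a Jordan-like block with the degenerate eigenvalue -1.
A = np.array([[-1.0, 100.0],
              [ 0.0,  -1.0]])

# At an eigenvalue the resolvent blows up and sigma_min vanishes.
assert sigma_min(A, -1.0) < 1e-12

# Half a unit away from the spectrum, sigma_min is still tiny: the point lies in the
# 1e-2 pseudospectrum, i.e., the pseudospectrum is dispersed around the eigenvalue.
assert sigma_min(A, -1.0 + 0.5j) < 1e-2
```

Evaluating sigma_min on a grid of z values and contouring the result reproduces pseudospectral plots like those of Figure 2.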

3.3. Variation of Eigenvalues along the Integration Path

The stability analyses presented in Sections 3.1 and 3.2 neglect the dependence of the propagation matrix ${\bf{K}}$ on the spatial variable s. The following example shows that, in principle, stability issues may arise even when numerical integrations are based on A-stable methods. Janett et al. (2017a) give graphical illustrations of this phenomenon in terms of modifications of the stability region of the trapezoidal method.

Consider the scalar test case

$\frac{{\rm{d}}}{{\rm{d}}t}y(t)=\lambda (t)\,y(t),\qquad y({t}_{0})={y}_{0},\qquad$ Equation (17)

where $\lambda (t)$ is a real function. The numerical approximation y1 of $y({t}_{0}+{\rm{\Delta }}t)$ computed with the trapezoidal method, which is A-stable, reads

${y}_{1}=\frac{1+\frac{{\rm{\Delta }}t}{2}\lambda ({t}_{0})}{1-\frac{{\rm{\Delta }}t}{2}\lambda ({t}_{0}+{\rm{\Delta }}t)}\,{y}_{0}.\qquad$ Equation (18)

If $\lambda (t)$ is negative for $t\in [{t}_{0},{t}_{0}+{\rm{\Delta }}t]$, then $| y({t}_{0}+{\rm{\Delta }}t)| \,\lt | {y}_{0}| $. For the numerical method to be stable, it is necessary that $| {y}_{1}| \lt | {y}_{0}| $ as well. However, this is not always guaranteed if $-\lambda ({t}_{0}){\rm{\Delta }}t\gt 2$. For instance, if ${\rm{\Delta }}t=-12/\lambda ({t}_{0})$ and $\lambda ({t}_{0}+{\rm{\Delta }}t)=\lambda ({t}_{0})/2\lt 0$, then ${y}_{1}=(-5/4){y}_{0}$. This means that stability can be lost for sufficiently large variations in $\lambda (t)$ combined with a sufficiently large ${\rm{\Delta }}t$, despite the trapezoidal method being A-stable. This example shows that large variations in the coefficient λ affect the stability of the trapezoidal scheme. In particular, the stability region depends on the values $\lambda ({t}_{0}){\rm{\Delta }}t$ and $\lambda ({t}_{0}+{\rm{\Delta }}t){\rm{\Delta }}t$.
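
This loss of stability is straightforward to reproduce. The sketch below applies one trapezoidal step to the test case (17) with the values quoted in the text (the choice $\lambda ({t}_{0})=-1$ is illustrative; any negative value yields the same amplification factor):

```python
def trapezoidal_step(lam0, lam1, dt, y0):
    """One trapezoidal step for y' = lam(t) * y, with lam0 = lam(t0), lam1 = lam(t0 + dt)."""
    return (1 + 0.5 * dt * lam0) / (1 - 0.5 * dt * lam1) * y0

lam0 = -1.0           # lam(t0) < 0: the exact solution decays (illustrative value)
dt = -12.0 / lam0     # Delta t = -12 / lam(t0)
lam1 = lam0 / 2.0     # lam(t0 + dt) = lam(t0) / 2 < 0

# The amplification factor is (1 - 6) / (1 + 3) = -5/4, so |y1| > |y0|: stability is lost.
y1 = trapezoidal_step(lam0, lam1, dt, 1.0)
assert abs(y1 - (-1.25)) < 1e-14

# With a frozen coefficient lam(t) = lam(t0), the very same step is stable.
assert abs(trapezoidal_step(lam0, lam0, dt, 1.0)) < 1
```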

More generally, the stability region of a Runge–Kutta method depends on how much the quantity $\lambda ({t}_{0}+x{\rm{\Delta }}t){\rm{\Delta }}t$ varies for $x\in [0,1]$. Let us assume that the function $\lambda (t)$ is continuous.5 Continuous dependence on t implies that variations of $\lambda ({t}_{0}+x{\rm{\Delta }}t)$, for $x\in [0,1]$, can be controlled by choosing a sufficiently small ${\rm{\Delta }}t$. In turn, this implies that the smaller ${\rm{\Delta }}t$, the tighter the bound on the variations of the quantity $\lambda ({t}_{0}+x{\rm{\Delta }}t){\rm{\Delta }}t$, so that numerical stability can be recovered.

For the sake of completeness, one should mention that in the nonscalar case there are examples of non-constant matrices ${\bf{B}}(t)$ that satisfy $\sigma ({\bf{B}}(t))\subset {{\mathbb{C}}}^{-}$ for every t, and for which the solution to the IVP

$\frac{{\rm{d}}}{{\rm{d}}t}{\boldsymbol{y}}(t)={\bf{B}}(t)\,{\boldsymbol{y}}(t)

satisfies ${\mathrm{lim}}_{t\to \infty }\parallel {\boldsymbol{y}}(t)\parallel =\infty $ (Josic & Rosenbaum 2008). In general, this happens when the matrix is non-normal and is related to its pseudospectra being dispersed. However, in light of the discussion presented in Section 3.2, it is unlikely that Equation (1) supports this kind of unstable solution.

4. Conversion to Optical Depth

To reduce variations of the propagation matrix ${\bf{K}}$ along the ray path, several authors (e.g., Rees et al. 1989; De la Cruz Rodríguez & Piskunov 2013) suggest rewriting Equation (1) in terms of the optical depth τ. This idea was first exploited to devise numerical schemes for the transfer of unpolarized light, providing significant stability enhancements (see Appendix C).

To transplant Equation (1) to the optical depth regime, one can consider the map $g:[{s}_{0},{s}_{{\rm{f}}}]\to [{\tau }_{0},{\tau }_{{\rm{f}}}]\subset {{\mathbb{R}}}^{+}$, defined as the solution of the IVP6

$\frac{{\rm{d}}}{{\rm{d}}s}g(s)={\eta }_{I}(s),\qquad g({s}_{0})={\tau }_{0},\qquad$ Equation (19)

where7 ${\tau }_{{\rm{f}}}=g({s}_{{\rm{f}}})$. Since ${\eta }_{I}\gt 0$, g is strictly monotone increasing and thus a (differentiable) bijection from $[{s}_{0},{s}_{{\rm{f}}}]$ to $[{\tau }_{0},{\tau }_{{\rm{f}}}]$. Let ${\boldsymbol{Z}}\,:[{\tau }_{0},{\tau }_{{\rm{f}}}]\to {{\mathbb{R}}}^{4}$ be defined by

${\boldsymbol{Z}}(\tau )={\boldsymbol{I}}({g}^{-1}(\tau )).

On the one hand, by direct differentiation,

$\frac{{\rm{d}}}{{\rm{d}}\tau }{\boldsymbol{Z}}(\tau )=\frac{{\rm{d}}{\boldsymbol{I}}}{{\rm{d}}s}({g}^{-1}(\tau ))\,\frac{{\rm{d}}{g}^{-1}}{{\rm{d}}\tau }(\tau )=\frac{1}{{\eta }_{I}({g}^{-1}(\tau ))}\,\frac{{\rm{d}}{\boldsymbol{I}}}{{\rm{d}}s}({g}^{-1}(\tau )).

On the other hand, from Equation (1),

$\frac{{\rm{d}}{\boldsymbol{I}}}{{\rm{d}}s}({g}^{-1}(\tau ))=-{\bf{K}}({g}^{-1}(\tau ))\,{\boldsymbol{I}}({g}^{-1}(\tau ))+{\boldsymbol{\epsilon }}({g}^{-1}(\tau )).

By combining the two equations above, one obtains

$\frac{{\rm{d}}}{{\rm{d}}\tau }{\boldsymbol{Z}}(\tau )=-\tilde{{\bf{K}}}(\tau )\,{\boldsymbol{Z}}(\tau )+\tilde{{\boldsymbol{\epsilon }}}(\tau ),\qquad$ Equation (20)

which is formally equivalent to Equation (1).

The matrix $\tilde{{\bf{K}}}={\bf{K}}/{\eta }_{I}$ satisfies

$\tilde{{\bf{K}}}=\left(\begin{array}{cccc}1 & {\tilde{\eta }}_{Q} & {\tilde{\eta }}_{U} & {\tilde{\eta }}_{V}\\ {\tilde{\eta }}_{Q} & 1 & {\tilde{\rho }}_{V} & -{\tilde{\rho }}_{U}\\ {\tilde{\eta }}_{U} & -{\tilde{\rho }}_{V} & 1 & {\tilde{\rho }}_{Q}\\ {\tilde{\eta }}_{V} & {\tilde{\rho }}_{U} & -{\tilde{\rho }}_{Q} & 1\end{array}\right),$

where, for $i=Q,U,V$,

${\tilde{\eta }}_{i}={\eta }_{i}/{\eta }_{I}\qquad \mathrm{and}\qquad {\tilde{\rho }}_{i}={\rho }_{i}/{\eta }_{I},$

and the modified emission vector is given by $\tilde{{\boldsymbol{\epsilon }}}={\boldsymbol{\epsilon }}/{\eta }_{I}$.

The eigenvalues of the propagation matrix $\tilde{{\bf{K}}}$ can be directly expressed in terms of the eigenvalues of ${\bf{K}}$, namely

${\tilde{\lambda }}_{j}={\lambda }_{j}/{\eta }_{I},\qquad j=1,\ldots ,4.\qquad$ Equation (21)

In light of Equations (6)–(8), the spectral radius $r(\tilde{{\bf{K}}})$ of $\tilde{{\bf{K}}}$ satisfies

$r(\tilde{{\bf{K}}})\leqslant 1+\max ({{\rm{\Lambda }}}_{+},{{\rm{\Lambda }}}_{-})/{\eta }_{I},

and the real part of the eigenvalues of the propagation operator $-\tilde{{\bf{K}}}$ is always nonpositive.

The conversion to optical depth given by Equation (19) freezes the diagonal elements of $\tilde{{\bf{K}}}$ to 1. The variations of its off-diagonal coefficients can be estimated in the following two limiting cases.

Case 1: the continuum absorption is much larger than line processes, i.e., ${k}_{c}\gg {k}_{{\rm{L}}}{\phi }_{I}$, and ${k}_{c}\gg {\eta }_{i},{\rho }_{i}$ for $i=Q,U,V$. In this case, the off-diagonal coefficients of $\tilde{{\bf{K}}}$, and in turn the absolute value of their variations, tend to zero. Equations (21) and (6) imply that the eigenvalues of $\tilde{{\bf{K}}}$ are close to 1.

Case 2: the line absorption dominates over the continuum absorption, i.e., ${k}_{{\rm{L}}}{\phi }_{I}\gg {k}_{c}$. In this case, the variation along the ray path of the off-diagonal coefficients of $\tilde{{\bf{K}}}$ is basically independent of kc and kL. Equations (21) and (6) imply that the eigenvalues of $\tilde{{\bf{K}}}$ are also basically independent of kc and kL and their variations are only due to variations in the profiles given by Equation (3).

However, if the conditions of the two cases presented above are not met, it is not straightforward to infer conclusions on the values of the off-diagonal entries of $\tilde{{\bf{K}}}$, and strong variations in the propagation matrix may still be present (in particular, due to the dependence of the coefficients given by Equation (3) on variations of the magnetic field and of the bulk motions).

In conclusion, the conversion to optical depth usually reduces the amount of fluctuations of the propagation operator $-{\bf{K}}$ along the ray path, but this is not guaranteed in general.
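
Numerically, the conversion amounts to a pointwise rescaling of the propagation matrix. The sketch below (with arbitrary illustrative coefficient values) checks that $\tilde{{\bf{K}}}={\bf{K}}/{\eta }_{I}$ has unit diagonal and that its eigenvalues are those of ${\bf{K}}$ divided by ${\eta }_{I}$:

```python
import numpy as np

def propagation_matrix(eta_I, e, r):
    return np.array([[eta_I,  e[0],  e[1],  e[2]],
                     [e[0],  eta_I,  r[2], -r[1]],
                     [e[1], -r[2],  eta_I,  r[0]],
                     [e[2],  r[1], -r[0],  eta_I]])

eta_I = 5.0
K = propagation_matrix(eta_I, (0.5, -0.2, 1.0), (0.3, 0.1, -0.4))  # illustrative
K_tilde = K / eta_I

# The conversion freezes the diagonal of K_tilde to 1 ...
assert np.allclose(np.diag(K_tilde), 1.0)

# ... and simply rescales the spectrum: eig(K_tilde) = eig(K) / eta_I.
key = lambda z: (round(z.real, 8), z.imag)
ev = sorted(np.linalg.eigvals(K), key=key)
ev_tilde = sorted(np.linalg.eigvals(K_tilde), key=key)
assert np.allclose(np.array(ev) / eta_I, ev_tilde)
```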

5. Numerical Conversion to Optical Depth

To apply a numerical scheme to Equation (20), it is necessary to have some knowledge of the function g. From Equation (19), one has

$g(s)={\tau }_{0}+{\int }_{{s}_{0}}^{s}{\eta }_{I}(t)\,{\rm{d}}t,\qquad$ Equation (22)

and numerical approximations of g can be obtained by replacing the integral with a numerical quadrature. It is absolutely crucial that this numerical approximation is strictly monotone increasing because one needs to access the values of its inverse ${g}^{-1}$. Replacing g with a numerical approximation could negatively affect the order of the method employed to solve the IVP (20). Janett et al. (2017a) explain that high-order solvers require a corresponding high-order numerical approximation of the integral in Equation (22). A very common approach to devise numerical quadrature rules is to replace integrands with interpolants that are successively integrated exactly. For instance, Gauss, Radau, Hermite, and Clenshaw–Curtis quadratures are based on this idea.

Here are some more concrete examples. The trapezoidal rule applied to Equation (22) reads

$g(s)\approx {\tau }_{0}+\frac{s-{s}_{0}}{2}\left[{\eta }_{I}({s}_{0})+{\eta }_{I}(s)\right].

This quadrature is based on a linear interpolation of ${\eta }_{I}$ through the points $\{{s}_{0},s\}$ and is second-order accurate. Higher-order monotone quadrature schemes can be obtained by replacing linear interpolation with higher-order monotone interpolants.

A concrete high-order example is given by the monotone cubic Hermite quadrature, that, applied to Equation (22), leads to

$g(s)\approx {\tau }_{0}+\frac{s-{s}_{0}}{2}\left[{\eta }_{I}({s}_{0})+{\eta }_{I}(s)\right]+\frac{{(s-{s}_{0})}^{2}}{12}\left[{\tilde{\eta }}_{I}^{{\prime} }({s}_{0})-{\tilde{\eta }}_{I}^{{\prime} }(s)\right],

where ${\tilde{\eta }}_{I}^{{\prime} }$ are suitable numerical approximations (of the first derivative ${\eta }_{I}^{{\prime} }$) that guarantee monotonicity. The approximation above is fourth-order accurate provided that the approximation ${\tilde{\eta }}_{I}^{{\prime} }\approx {\eta }_{I}^{{\prime} }$ is at least of second order (Dougherty et al. 1989). The approximation ${\tilde{\eta }}_{I}^{{\prime} }$ described by Steffen (1990) satisfies both conditions, whereas the one described by Fritsch & Butland (1984) guarantees monotonicity, but it is second-order accurate on uniform grids only (it drops to first-order on non-uniform grids).
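
The accuracy gap between the two quadratures can be verified on a smooth absorption coefficient. The sketch below uses exact derivatives ${\eta }_{I}^{{\prime} }$ in place of Steffen's monotone estimates, purely to isolate the order of accuracy; the choice ${\eta }_{I}(s)={e}^{s}$ is illustrative only:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n subintervals (second-order accurate)."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def hermite_corrected(f, df, a, b, n):
    """Composite corrected trapezoidal rule, i.e., exact integration of the cubic
    Hermite interpolant; the end corrections telescope to h^2/12 * (f'(a) - f'(b))."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) + h**2 / 12.0 * (df(a) - df(b))

# Illustrative absorption coefficient eta_I(s) = exp(s) on [0, 1]; exact integral e - 1.
exact = math.e - 1.0
err_trap = abs(trapezoid(math.exp, 0.0, 1.0, 8) - exact)
err_herm = abs(hermite_corrected(math.exp, math.exp, 0.0, 1.0, 8) - exact)

assert err_herm < err_trap / 100.0  # fourth order clearly beats second order
```

With derivative estimates that are only second-order accurate (e.g., Steffen 1990), the same telescoped correction retains fourth-order convergence while preserving monotonicity.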

Hermite interpolation is not the only option for higher-order monotone interpolation schemes. For instance, Auer (2003) and De la Cruz Rodríguez & Piskunov (2013) prefer to employ monotonic quadratic Bézier splines. However, the high-order convergence of Bézier interpolations is achieved only when the Bézier interpolants are forced to be identical to the corresponding degree Hermite interpolants, which do not guarantee monotonicity.

Finally, when the atmospheric model is exponentially stratified along the ray path, Mihalas (1978) suggests replacing ${\eta }_{I}$ with the exponential function8

${\eta }_{I}(s)\approx {\eta }_{I}({s}_{0}){e}^{\alpha (s-{s}_{0})},$

with

$\alpha =\dfrac{\mathrm{ln}\,{\eta }_{I}({s}_{1})-\mathrm{ln}\,{\eta }_{I}({s}_{0})}{{s}_{1}-{s}_{0}}.$

After this substitution, Equation (22) can be integrated exactly. The resulting map reads

$g(s)\approx \dfrac{{\eta }_{I}({s}_{0})}{\alpha }\left[{e}^{\alpha (s-{s}_{0})}-1\right].$
The error introduced by this substitution is bounded by

and its accuracy clearly depends on the suitability of the exponential modeling.
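The piecewise exponential model of footnote 8 can be sketched as follows (a minimal sketch; the function name g_exponential is ours, and the decay rate on each cell is fitted from the two endpoint values):

```python
import numpy as np

def g_exponential(s, eta_I):
    """Piecewise-exponential quadrature for g: on each cell, eta_I is modeled
    as eta_I(s_i) * exp(alpha_i (s - s_i)) with alpha_i fitted from the two
    endpoint values; the cell integral is then available in closed form."""
    s, eta_I = np.asarray(s, float), np.asarray(eta_I, float)
    h = np.diff(s)
    alpha = np.log(eta_I[1:] / eta_I[:-1]) / h
    with np.errstate(divide="ignore", invalid="ignore"):
        dg = np.where(np.abs(alpha) * h > 1e-12,
                      eta_I[:-1] * np.expm1(alpha * h) / alpha,
                      eta_I[:-1] * h)      # alpha -> 0 limit: constant eta_I
    return np.concatenate(([0.0], np.cumsum(dg)))

# an exactly exponential stratification is integrated to machine precision
s = np.linspace(0.0, 1.0, 5)
g = g_exponential(s, 3.0 * np.exp(-2.0 * s))
```

When the stratification is exactly exponential, the quadrature is exact regardless of the grid coarseness; otherwise, its accuracy depends on how well the exponential model fits.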

6. Pragmatic Formal Solver

In practical applications, the propagation matrix ${\bf{K}}$ and the emission vector ${\boldsymbol{\epsilon }}$ are known only at a discrete set of usually nonequidistant grid points ${\{{s}_{i}\}}_{i=1}^{N}$. In such an instance, one aims at computing a numerical solution of Equation (1) that is first of all physically meaningful (i.e., stable) and that, second, has a good ratio between accuracy and computational cost. Ideally, the numerical method should rely as much as possible on the provided values of ${\bf{K}}$ and ${\boldsymbol{\epsilon }}$, although these functions can be evaluated at other depths s via interpolation, if necessary. This interpolation must be sufficiently accurate to preserve the order of convergence of the numerical scheme used for numerical integration (Janett et al. 2017b).

Since stiffness is a transient behavior, the analysis presented in the previous sections suggests considering the intervals $[{s}_{i},{s}_{i+1}]$ one at a time, sequentially. In each interval, depending on the cell width ${\rm{\Delta }}s$ and on the magnitude of the eigenvalues of ${\bf{K}}$ at si and ${s}_{i+1}$, the approximation of ${\boldsymbol{I}}({s}_{i+1})$ is computed with either an explicit method ${{\boldsymbol{\Psi }}}^{E}$ (which is computationally inexpensive), an A-stable method ${{\boldsymbol{\Psi }}}^{A}$, or an L-stable method ${{\boldsymbol{\Psi }}}^{L}$. Preferably, the methods ${{\boldsymbol{\Psi }}}^{E}$ and ${{\boldsymbol{\Psi }}}^{A}$ should be of the same order, while the method ${{\boldsymbol{\Psi }}}^{L}$ may be of lower order, because large attenuations usually prevent any propagation of information.

The following criteria can help in choosing between ${{\boldsymbol{\Psi }}}^{E}$, ${{\boldsymbol{\Psi }}}^{A}$, or ${{\boldsymbol{\Psi }}}^{L}$: (i) if the absolute value of the real part of the eigenvalues multiplied by the cell width is large, one should use ${{\boldsymbol{\Psi }}}^{L}$ to guarantee the correct exponential attenuation of the Stokes vector. Otherwise, (ii) the method ${{\boldsymbol{\Psi }}}^{E}$ is used whenever stable (to reduce computational cost), and (iii) if ${{\boldsymbol{\Psi }}}^{E}$ is not stable, one uses ${{\boldsymbol{\Psi }}}^{A}$ with the optional conversion to optical depth if ${{\boldsymbol{\Psi }}}^{A}$ loses stability due to the variations of the eigenvalues in the interval $[{s}_{i},{s}_{i+1}]$.

For example, this strategy can be implemented using Heun's method (which is also known as the explicit trapezoidal rule and has order 2) as ${{\boldsymbol{\Psi }}}^{E}$, the implicit trapezoidal rule (which also has order 2) as ${{\boldsymbol{\Psi }}}^{A}$, and the implicit Euler method (which has order 1) as ${{\boldsymbol{\Psi }}}^{L}$. These methods employ ${\bf{K}}$ and ${\boldsymbol{\epsilon }}$ at grid points only, avoiding the interpolation of quantities at off-grid points. Computing the eigenvalues of ${\bf{K}}$ at a point s is roughly one-third as expensive9 as one step of ${{\boldsymbol{\Psi }}}^{E}$, whereas ${{\boldsymbol{\Psi }}}^{A}$ is roughly twice as expensive as ${{\boldsymbol{\Psi }}}^{E}$. The implicit Euler method is less expensive than ${{\boldsymbol{\Psi }}}^{A}$, but more expensive than ${{\boldsymbol{\Psi }}}^{E}$. A second-order L-stable method would be at least as expensive as ${{\boldsymbol{\Psi }}}^{A}$, but since L-stability is only required when large exponential attenuations are present, one can opt for a lower-order scheme.
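One cell of this switching strategy can be sketched as follows; this is a minimal sketch, assuming the sign convention ${\rm{d}}{\boldsymbol{I}}/{\rm{d}}s=-{\bf{K}}{\boldsymbol{I}}+{\boldsymbol{\epsilon }}$ for Equation (1), with the function name pragmatic_step ours and the thresholds taken from the example values quoted later in this section:

```python
import numpy as np

def pragmatic_step(I, K0, K1, e0, e1, h):
    """One cell [s_i, s_{i+1}] of a switching formal solver (sketch).

    Chooses Heun's method (explicit), the implicit trapezoidal rule
    (A-stable), or the implicit Euler method (L-stable) from the
    eigenvalues of -K at the cell endpoints."""
    z = h * np.concatenate([np.linalg.eigvals(-K0), np.linalg.eigvals(-K1)])
    if np.min(z.real) < -7.0:
        # strong attenuation: L-stable implicit Euler step
        return np.linalg.solve(np.eye(4) + h * K1, I + h * e1), "L"
    phi = np.max(np.abs(1.0 + z + 0.5 * z**2))    # Heun stability function
    if phi < 0.6 or np.min(z.real) > -1e-3:
        # explicit step is safe and cheap
        k1 = -K0 @ I + e0
        k2 = -K1 @ (I + h * k1) + e1
        return I + 0.5 * h * (k1 + k2), "E"
    # otherwise fall back to the A-stable implicit trapezoidal rule
    rhs = (np.eye(4) - 0.5 * h * K0) @ I + 0.5 * h * (e0 + e1)
    return np.linalg.solve(np.eye(4) + 0.5 * h * K1, rhs), "A"

# an optically thick cell forces the L-stable branch; no overshoot occurs
I0 = np.array([1.0, 0.0, 0.0, 0.0])
I1, method = pragmatic_step(I0, 100.0 * np.eye(4), 100.0 * np.eye(4),
                            100.0 * I0, 100.0 * I0, 1.0)
```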

To assess the stability of Heun's method ${{\boldsymbol{\Psi }}}^{E}$, one should verify that

$| {\phi }_{{{\boldsymbol{\Psi }}}^{E}}(\lambda {\rm{\Delta }}s)| =\left|1+\lambda {\rm{\Delta }}s+\dfrac{{(\lambda {\rm{\Delta }}s)}^{2}}{2}\right|\leqslant 1$

for the eigenvalues λ of $-{\bf{K}}$ at si and ${s}_{i+1}$.
However, it is worth distinguishing the cases when $| {\phi }_{{{\boldsymbol{\Psi }}}^{E}}| $ is close to 1: if $\lambda {\rm{\Delta }}s$ is close to 0, ${{\boldsymbol{\Psi }}}^{E}$ can be trusted; however, if $\lambda {\rm{\Delta }}s$ is close to the boundary of the stability domain away from 0, it is advisable to switch to ${{\boldsymbol{\Psi }}}^{A}$, because ${{\boldsymbol{\Psi }}}^{E}$ may suffer from instability. To verify the stability of ${{\boldsymbol{\Psi }}}^{A}$ and decide whether to opt for the conversion to optical depth, one can repeat the same argument used for ${{\boldsymbol{\Psi }}}^{E}$ but using Formula (18) instead of ${\phi }_{{{\boldsymbol{\Psi }}}^{E}}$.
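The region in which ${{\boldsymbol{\Psi }}}^{E}$ can be trusted is easily probed numerically; a minimal sketch for real $z=\lambda {\rm{\Delta }}s$ follows (the full stability domain is of course a region of the complex plane):

```python
import numpy as np

# scan the real axis for the region where Heun's stability function
# phi(z) = 1 + z + z^2/2 satisfies |phi(z)| <= 1
z = np.linspace(-3.0, 1.0, 4001)
phi = np.abs(1.0 + z + 0.5 * z**2)
stable = z[phi <= 1.0]
# on the real axis the stability interval is [-2, 0]
```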

A practical example is given by Figure 3, which shows the evolution of the approximate Stokes vector for the Fe i line at 6301.50 Å computed with a FALC atmospheric model (Fontenla et al. 1993) supplemented with a constant magnetic field.10 The different rows refer to computational grids of increasing refinement, and the approximate solution is computed with the pragmatic numerical scheme suggested above.

Figure 3. Each row displays the evolution of the Stokes components along the vertical direction for the Fe i line at 6301.50 Å in the proximity of the line core frequency. The Stokes profiles have been computed using a FALC model atmosphere on a sequence of increasingly refined grids. The approximated solution is calculated using the pragmatic approach described in Section 6. The black line depicts the reference solution, which has been computed with ${{\boldsymbol{\Psi }}}^{L}$ on a very fine grid. The initial condition is ${\boldsymbol{I}}={(2.121\times {10}^{-7},0,0,0)}^{\top }$. Different dot colors correspond to integrations with a different method. Blue dots indicate the use of ${{\boldsymbol{\Psi }}}^{E}$, yellow dots of ${{\boldsymbol{\Psi }}}^{A}$, orange dots of ${{\boldsymbol{\Psi }}}^{A}$ with conversion to optical depth, and purple dots of ${{\boldsymbol{\Psi }}}^{L}$. The algorithm switches between the different methods depending on the stability criteria. The use of ${{\boldsymbol{\Psi }}}^{E}$ increases with the refinement of the grid.


The method ${{\boldsymbol{\Psi }}}^{L}$ is used if, at si or ${s}_{i+1}$, there is an eigenvalue whose real part is $\lt -7/{\rm{\Delta }}s$. The method ${{\boldsymbol{\Psi }}}^{E}$ is used if $| {\phi }_{{{\boldsymbol{\Psi }}}^{E}}| \lt 0.6$ or if the real parts of the eigenvalues at both si and ${s}_{i+1}$ are $\gt -{10}^{-3}$. The method ${{\boldsymbol{\Psi }}}^{A}$ is converted to optical depth if $| {\phi }_{{{\boldsymbol{\Psi }}}^{A}}| \gt 0.8$. These parameters are not an ultimate choice, but they provide a concrete example; repeating the experiments with similar parameter choices delivers similar results. The reference solution is computed using the implicit Euler method on a grid of 9999 points.

The experiments show that the pragmatic strategy effectively switches among the methods, delivering physically meaningful approximations independently of the coarseness of the grid. As predicted by the analysis, the use of ${{\boldsymbol{\Psi }}}^{L}$ (purple dots) decreases with the refinement of the grid: it is replaced by ${{\boldsymbol{\Psi }}}^{A}$ (yellow and orange dots), which is in turn replaced by ${{\boldsymbol{\Psi }}}^{E}$ (blue dots). Table 2 summarizes the use (in percent) of ${{\boldsymbol{\Psi }}}^{E}$, ${{\boldsymbol{\Psi }}}^{A}$ (without and with conversion to optical depth), and ${{\boldsymbol{\Psi }}}^{L}$ for each grid. These values have been rounded to the second digit.

Table 2.  Statistic of the Pragmatic Strategy

# of Grid Points ${{\boldsymbol{\Psi }}}^{E}$ ${{\boldsymbol{\Psi }}}^{A}$ a ${{\boldsymbol{\Psi }}}^{A}$ b ${{\boldsymbol{\Psi }}}^{L}$
40 36% 23% 23% 18%
70 38% 30% 25% 7%
140 43% 27% 28% 2%
200 51% 20% 29% 0%

Notes.

a Without conversion to optical depth.
b With conversion to optical depth.


Although not shown in full, it must be pointed out that the use of ${{\boldsymbol{\Psi }}}^{L}$ is necessary to deal with the stiffness of optically thick cells. This is partly visible in the fourth row of Figure 3, where $U$ shows overshoots. A similar numerical experiment based only on ${{\boldsymbol{\Psi }}}^{E}$ and ${{\boldsymbol{\Psi }}}^{A}$ presents oscillations in the spatial region $[-0.12\times {10}^{5},2.5\times {10}^{5}]$ if the grid is too coarse.

For comparison, Figure 4 shows the numerical evolution of the Stokes vector when it is computed relying solely on ${{\boldsymbol{\Psi }}}^{E}$. With 140 points, this numerical solution is completely spurious because of numerical instability. With 200 points, the result is physically correct only after a certain depth: in the depth region $[-0.12\times {10}^{5},2.5\times {10}^{5}]$, the numerical solution oscillates wildly and the relative error with respect to the reference solution is of the order of ${10}^{6}$.

Figure 4. Repetition of the experiment from Figure 3, but using solely Heun's method to approximate the evolution of the Stokes components. Calculations are clearly affected by numerical instabilities.


Finally, using the bound (9) on the spectral radius, instead of computing the eigenvalues, to decide which method to employ delivers similar results and is (slightly) cheaper computationally.

7. Supplemental Remarks

This section provides two additional considerations concerning the stability of the formal solution of the polarized radiative transfer.

7.1. Stability of DELO Methods

DELO methods belong to the class of exponential integrators: aiming at removing stiffness from the problem, the DELO strategy analytically integrates the diagonal elements of the propagation matrix (Guderley & Hsu 1972). Rees et al. (1989) first proposed the application of this technique to Equation (1), which has been very successful thanks to its stability properties. For this reason, the DELO strategy has since been chosen to develop higher-order methods: e.g., the DELO-parabolic (Murphy 1990; Janett et al. 2017a) and the DELO-Bézier (De la Cruz Rodríguez & Piskunov 2013) methods. DELO methods are currently widespread for the numerical evaluation of Equation (1).

The DELO strategy relies on the spatial scale conversion given by Equation (22) (which potentially introduces numerical errors) and it deals with the modified propagation matrix ${\mathscr{K}}={\bf{K}}/{\eta }_{I}-{\bf{1}}$, where ${\bf{1}}$ represents the 4 × 4 identity matrix. The stability functions of the DELO-linear and the DELO-parabolic methods satisfy condition (15). When the norm of the matrix ${\mathscr{K}}$ tends to zero (e.g., for a diagonal matrix ${\bf{K}}$), DELO methods tend to A-stability (Janett et al. 2017a) and, consequently, to L-stability. This fact explains the usual good performance of the DELO-linear method when dealing with very coarse grids and suggests its suitability as the L-stable method ${{\boldsymbol{\Psi }}}^{L}$ in the pragmatic formal solver described in the previous section.
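For reference, a single DELO-linear step can be sketched as follows; this is a sketch under the stated conventions (${\eta }_{I}$ taken as the diagonal element of ${\bf{K}}$, linear interpolation of the effective source term over the cell), with the function name delo_linear_step ours:

```python
import numpy as np

def delo_linear_step(I, K0, K1, eps0, eps1, dtau):
    """Single-cell DELO-linear step (after Rees et al. 1989); a sketch.

    dtau is the optical-depth width of the cell.  The effective source
    S - K' I, with K' = K/eta_I - 1, is interpolated linearly and the
    resulting integral is evaluated analytically."""
    eta0, eta1 = K0[0, 0], K1[0, 0]         # eta_I at the two endpoints
    Kp0 = K0 / eta0 - np.eye(4)             # modified propagation matrix K'
    Kp1 = K1 / eta1 - np.eye(4)
    S0, S1 = eps0 / eta0, eps1 / eta1       # source vectors
    E = np.exp(-dtau)
    alpha = (dtau - 1.0 + E) / dtau         # weight of the upper node
    beta = (1.0 - E * (1.0 + dtau)) / dtau  # weight of the lower node
    rhs = (E * np.eye(4) - beta * Kp0) @ I + beta * S0 + alpha * S1
    return np.linalg.solve(np.eye(4) + alpha * Kp1, rhs)

# diagonal K (vanishing K'): the scalar attenuation is reproduced exactly
K = 2.0 * np.eye(4)
eps = np.array([2.0, 0.0, 0.0, 0.0])
I1 = delo_linear_step(np.zeros(4), K, K, eps, eps, 2.0)
```

Note that for a diagonal ${\bf{K}}$ the modified matrix ${\mathscr{K}}$ vanishes and the step reduces to the exact scalar relation ${I}_{1}=E\,{I}_{0}+(1-E)\,S$, consistent with the limiting L-stability discussed above.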

7.2. Oscillations in the Evolution Operator

Here, a preliminary remark is required. When presenting the fourth-order A-stable cubic Hermitian method, Bellot Rubio et al. (1998) point to the improper sampling of the oscillations in the evolution operator elements as a reason for instability and inaccuracy. In particular, they investigate the case of strong lines, where the cubic Hermitian method fails to reliably reproduce the emergent Q and U Stokes components on coarse spatial grids. In light of the stability analysis of Section 3, some considerations can be made.

One starts by identifying the origin of the oscillations in the evolution operator elements. The formal solution of Equation (1) in terms of the evolution operator reads

${\boldsymbol{I}}(s)={\bf{O}}(s,{s}_{0}){\boldsymbol{I}}({s}_{0})+{\int }_{{s}_{0}}^{s}{\bf{O}}(s,t){\boldsymbol{\epsilon }}(t)\,{dt},$

where ${\bf{O}}$ is a 4 × 4 matrix. Under the assumption of a constant propagation matrix ${\bf{K}}$ in the layer $[{s}_{0},s]$, the evolution operator can be written as

${\bf{O}}(s,{s}_{0})={e}^{-(s-{s}_{0}){\bf{K}}}.$

If either ${{\rm{\Lambda }}}_{+}\ne 0$ or ${{\rm{\Lambda }}}_{-}\ne 0$, the evolution operator can be decomposed as

${\bf{O}}(s,{s}_{0})={\sum }_{i=1}^{4}{e}^{{\lambda }^{(i)}(s-{s}_{0})}{{\bf{N}}}_{i},$

where ${\lambda }^{(i)}$ are given by Equation (6) and ${{\bf{N}}}_{i}$ are known 4 × 4 matrices (see Appendix 5 of Landi Degl'Innocenti & Landolfi 2004). From Equation (6), one recognizes that

${\lambda }^{(1,2)}=-{\eta }_{I}\pm {{\rm{\Lambda }}}_{+}\in {\mathbb{R}},\qquad {\lambda }^{(3,4)}=-{\eta }_{I}\pm i\,{{\rm{\Lambda }}}_{-}\in {\mathbb{C}}.$
The imaginary part of the eigenvalues ${\lambda }^{(3)}$ and ${\lambda }^{(4)}$ induces sinusoidal oscillations in the evolution operator elements, which correspond to the radiative transfer phenomena known as Faraday rotation and Faraday pulsation. These oscillations have spatial frequency ${{\rm{\Lambda }}}_{-}$, a factor dominated by the anomalous dispersion coefficients (see Table 1). In fact, a strong anomalous dispersion vector ${\boldsymbol{\rho }}$ induces high-frequency oscillations in the evolution operator elements. In particular, the component ${\rho }_{V}$ causes a rotation of the direction of maximum linear polarization, whereas the components ${\rho }_{Q}$ and ${\rho }_{U}$ induce a transformation from linear (circular) to circular (linear) polarization (Landi Degl'Innocenti & Landolfi 2004).
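This spectral structure is easy to verify numerically; the following sketch assumes the standard layout of the propagation matrix (Landi Degl'Innocenti & Landolfi 2004) and a strong ${\rho }_{V}$ with ${\boldsymbol{\eta }}\cdot {\boldsymbol{\rho }}=0$:

```python
import numpy as np

def propagation_matrix(eta_I, eta, rho):
    """Standard 4x4 propagation matrix (Landi Degl'Innocenti & Landolfi 2004);
    eta = (eta_Q, eta_U, eta_V) and rho = (rho_Q, rho_U, rho_V)."""
    eQ, eU, eV = eta
    rQ, rU, rV = rho
    return np.array([[eta_I, eQ,    eU,    eV],
                     [eQ,    eta_I, rV,   -rU],
                     [eU,   -rV,    eta_I, rQ],
                     [eV,    rU,   -rQ,   eta_I]])

# strong anomalous dispersion: the spectrum contains a complex-conjugate
# pair whose imaginary part Lambda_- sets the oscillation frequency
K = propagation_matrix(1.0, (0.1, 0.0, 0.0), (0.0, 0.0, 5.0))
lam = np.linalg.eigvals(K)
```

For these values, ${{\rm{\Lambda }}}_{-}=\sqrt{24.99}\approx 5$, i.e., the oscillation frequency is dominated by the anomalous dispersion coefficient ${\rho }_{V}$.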

The presence of high-frequency oscillations alone is not a sufficient condition for instability when using an A-stable method. The additional requirement is the variation of the propagation matrix ${\bf{K}}$ along the integration path. Moreover, the stability improvement when using denser spatial grids noted by Bellot Rubio et al. (1998) is not due to the proper sampling of the oscillations in the evolution operator, but to the reduction of large variations of the propagation matrix ${\bf{K}}$ between consecutive grid points.

An illustrative example (not shown here) is given by the numerical evaluation of the formal solution when considering a constant propagation matrix with strong anomalous dispersion coefficients. In this case, an A-stable formal solver (e.g., the trapezoidal method) integrates Equation (1) without any instability issue.

8. Conclusions

This paper presents the stability analysis of the numerical integration of the radiative transfer equation for polarized light. The main aim is to better understand the specific situations where instability issues appear and how to deal with them, rather than to prescribe an ultimate stability criterion. This knowledge can be used to devise more robust formal solvers.

The first part focuses on the propagation matrix, identifying different structural properties, such as normality, diagonalizability, spectrum, and spectral radius.

The second part studies the stability properties of Runge–Kutta methods applied to Equation (1). Particular attention is paid to the assumptions and the limitations of the stability analysis, emphasizing their relevance in the formal solution for polarized light. Special care is taken to better understand the role of spatial variations of the propagation matrix.

It is shown that the conversion to the optical depth spatial scale, defined by Equation (19), usually mitigates the variation of the propagation matrix elements along the integration path. Appendix C shows that numerical instabilities due to variations in the eigenvalues are a concrete problem only when dealing with polarized light. In the scalar case, the conversion to optical depth cancels the variation of the unique eigenvalue along the ray path. An entire section is dedicated to the numerical conversion to optical depth based on Equation (22). This approximation introduces numerical errors that could lead to a reduced order of accuracy of the formal solver. In practice, high-order formal solvers require a correspondingly high-order numerical evaluation of the integral in Equation (22).

Finally, the structure of a pragmatic formal solver is given in terms of a switching technique. This numerical scheme chooses between different numerical methods at each step of the integration. It uses an inexpensive explicit method as long as the integration of the ODE is not limited by stability requirements, and it switches to an implicit method when stiffness appears. In optically thick cells, the method switches to an L-stable method to correctly replicate exponential attenuations and to avoid numerical oscillations. The criterion for the switching is based either on the eigenvalues or on the spectral radius of the propagation matrix ${\bf{K}}$. The numerical tests are promising: the pragmatic strategy effectively switches among the methods and delivers physically meaningful approximations independently of the coarseness of the grid.

It is important to point out that the stability results presented in this work rely on assuming that, prior to discretization, the propagation matrix ${\bf{K}}$ and the emission vector ${\boldsymbol{\epsilon }}$ are continuous functions. The effective performance of the pragmatic method on discontinuous atmospheric models remains to be explored. However, a switching technique based on choosing numerical methods depending on the local smoothness of the input data might be suitable to face discontinuities and high gradients.

The financial support by the Swiss National Science Foundation (SNSF) through grant ID 200021_159206/1 is gratefully acknowledged. The work of Alberto Paganini was partly supported by the EPSRC Grant EP/M011151/1. Special thanks are extended to L. Belluzzi and O. Steiner for reading and commenting on the paper. The authors also are grateful to the anonymous referee for providing valuable comments that helped to improve the article.

Appendix A: (Non-)Normality of the Propagation Matrix

A real square matrix ${\bf{A}}$ is normal if

Equation (23): ${\bf{A}}{{\bf{A}}}^{\top }={{\bf{A}}}^{\top }{\bf{A}},$

and this is true, e.g., for symmetric or skew-symmetric matrices. Normal matrices are diagonalizable and their spectrum is stable with respect to small perturbations of the matrix components (Trefethen & Embree 2005).

The propagation matrix given by Equation (2) satisfies

${\bf{K}}{{\bf{K}}}^{\top }-{{\bf{K}}}^{\top }{\bf{K}}=2\left(\begin{array}{cc}0 & {({\boldsymbol{\eta }}\times {\boldsymbol{\rho }})}^{\top }\\ {\boldsymbol{\eta }}\times {\boldsymbol{\rho }} & {\bf{0}}\end{array}\right),\qquad {\boldsymbol{\eta }}={({\eta }_{Q},{\eta }_{U},{\eta }_{V})}^{\top },\quad {\boldsymbol{\rho }}={({\rho }_{Q},{\rho }_{U},{\rho }_{V})}^{\top },$

and hence it does not satisfy the normality condition in general. In particular, ${\bf{K}}$ is normal if and only if

${\boldsymbol{\eta }}\times {\boldsymbol{\rho }}={\bf{0}}.$
In the cases of vanishing dichroism effects, i.e., η = 0, or of vanishing anomalous dispersion effects, i.e., ρ = 0, one recognizes that the matrix satisfies the normality condition.
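The normality criterion is straightforward to check numerically; the following sketch assumes the standard layout of the propagation matrix and compares a case with ${\boldsymbol{\eta }}\parallel {\boldsymbol{\rho }}$ to a generic one:

```python
import numpy as np

def propagation_matrix(eta_I, eta, rho):
    """Standard 4x4 propagation matrix layout; eta = (eta_Q, eta_U, eta_V),
    rho = (rho_Q, rho_U, rho_V)."""
    eQ, eU, eV = eta
    rQ, rU, rV = rho
    return np.array([[eta_I, eQ,    eU,    eV],
                     [eQ,    eta_I, rV,   -rU],
                     [eU,   -rV,    eta_I, rQ],
                     [eV,    rU,   -rQ,   eta_I]])

def normality_defect(K):
    """Frobenius norm of K K^T - K^T K; zero exactly when K is normal."""
    return np.linalg.norm(K @ K.T - K.T @ K)

K_parallel = propagation_matrix(1.0, (0.2, 0.0, 0.0), (0.5, 0.0, 0.0))  # eta x rho = 0
K_generic = propagation_matrix(1.0, (0.2, 0.0, 0.0), (0.0, 0.0, 0.5))   # eta x rho != 0
```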

Appendix B: Pseudospectrum

While the spectrum of a matrix is the set of its eigenvalues, the pseudospectrum also depends on an additional parameter, a small number $\epsilon \gt 0$; one therefore usually refers to the $\epsilon$-pseudospectrum. The $\epsilon$-pseudospectrum ${{\rm{\Lambda }}}_{\epsilon }$ of a square matrix ${\bf{A}}$ is defined by (Trefethen & Embree 2005)

${{\rm{\Lambda }}}_{\epsilon }({\bf{A}})=\{z\in {\mathbb{C}}:\ z\in {\rm{\Lambda }}({\bf{A}}+{\bf{E}})\ \mathrm{for}\ \mathrm{some}\ {\bf{E}}\ \mathrm{with}\ \parallel {\bf{E}}\parallel \leqslant \epsilon \},$
i.e., it is the set of $z\in {\mathbb{C}}$ that are eigenvalues of some matrix ${\bf{A}}+{\bf{E}}$ with $\parallel {\bf{E}}\parallel \leqslant \epsilon $. Here, the matrix ${\bf{E}}$ acts as a small perturbation of ${\bf{A}}$. Therefore, the pseudospectrum ${{\rm{\Lambda }}}_{\epsilon }({\bf{A}})$ always contains the $\epsilon$-neighborhood of the spectrum ${\rm{\Lambda }}({\bf{A}})$. If one perturbs a normal matrix by a perturbation of operator norm at most $\epsilon$, the spectrum moves by at most $\epsilon$. For a non-normal matrix, however, the pseudospectrum may be widely dispersed, and a small perturbation may move the spectrum significantly.

To better understand the concept of pseudospectra, one usually produces approximate pictures of the boundaries of ${{\rm{\Lambda }}}_{\epsilon }({\bf{A}})$ for various values of $\epsilon$. One option is to modify ${\bf{A}}$ by small random perturbations and to look at the spectra of the perturbed matrices. The standard algorithm, instead, evaluates the smallest singular value11 of the matrix ${\bf{B}}=z{\bf{1}}-{\bf{A}}$ for different values of z on a grid in the complex plane and then generates a contour plot from these data (Trefethen 1999): the $\epsilon$-pseudospectrum is the region where this singular value is at most $\epsilon$. Different explicit examples are given in Figure 2.
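The grid-based algorithm can be sketched as follows (a minimal sketch; the function name pseudospectrum_levels is ours):

```python
import numpy as np

def pseudospectrum_levels(A, re, im):
    """Smallest singular value of z 1 - A on a complex grid; the
    eps-pseudospectrum is the sublevel set where this value is <= eps."""
    n = A.shape[0]
    sig = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            B = (x + 1j * y) * np.eye(n) - A
            sig[i, j] = np.linalg.svd(B, compute_uv=False)[-1]
    return sig  # contour this array at levels eps to draw the boundaries

# sanity check on a normal matrix: sigma_min equals the distance to the spectrum
A = np.diag([1.0, 2.0])
sig = pseudospectrum_levels(A, re=np.array([0.0, 1.0]), im=np.array([0.0]))
```

For a normal matrix, the computed values coincide with the distance from z to the spectrum; for a non-normal matrix, they can be much smaller, revealing the dispersed pseudospectrum.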

These pictures are particularly useful for stability analysis. Higham & Trefethen (1993) conclude that numerical instability around t in Equation (10) occurs when the pseudospectra of the frozen matrix ${\rm{\Delta }}t{\bf{A}}$ fail to fit into the stability region of the numerical method. Therefore, one usually analyzes the stability of a numerical method by overplotting its stability region and the boundaries of ${{\rm{\Lambda }}}_{\epsilon }({\rm{\Delta }}t{\bf{A}})$ for different values of $\epsilon$, as presented in Figure 2. Note that the stability condition based on pseudospectra is more conservative than the one based on eigenvalues.

Appendix C: Stability for Scalar Formal Solutions

The transfer of unpolarized light is described by the first-order inhomogeneous scalar ODE (Mihalas 1978)

Equation (24): $\dfrac{{\rm{d}}}{{\rm{d}}s}I(s)=-{\eta }_{I}(s)I(s)+\epsilon (s),$

where I is the specific intensity, ${\eta }_{I}$ is the absorption coefficient, and $\epsilon$ is the emissivity. To simplify the notation, the frequency dependence of these quantities is omitted.

In terms of the optical depth τ, Equation (24) reads

Equation (25): $\dfrac{{\rm{d}}}{{\rm{d}}\tau }Z(\tau )=-Z(\tau )+S(\tau ),$

where $S=\epsilon /{\eta }_{I}$ is the so-called source function, g is defined in Equation (19), and $Z(\tau )=I({g}^{-1}(\tau ))$.

The fundamental difference between Equations (24) and (25) is that in the latter the linear coefficient is constant (and equal to −1), whereas the absorption coefficient ${\eta }_{I}$ in Equation (24) depends on s. This implies that to devise a stable numerical scheme for (25) it is sufficient to follow the discussion presented in Section 3.1, whereas for (24) one needs to take into account the variations of ${\eta }_{I}$ along the ray path, see Section 3.3. In particular, it is sufficient to employ an A-stable Runge–Kutta method to compute stable numerical solutions to Equation (25), whereas this may not be sufficient for Equation (24).
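This unconditional stability on the τ-scale is illustrated by the following minimal sketch of the implicit trapezoidal rule applied to Equation (25) (the function name trapezoidal_tau is ours):

```python
import numpy as np

def trapezoidal_tau(Z0, S, dtau):
    """Implicit trapezoidal rule applied to Equation (25), dZ/dtau = -Z + S.
    The amplification factor (1 - h/2)/(1 + h/2) has modulus <= 1 for every
    step size h, so the scheme is stable on arbitrarily coarse tau-grids."""
    S = np.asarray(S, float)
    Z = np.empty(len(S))
    Z[0] = Z0
    for n in range(len(S) - 1):
        h = dtau[n]
        Z[n + 1] = ((1.0 - 0.5 * h) * Z[n]
                    + 0.5 * h * (S[n] + S[n + 1])) / (1.0 + 0.5 * h)
    return Z

# huge optical-depth steps: the iterates stay bounded and approach S
S = np.full(6, 2.0)
Z = trapezoidal_tau(10.0, S, np.full(5, 50.0))
```

The iterates remain bounded for any step size, although, being A-stable rather than L-stable, the scheme damps large steps only slowly.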

For the sake of completeness, the analytic solution to Equation (25) is given by

$Z(\tau )=Z({\tau }_{0}){e}^{-(\tau -{\tau }_{0})}+{\int }_{{\tau }_{0}}^{\tau }{e}^{-(\tau -t)}S(t)\,{dt}.$

Therefore,

$I(s)=I({s}_{0}){e}^{-(g(s)-g({s}_{0}))}+{\int }_{{s}_{0}}^{s}{e}^{-(g(s)-g(t))}\epsilon (t)\,{dt},$
which can be approximated in a stable manner by replacing g and the integral on the right-hand side with numerical approximations. In this case, monotonicity of g guarantees L-stability.

Footnotes

  • Stimulated emission (which enters kL) is capable of producing an inversion of populations between two atomic levels. This could lead to a negative total absorption coefficient that yields an amplification of the radiation during propagation. This phenomenon, which is at the basis of devices such as lasers and masers, is completely negligible in solar applications and is not considered in this work.

  • This assumption is required by the Picard–Lindelöf Theorem to ensure that the IVP (17) has a solution, and that this solution is unique.

  • In the literature, g is usually defined as the solution of $g^{\prime} (s)=-{\eta }_{I}(s)$. However, the negative sign induces an unnecessary and possibly confusing change in the integration direction.

  • The subscript "${\rm{f}}$" stands for "final."

  • 8 If ${\eta }_{I}$ is known at the grid points ${\{{s}_{i}\}}_{i=1}^{N}$, it is natural to employ the piecewise exponential model by adapting the parameter α to every interval $[{s}_{i},{s}_{i+1}]$.

  • 9 This fraction decreases if Heun's method is replaced by a higher-order explicit Runge–Kutta scheme, because the latter inevitably requires the computation of more stages.

  • 10 The values of ${\bf{K}}$ and ${\boldsymbol{\epsilon }}$ have been computed with the RH code of Uitenbroek (2001).

  • 11 The singular values of a matrix ${\bf{A}}$ are the square roots of the eigenvalues of the matrix ${{\bf{A}}}^{\top }{\bf{A}}$.
