A MULTIRATE VARIABLE-TIMESTEP ALGORITHM FOR N-BODY SOLAR SYSTEM SIMULATIONS WITH COLLISIONS

P. W. Sharp and W. I. Newman

Published 2016 February 18. © 2016. The American Astronomical Society. All rights reserved.
The Astronomical Journal 151, 64. DOI: 10.3847/0004-6256/151/3/64

ABSTRACT

We present and analyze the performance of a new algorithm for performing accurate simulations of the solar system when collisions between massive bodies and test particles are permitted. The orbital motion of all bodies at all times is integrated using a high-order variable-timestep explicit Runge–Kutta Nyström (ERKN) method. The variation in the timestep ensures that the orbital motion of test particles on eccentric orbits or close to the Sun is calculated accurately. The test particles are divided into groups and each group is integrated using a different sequence of timesteps, giving a multirate algorithm. The ERKN method uses a high-order continuous approximation to the position and velocity when checking for collisions across a step. We give a summary of the extensive testing of our algorithm. In our largest simulation—that of the Sun, the planets Earth to Neptune and 100,000 test particles over 100 million years—the relative error in the energy after 100 million years was of the order of ${10}^{-11}$.


1. INTRODUCTION

State-of-the-art algorithms for simulations of the solar system when collisions between test particles and massive bodies are permitted either use fixed timesteps or a combination of fixed and varying timesteps. The algorithm in Grazier et al. (2005) uses a small fixed timestep during close encounters and a larger fixed timestep otherwise. The algorithms in the integrators Mercury (Chambers 1999), Swifter (Kaufmann 2005), SyMBA (Duncan et al. 1998; Levison & Duncan 2000; Levison et al. 2011), QYMSYM (Moore & Quillen 2011), and GENGA (Grimm & Stadel 2014) use a varying timestep or an equivalent during close encounters and a fixed timestep otherwise.

We present and test an algorithm that uses a high-order variable-timestep explicit Runge–Kutta Nyström (ERKN) pair for the integration of all bodies at all times. Our algorithm is intended for accurate simulations near limiting precision.

Using the same integration formula for all bodies at all times makes the implementation of the algorithm noticeably simpler than that of existing algorithms. No heuristic is needed for switching to and from a scheme for handling close encounters. There are fewer fine-tuning constants whose values must be fixed, and fewer input values the user must specify. These features make it easier to perform a thorough investigation of the algorithm's performance.

Of greater importance is the use of a variable timestep. When the timestep is fixed, the accumulated error in the position and velocity of a test particle can increase rapidly with time once the eccentricity exceeds a threshold. In addition, a fixed-timestep method cannot accurately integrate the orbit of a test particle close to the Sun unless the timestep is very small.

To illustrate the difficulty with eccentric orbits, we performed 200,000 simulations, each of the Sun, the Jovian planets, and a different test particle. One half of the test particles had an initial eccentricity of 0.5; the other half, 0.8. Each simulation was performed three times: once with the variable-timestep order 12 ERKN pair of Dormand et al. (1987), once with the order 12 formulae from the pair used as a fixed-timestep method, and once with the fixed-timestep order 13 Störmer method of Grazier et al. (2005). The fixed-timestep ERKN method used a timestep that was the average of the timesteps when the same particle was integrated using the ERKN pair. As recommended in Grazier et al. (2005), we used a timestep of four days for the Störmer method.

Figure 1 contains smoothed graphs of the ${L}_{2}$ norm $\parallel E{\parallel }_{2}$ of the error in the position of the test particles after 30,000 days as a function of the initial semimajor axis a. The ${L}_{2}$ norm of the n-vector ${\boldsymbol{w}}$ is defined as

$\parallel {\boldsymbol{w}}{\parallel }_{2}={\left(\sum _{i=1}^{n}{w}_{i}^{2}\right)}^{1/2}.$ (1)

The graphs were smoothed using the MATLAB function filter. The error for each particle was calculated as the difference between the numerical solution and an accurate reference solution obtained from an integration in quadruple precision. All particles within an activity sphere of a planet at the end of one or more integration steps were excluded from the graphs.


Figure 1. L2 norm of the error in the position of the test particles after 30,000 days as a function of the initial a for initial eccentricities of e = 0.5 (left) and e = 0.8 (right). E12VS—the order 12 variable-timestep ERKN pair, E12FS—the order 12 fixed-timestep ERKN method, S13FS—the order 13 fixed-timestep Störmer method.


We observe from Figure 1 that $\parallel E{\parallel }_{2}$ for the variable-timestep ERKN pair E12VS varies little with a, and has similar values for e = 0.5 and e = 0.8. The error also varies little with a for the fixed-timestep Störmer method S13FS when e = 0.5. For e = 0.8, the graph of $\parallel E{\parallel }_{2}$ has a distinct elbow at $a\approx 8$. For larger a, $\parallel E{\parallel }_{2}$ varies little with a; for smaller a, $\parallel E{\parallel }_{2}$ increases rapidly as a decreases. E12FS fares worse than S13FS and has an elbow for e = 0.5 and 0.8.

We begin in Section 2 with the definition of the equations of motion, ERKN pairs, and their continuous approximations. We also give a summary of the scheme for the selection of the timestep for ERKN pairs. In Section 3, we present our multirate algorithm and follow this in Section 4 with a discussion of suitable input values to the algorithm. We give a summary of extensive numerical testing of our algorithm in Section 5 and end in Section 6 with a discussion of our results.

2. BACKGROUND

Let ${{\boldsymbol{R}}}_{i}(t),$ $i=1,\ldots ,\;{N}_{p}$ and ${{\boldsymbol{r}}}_{j}(t)$, $j=1,\ldots ,\;{N}_{t}$, be the position of the ith massive body and jth test particle, respectively, at time t, where ${N}_{p}$ is the number of massive bodies and ${N}_{t}$ is the number of test particles. The equations of motion for all ${N}_{p}+{N}_{t}$ bodies can be written as

$\ddot{{\boldsymbol{R}}}_{i}=\sum _{l=1,l\ne i}^{{N}_{p}}{\mu }_{l}\frac{{{\boldsymbol{R}}}_{l}-{{\boldsymbol{R}}}_{i}}{\parallel {{\boldsymbol{R}}}_{l}-{{\boldsymbol{R}}}_{i}{\parallel }^{3}},$ (2)

$\ddot{{\boldsymbol{r}}}_{j}=\sum _{l=1}^{{N}_{p}}{\mu }_{l}\frac{{{\boldsymbol{R}}}_{l}-{{\boldsymbol{r}}}_{j}}{\parallel {{\boldsymbol{R}}}_{l}-{{\boldsymbol{r}}}_{j}{\parallel }^{3}},$ (3)

where the dot operator denotes differentiation with respect to t and ${\mu }_{l}$ is G times the mass of the lth massive body.
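As a concrete illustration, the following minimal sketch evaluates the right-hand sides of Equations (2) and (3) with NumPy. The function name and array layout are ours, not the paper's; note that the test particles are massless and so do not appear in Equation (2).

import numpy as np

def accelerations(R, r, mu):
    """Right-hand sides of Equations (2) and (3).
    R  : (Np, 3) array of massive-body positions
    r  : (Nt, 3) array of test-particle positions
    mu : (Np,)   array, G times each massive-body mass."""
    Np_ = len(R)
    R_ddot = np.zeros_like(R)
    for i in range(Np_):                     # Equation (2)
        for l in range(Np_):
            if l != i:
                d = R[l] - R[i]
                R_ddot[i] += mu[l] * d / np.linalg.norm(d) ** 3
    r_ddot = np.zeros_like(r)
    for j in range(len(r)):                  # Equation (3)
        for l in range(Np_):
            d = R[l] - r[j]
            r_ddot[j] += mu[l] * d / np.linalg.norm(d) ** 3
    return R_ddot, r_ddot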

Equations (2) and (3) combined with initial conditions form the initial value problem

$\ddot{{\boldsymbol{y}}}={\boldsymbol{f}}(t,{\boldsymbol{y}}),\qquad {\boldsymbol{y}}({t}_{0})={{\boldsymbol{y}}}_{0},\qquad \dot{{\boldsymbol{y}}}({t}_{0})={\dot{{\boldsymbol{y}}}}_{0},$ (4)

where ${\boldsymbol{y}}$ collects the positions of all ${N}_{p}+{N}_{t}$ bodies.

An explicit ERKN pair to integrate Equation (4) calculates the approximations ${{\boldsymbol{y}}}_{i}$, ${\dot{{\boldsymbol{y}}}}_{i}$, ${\widehat{{\boldsymbol{y}}}}_{i}$, and ${\dot{\widehat{{\boldsymbol{y}}}}}_{i}$ on each integration step. The values ${{\boldsymbol{y}}}_{i}$ and ${\dot{{\boldsymbol{y}}}}_{i}$ are order p approximations to ${\boldsymbol{y}}({t}_{i})$ and $\dot{{\boldsymbol{y}}}({t}_{i})$, respectively, and ${\widehat{{\boldsymbol{y}}}}_{i}$ and ${\dot{\widehat{{\boldsymbol{y}}}}}_{i}$ are order $q\lt p$ approximations to ${\boldsymbol{y}}({t}_{i})$ and $\dot{{\boldsymbol{y}}}({t}_{i})$, respectively. The four approximations are calculated from the update formulae

${{\boldsymbol{y}}}_{l}={{\boldsymbol{y}}}_{l-1}+h{\dot{{\boldsymbol{y}}}}_{l-1}+{h}^{2}\sum _{j=1}^{s}{b}_{j}{{\boldsymbol{f}}}_{j},\qquad {\dot{{\boldsymbol{y}}}}_{l}={\dot{{\boldsymbol{y}}}}_{l-1}+h\sum _{j=1}^{s}{b}_{j}^{\prime }{{\boldsymbol{f}}}_{j},$ (5)

${\widehat{{\boldsymbol{y}}}}_{l}={{\boldsymbol{y}}}_{l-1}+h{\dot{{\boldsymbol{y}}}}_{l-1}+{h}^{2}\sum _{j=1}^{s}{\widehat{b}}_{j}{{\boldsymbol{f}}}_{j},\qquad {\dot{\widehat{{\boldsymbol{y}}}}}_{l}={\dot{{\boldsymbol{y}}}}_{l-1}+h\sum _{j=1}^{s}{\widehat{b}}_{j}^{\prime }{{\boldsymbol{f}}}_{j},$ (6)

where $h={t}_{l}-{t}_{l-1}$ is the timestep, ${{\boldsymbol{f}}}_{1}={\boldsymbol{f}}({t}_{l-1},{{\boldsymbol{y}}}_{l-1})$ and

${{\boldsymbol{f}}}_{j}={\boldsymbol{f}}\left({t}_{l-1}+{c}_{j}h,\;{{\boldsymbol{y}}}_{l-1}+{c}_{j}h{\dot{{\boldsymbol{y}}}}_{l-1}+{h}^{2}\sum _{k=1}^{j-1}{a}_{{jk}}{{\boldsymbol{f}}}_{k}\right),\qquad j=2,\ldots ,s.$

The integers s, p, q and the coefficients a, b, ${b}^{\prime }$, $\widehat{b}$, $\widehat{{b}^{\prime }}$, c are chosen so the formulae in the pair have the required order and the pair has desirable properties. We refer to p as the order of the pair.
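For orientation, here is a minimal sketch of one step of a generic explicit RKN pair in the form of Equations (5) and (6), assuming flat NumPy state vectors and a strictly lower-triangular stage matrix A. It is a generic illustration, not the DEP12 implementation.

import numpy as np

def erkn_step(f, t, y, yp, h, A, b, bp, bhat, bphat, c):
    """One step of an explicit RKN pair for y'' = f(t, y).
    Returns the order p and order q approximations at t + h."""
    s = len(c)
    F = np.empty((s, y.size))
    for j in range(s):
        yj = y + c[j] * h * yp + h * h * (A[j, :j] @ F[:j])
        F[j] = f(t + c[j] * h, yj)
    y_hi = y + h * yp + h * h * (b @ F)        # order p, Equation (5)
    yp_hi = yp + h * (bp @ F)
    y_lo = y + h * yp + h * h * (bhat @ F)     # order q, Equation (6)
    yp_lo = yp + h * (bphat @ F)
    return y_hi, yp_hi, y_lo, yp_lo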

On each step with an ERKN pair, the timestep h is chosen so that the norm of the estimated local error satisfies the local error test. The local error in ${\boldsymbol{y}}({t}_{l})$ and $\dot{{\boldsymbol{y}}}({t}_{l})$ is estimated as ${\widehat{{\boldsymbol{y}}}}_{l}-{{\boldsymbol{y}}}_{l}$ and ${\dot{\widehat{{\boldsymbol{y}}}}}_{l}-{\dot{{\boldsymbol{y}}}}_{l}$ respectively. A step is accepted if the local error test ${\rm{le}}\leqslant {\rm{TOL}}$ is satisfied where ${\rm{le}}=\mathrm{max}\{\parallel {\widehat{{\boldsymbol{y}}}}_{l}-{{\boldsymbol{y}}}_{l}{\parallel }_{\infty },\parallel {\dot{\widehat{{\boldsymbol{y}}}}}_{l}-{\dot{{\boldsymbol{y}}}}_{l}{\parallel }_{\infty }\}$, and ${\rm{TOL}}\gt 0$ is the local error tolerance supplied by the user. The norm $\parallel \cdot {\parallel }_{\infty }$ is the ${L}_{\infty }$ norm and is defined for the n-vector ${\boldsymbol{w}}$ as

$\parallel {\boldsymbol{w}}{\parallel }_{\infty }=\mathop{\max }\limits_{1\leqslant i\leqslant n}| {w}_{i}| .$ (7)

Other norms, such as the ${L}_{2}$ norm defined in Equation (1), can be employed. We use the ${L}_{\infty }$ norm because it is less sensitive to decreases in the number of components of ${\boldsymbol{y}}$ and $\dot{{\boldsymbol{y}}}$ when test particles are removed from a simulation.

During an integration, the timestep ${h}_{{\rm{new}}}$ for a new step is calculated from the timestep h for the step just taken using the formula

${h}_{{\rm{new}}}=\beta h{\left(\frac{{\rm{TOL}}}{{\rm{le}}}\right)}^{1/(q+1)},$ (8)

where $0\lt \beta \lt 1$. The constant β acts as a safety factor that ensures most steps are accepted. We used $\beta =0.9$ in all of our testing; other values could have been used. Equation (8) is employed on all attempted steps except the first, for which the user supplies the timestep since no local error estimate is yet available. On the last step of an interval, ${h}_{{\rm{new}}}$ is adjusted so that the integrator does not step past the right end point of the interval.
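The following sketch shows how the local error test and Equation (8) combine into a step controller. The function names are ours, and the exponent $1/(q+1)$ reflects that the error estimate is for the order q formulae.

import numpy as np

def advance(step, t, y, yp, h, tol, q, beta=0.9):
    """Attempt one step; accept or reject it, and propose the next
    timestep via Equation (8). `step` returns the four
    approximations of Equations (5) and (6)."""
    y_hi, yp_hi, y_lo, yp_lo = step(t, y, yp, h)
    le = max(np.max(np.abs(y_lo - y_hi)), np.max(np.abs(yp_lo - yp_hi)))
    h_new = beta * h * (tol / le) ** (1.0 / (q + 1))
    if le <= tol:
        return t + h, y_hi, yp_hi, h_new, True   # step accepted
    return t, y, yp, h_new, False                # retry with h_new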

Our algorithm requires the continuous approximations ${{\boldsymbol{y}}}_{l}(t)$ and ${\dot{{\boldsymbol{y}}}}_{l}(t)$ to ${\boldsymbol{y}}(t)$ and $\dot{{\boldsymbol{y}}}(t)$, respectively, for $t\in [{t}_{l-1},{t}_{l}]$. These approximations, commonly called interpolants, are of order ${p}^{\star }\leqslant p$ and defined as

${{\boldsymbol{y}}}_{l}(t)={{\boldsymbol{y}}}_{l-1}+\tau h{\dot{{\boldsymbol{y}}}}_{l-1}+{h}^{2}\sum _{j=1}^{{s}^{\star }}{\beta }_{j}(\tau ){{\boldsymbol{f}}}_{j},\qquad {\dot{{\boldsymbol{y}}}}_{l}(t)={\dot{{\boldsymbol{y}}}}_{l-1}+h\sum _{j=1}^{{s}^{\star }}{\gamma }_{j}(\tau ){{\boldsymbol{f}}}_{j},$ (9)

where $\tau =(t-{t}_{l-1})/h$, ${s}^{\star }\geqslant s$, and ${\gamma }_{j}(\tau )$ are not the derivatives of ${\beta }_{j}(\tau )$. As the notation implies, ${{\boldsymbol{f}}}_{j}$, j = 1,...,s, are the same as the corresponding ${{\boldsymbol{f}}}_{j}$ in Equations (5) and (6). If ${s}^{\star }\gt s$, the values ${{\boldsymbol{f}}}_{j}$, $j=s+1,\ldots ,{s}^{\star }$, must be calculated. Since the ${{\boldsymbol{f}}}_{j}$ are independent of τ, these extra values of ${{\boldsymbol{f}}}_{j}$ must be found just once per step. The interpolant can then be evaluated for any number of τ without calculating more ${\boldsymbol{f}}$ values.
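A minimal sketch of evaluating the interpolants of Equation (9), assuming the coefficient polynomials ${\beta }_{j}(\tau )$ and ${\gamma }_{j}(\tau )$ are supplied as callables; all names are ours.

def interpolate(tau, h, y0, yp0, F, beta_tau, gamma_tau):
    """Evaluate the interpolants of Equation (9) at tau in [0, 1].
    beta_tau(tau) and gamma_tau(tau) return the s*-vectors of
    coefficient-polynomial values; F holds the s* stage values f_j
    as rows, and y0, yp0 are y_{l-1}, ydot_{l-1} as flat arrays."""
    y = y0 + tau * h * yp0 + h * h * (beta_tau(tau) @ F)
    yp = yp0 + h * (gamma_tau(tau) @ F)
    return y, yp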

Many ERKN pairs have been published. Dormand et al. (1987) presented a new order 12 pair (p = 12). They tested the pair and other high-order pairs on Kepler's two-body problem with orbital eccentricities in the range of 0.1–0.9. Dormand et al. (1987) concluded that their new pair, which we denote by DEP12, was the most efficient pair near limiting precision. Sharp et al. (2013) presented new order 10 (p = 10) and 12 pairs. They tested the new pairs and DEP12 on four N-body models from solar system dynamics, as well as Kepler's two-body problem with orbital eccentricities of 0.1, 0.5, and 0.9, and the PLEI N-body problem of Hairer et al. (1987). Sharp et al. (2013) concluded that their order 12 pair was on average 10% more efficient than DEP12 and that their order 10 pair could be more efficient than their order 12 pair on problems with rapid changes in the solution. Despite the greater efficiency of these new pairs, we used DEP12 because interpolants are not available for the new pairs.

The pair DEP12 has over 250 coefficients. Some of these coefficients require more than 100 digits to represent, and the complete set of coefficients spreads over at least five pages. The coefficients are available online as part of the integrator rknint (Brankin et al. 1989); see Algorithm 670 at http://netlib.org/toms/.

3. ALGORITHM

In this section, we first give an overview of our algorithm and then discuss the details.

3.1. Overview

We begin a simulation by dividing the ${N}_{t}$ test particles into ${N}_{g}$ groups of equal size. If ${N}_{g}$ does not divide ${N}_{t}$ exactly, the first ${N}_{g}-1$ groups are made as large as possible and of equal size; the ${N}_{g}$th group contains the remaining test particles. The selection of ${N}_{g}$ is discussed later.
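A minimal sketch of this partition, assuming the group size is the ceiling of ${N}_{t}/{N}_{g}$ (our reading of "as large as possible and of equal size"; degenerate cases where ${N}_{g}$ is close to ${N}_{t}$ may yield fewer groups).

def make_groups(Nt, Ng):
    """Split test-particle indices 0..Nt-1 into groups: the first
    Ng-1 groups are of equal, maximal size and the last group holds
    the remaining particles."""
    size = -(-Nt // Ng)   # ceiling division
    return [list(range(i, min(i + size, Nt))) for i in range(0, Nt, size)]

# make_groups(10, 3) -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]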

Next we attempt to integrate the massive bodies and the test particles in the first group from t = 0 to $t={\rm{\Delta }}t$ where ${\rm{\Delta }}t$ is specified by the user and is far greater than the typical timestep. The timestep h on each step of the integration is chosen so the local error test is satisfied. The timestep for the final step on the interval is further constrained to ensure $t={\rm{\Delta }}t$ at the end of the final step for the interval. After each step is completed, we remove those test particles in the group that have been ejected from the solar system or have collided with a massive body. The attempt to integrate the group to $t={\rm{\Delta }}t$ is stopped if all of the test particles in the group have been removed or an error condition has been raised in the integrator.

Once the first group has been processed, we attempt to integrate the second group from t = 0 to $t={\rm{\Delta }}t$. This includes a re-integration of the massive bodies since the orbits of the test particles depend on the position of the massive bodies. As with the first group, after each integration step, we remove test particles in the group that have been ejected from the solar system or have collided with a massive body. The remaining groups are treated in a similar way. After the last group has been integrated to $t={\rm{\Delta }}t$, we integrate all groups with at least one test particle one group at a time from $t={\rm{\Delta }}t$ to $t=2{\rm{\Delta }}t$. The simulation continues until either all test particles have been removed, the final value ${t}_{f}$ of t is reached, or an error has occurred.
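The outer loop of the algorithm can be sketched as follows. Here integrate_block and the group attribute n_particles are hypothetical names standing in for the variable-timestep ERKN integration of the massive bodies and one group over one interval, including the ejection and collision tests after each step.

def run(groups, dt_block, t_final, integrate_block):
    """Multirate driver (sketch). Each group carries its own copy of
    the massive bodies and generates its own timestep sequence."""
    t = 0.0
    while t < t_final and any(g.n_particles > 0 for g in groups):
        for g in groups:
            if g.n_particles > 0:
                # integrate the massive bodies and this group's test
                # particles over [t, t + dt_block], removing ejected
                # and collided particles after each ERKN step
                integrate_block(g, t, t + dt_block)
        t += dt_block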

Each of the ${N}_{g}$ groups of test particles is integrated independently of the other groups. This means each group will have its own sequence of timesteps, making our algorithm multirate. The different sequences of timesteps will lead to a divergence of the position of the same massive body in different groups. During a simulation we calculate the difference between the position of the massive bodies for the ith group, $i\gt 1$, and the first group, and use linear least squares to fit the power law $\alpha {t}^{\beta }$ to the norm of the difference. If the power law indicates that the difference will grow large enough to invalidate the results of the simulation, we stop the simulation.

Even though different timestep sequences are used for different groups, the timestep for all steps and all groups is chosen so that the norm of the local error estimate is bounded above by the same fixed value (the local error tolerance). Hence, the divergence of the position for non-chaotic motion is slow. For the simulations in Section 5 below, a representative power law for the norm of the difference in a planet's position is ${10}^{-18.4}{t}^{1.67}$, where t is in days and the difference is in au. This norm after 10 million and 100 million years is 0.0037 au and 0.17 au, respectively, small compared to the typical semimajor axis of test particles.
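The power-law fit is a linear least-squares problem in log-log space; a minimal sketch (function name ours) is

import numpy as np

def fit_power_law(t, d):
    """Fit d ~ alpha * t**beta by linear least squares on
    log10(d) = log10(alpha) + beta * log10(t); t in days,
    d = norm of the position difference in au."""
    X = np.column_stack([np.ones_like(t), np.log10(t)])
    coef, *_ = np.linalg.lstsq(X, np.log10(d), rcond=None)
    return 10.0 ** coef[0], coef[1]   # alpha, beta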

3.2. Details

We use up to three conditions that a test particle must satisfy before it is regarded as ejected from the solar system: the heliocentric distance of the test particle is at least ${R}_{{\rm{ej}}}$, the test particle is unbound from the solar system, and it is on an outward-bound trajectory.
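A sketch of these three tests follows. Testing unboundedness via the sign of the two-body specific orbital energy and outward motion via the sign of the radial velocity is our reading of the conditions, not a statement of the paper's exact implementation.

import numpy as np

def is_ejected(r, v, mu_sun, R_ej):
    """Ejection test (sketch). r, v: heliocentric position (au) and
    velocity of the test particle; mu_sun: G times the central mass;
    R_ej: ejection radius (50 au in the Section 5 simulations)."""
    dist = np.linalg.norm(r)
    energy = 0.5 * np.dot(v, v) - mu_sun / dist   # specific orbital energy
    outward = np.dot(r, v) > 0.0                  # positive radial velocity
    return dist >= R_ej and energy >= 0.0 and outward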

After establishing that a test particle has not been ejected, we check if the test particle collided with the ith massive body, $i=1,\ldots ,{N}_{p}$, using the steps described below. In the description, ${d}_{{ij}}^{2}(t)$ is the square of the distance between the ith massive body and the jth particle at time t, and ${R}_{c,i}^{2}$ is the square of the collision-sphere radius for the ith massive body. We use ${d}_{{ij}}^{2}$ and not ${d}_{{ij}}$ to reduce the number of square roots. There are at least two choices for the collision radius: the physical radius or the radius of the gravitational cross-section of Safronov & Zvjagina (1969).

We first check if the test particle is inside the collision sphere for the ith massive body at the end of the step, i.e., if ${d}_{{ij}}^{2}({t}_{l})\leqslant {R}_{c,i}^{2}$. If it is, we remove the particle and go to the next particle. If the inequality is not satisfied, we check if the particle would have hit the ith massive body during the step. To do this, we first check if the inequalities

${\left.\frac{d[{d}_{{ij}}^{2}(t)]}{{dt}}\right|}_{t={t}_{l-1}}\lt 0,$ (10)

${\left.\frac{d[{d}_{{ij}}^{2}(t)]}{{dt}}\right|}_{t={t}_{l}}\gt 0$ (11)

are both satisfied. If they are not, the distance between the test particle and the massive body did not have a local minimum on the integration step, and we move on to the next massive body or test particle.

If the inequalities, Equations (10) and (11), are satisfied, we check the inequality

Equation (12)

The left side of Equation (12) is a lower bound on the square of the distance between the particle and the ith massive body when the distance has a local minimum on the step.

If Equation (12) is not satisfied, the simulation moves on to the next massive body or test particle. If Equation (12) is satisfied, we find the minimum distance between the massive body and the test particle on the step by solving

$\frac{d[{d}_{{ij}}^{2}(t)]}{{dt}}=0$ (13)

using an iterative method. The test particle is removed if the square of the minimum distance is no larger than ${R}_{c,i}^{2}$. To evaluate $d[{d}_{{ij}}^{2}(t)]/{dt}$, we use the interpolants Equation (9) to approximate the components of the position and velocity of the massive body and then form $d[{d}_{{ij}}^{2}(t)]/{dt}$. If ${s}^{\star }\gt s$, which it is for our algorithm, we calculate ${{\boldsymbol{f}}}_{j}$, $j=s+1,\ldots ,{s}^{\star }$, at the start of the iterations for the current test particle.

We considered many possible iterative schemes and chose Krogh's zero. This method starts with a bracket on the root sought, and then uses linear interpolation for the first iteration and quadratic interpolation for subsequent iterations. The method employs heuristics that reduce the number of function evaluations. Fortran and C implementations of zero are available at http://netlib.org/math/index.html as part of the math77 library. A user guide for zero is available at http://netlib.org/math/docpdf/ch08-01.pdf. Page three of the user guide lists the total number of function evaluations used by zero and 12 other methods on the 15 test problems of Alefeld et al. (1995). The best of the 12 other methods uses 26%–28% more function evaluations than zero for convergence tolerances of ${10}^{-7}$, ${10}^{-10}$, ${10}^{-15}$, and 0 ("find as accurate a solution as possible"). We used zero with a convergence tolerance of zero in our algorithm.

If zero converges, we remove the test particle if it collided with the massive body; in either case we then move on to the next test particle. The routine zero has never failed to converge in our test runs. Nevertheless, failure is possible, and if it occurs, we print an error message along with the positions of the test particle and massive body, and go to the next test particle.
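The logic of this subsection can be summarized in the following sketch for one (massive body, test particle) pair. We use scipy.optimize.brentq as a stand-in for Krogh's zero, the interpolant evaluators pos_of and vel_of (relative position and velocity at time t, via Equation (9)) are hypothetical, and the lower-bound prefilter of Equation (12) is omitted.

import numpy as np
from scipy.optimize import brentq   # stand-in for Krogh's zero

def check_collision(pos_of, vel_of, t0, t1, Rc2):
    """Collision test across one step [t0, t1]; Rc2 is the squared
    collision-sphere radius. Returns True on a collision."""
    def ddist2(t):                      # d[d_ij^2]/dt = 2 (r . v)
        return 2.0 * np.dot(pos_of(t), vel_of(t))

    d_end = pos_of(t1)
    if np.dot(d_end, d_end) <= Rc2:     # inside the sphere at t_l
        return True
    if not (ddist2(t0) < 0.0 < ddist2(t1)):
        return False                    # no interior local minimum,
                                        # Equations (10) and (11)
    t_min = brentq(ddist2, t0, t1)      # solve Equation (13)
    d_min = pos_of(t_min)
    return np.dot(d_min, d_min) <= Rc2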

In addition to performing the integrations and checks for ejection and collisions, our algorithm does bookkeeping. This includes maintaining a count of the number of test particles left in the simulation and in each group at time t, and updating arrays that hold information about the test particles. We also record information about the state of the integration for each group.

4. INPUT VALUES

The user-specified inputs to our algorithm are ${p}^{\star }$, ${\rm{\Delta }}t$, ${h}_{0}$, ${\rm{TOL}}$, and ${N}_{g}$. We now discuss suitable values for the first four inputs. The selection of a suitable value for ${N}_{g}$ is discussed in the next section.

An interpolant of order ${p}^{\star }$ introduces an error of $O({h}^{{p}^{\star }+1})$, which we call the interpolation error. We want this error to be insignificant compared to the error already present in the numerical solution at the end of the step. The standard values for ${p}^{\star }$ when solving general initial value ordinary differential equations are $p-1$ or p; see, for example, Baker et al. (1996). For either value of ${p}^{\star }$, the accumulated error in ${{\boldsymbol{y}}}_{l}$ and ${\dot{{\boldsymbol{y}}}}_{l}$ after many timesteps will be significantly larger than the interpolation error. Near the start of the integration, however, the accumulated error has not had time to grow significantly with t, and if ${p}^{\star }=p-1$, the interpolation error would then be significant. We therefore use ${p}^{\star }=p$. This choice means more CPU time is required to form and evaluate the interpolant than for ${p}^{\star }=p-1$. This extra work is a small fraction of the total work for a simulation because, as we demonstrate in the next section, the interpolant is needed on only a small percentage of the timesteps. The coefficients of the interpolant with ${p}^{\star }=p$ are available online in the file http://www.math.auckland.ac.nz/~sharp/rkncoeff.dat. This is the file referred to in Baker et al. (1996).

There is a wide range of acceptable values for ${\rm{\Delta }}t$. The only restriction is that it must be sufficiently large so that forcing the integrator to step exactly to ${\rm{\Delta }}t$, $2{\rm{\Delta }}t$, $3{\rm{\Delta }}t$, ..., has little effect on the integration. In our simulations, we typically use 1000 or 10,000 years for ${\rm{\Delta }}t$.

The scheme for selecting the timestep is very effective in adjusting the timestep to its optimal value in just a few steps. Hence the CPU time required to complete a long simulation is insensitive to the initial timestep ${h}_{0}$ and the same value can be used for all long simulations.

For the simulations we are interested in, there is a limited range of values for the local error tolerance ${\rm{TOL}}$. The tolerance must be chosen so that the integrations are being done at or near limiting precision. Too large a value would mean the truncation error dominated the round-off error and we were not getting as much accuracy as we could. Too small a value would mean that the round-off error was dominating the truncation error. We experimented with ${\rm{TOL}}={10}^{-i}$, $i=12,13,14,15$ and found that ${\rm{TOL}}={10}^{-13}$ was a good compromise.

The main source of round-off error in the propagated solution at limiting precision is the addition in the update formulae, Equations (5) and (6), that adds the h terms to ${{\boldsymbol{y}}}_{l-1}$ and ${\dot{{\boldsymbol{y}}}}_{l-1}$ to get the values at the end of a step. We used the standard technique of compensated summation (see, for example, Section 4.3 in Higham 2002) to reduce the round-off error and found no noticeable reduction in the total error on long simulations.
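For reference, a minimal sketch of compensated (Kahan) summation applied to one such update; the variable names are ours.

def kahan_update(y, c, increment):
    """One compensated-summation update of y by `increment`
    (Higham 2002, Section 4.3); c carries the running round-off
    compensation and must be reused across updates."""
    t = increment - c
    s = y + t
    c = (s - y) - t
    return s, c

# Sketch of its use in the position update of Equation (5):
# y, c_y = kahan_update(y, c_y, h*yp + h*h*sum_of_bj_fj)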

5. NUMERICAL TESTS

In this section, we present a summary of our extensive testing of our multirate algorithm. In all tests, the massive bodies were the Sun and the planets Earth through Neptune. The test particles were randomly distributed in the Jupiter–Saturn zone at the start of a simulation. The units of time and distance were days and astronomical units, respectively.

The aim of our first two simulations was to assess the performance of the scheme for varying the timestep. Both simulations had 250 test particles. The first simulation was over 100,000 years and the second over 100 million years.

During the simulation of 100,000 years, the timestep varied by four orders of magnitude from $3.13\times {10}^{-3}$ days (approximately five minutes) to 35.4 days and the average was 23.9 days. Five test particles collided with Jupiter and one with Saturn. The timestep had a local minimum at each collision and the global minimum was one of these minima. The results for the simulation of 100 million years agreed well with those for the simulation of 100,000 years. In particular, the timestep varied from $1.96\times {10}^{-3}$ to 36.1 days and the average was 26.1 days.

Our next set of simulations assessed the dependence of the CPU time on the number of groups ${N}_{g}$. For a given ${t}_{f}$, the value of ${N}_{g}$ that minimizes the CPU time is a compromise between the following two competing factors. Since the same timestep is used for all test particles in a group, increasing ${N}_{g}$ reduces the number of test particles integrated with a small timestep when one test particle undergoes a close encounter with a massive body. This reduces the CPU time for a simulation. This is countered, partly or wholly, by the increase in the number of times the massive bodies are integrated (they are integrated once for each group). To gain insight into the optimal value for ${N}_{g}$, we performed simulations of 1000 test particles over ${t}_{f}={10}^{i}$ years, $i=3,4,\ldots ,8$, for ${N}_{g}=1,2,5,10,20,50$. For each ${t}_{f}$, we recorded the CPU time for each ${N}_{g}$ and found the minimum ${t}_{{\rm{CPU,min}}}$ of these times. We then normalized the CPU times for the given ${t}_{f}$ by dividing by ${t}_{{\rm{CPU,min}}}$. For example, when ${t}_{f}={10}^{3}$ years, the CPU times were 41.3 s $({N}_{g}=1)$, 34.4 s (2), 30.3 s (5), 29.5 s (10), 30.3 s (20), and 34.2 s (50). We have ${t}_{{\rm{CPU,min}}}=29.5\;{\rm{s}}$ and the normalized times 1.40, 1.17, 1.03, 1.00, 1.03, and 1.16 (to two decimal places).

The normalized CPU times are listed in Table 1. An entry of "−" means we stopped the simulation before it reached ${t}_{f}$ because it was clear that the ratio would be too large to change our conclusions. We observe from Table 1 that for small ${t}_{f}$, ${N}_{g}=10$ is optimal, although ${N}_{g}=5$ or 20 are almost as good. As ${t}_{f}$ increases, the optimal ${N}_{g}$ decreases and is 1 when ${t}_{f}={10}^{8}$ years. The optimal ${N}_{g}$ decreased because many of the test particles had been ejected or removed and the integration of the massive bodies required a larger fraction of the CPU time. Decreasing ${N}_{g}$ decreases this fraction. For simulations of a large number of test particles, the optimal ${N}_{g}$ can be estimated from an initial integration of a small number of test particles, such as 1000 as we used.

Table 1.  Normalized CPU Times for the Simulations of the Sun, the Planets Earth through Neptune, and 1000 Test Particles Initially in the Jupiter–Saturn Zone

${t}_{f}$ (years)   ${N}_{g}=1$     2      5      10     20     50
${10}^{3}$          1.40     1.17   1.03   1.00   1.03   1.16
${10}^{4}$          1.38     1.10   1.02   1.00   1.03   1.17
${10}^{5}$          1.51     1.15   1.03   1.00   1.01   1.12
${10}^{6}$          1.44     1.11   1.00   1.03   1.04   1.24
${10}^{7}$          1.00     1.03   1.00   1.08   −      −
${10}^{8}$          1.00     1.13   1.34   1.82   −      −

Note. An entry of "−" means we stopped the simulation before completion because it was clear that the normalized time would be too large to change our conclusions.


Another aspect of using groups is deciding how to divide test particles into groups. One possibility is to use the initial orbits of the test particles to group those that are likely to have similar evolutionary histories. For example, test particles could be grouped according to their initial eccentricity and semimajor axis.

The last simulation we report on is of 100,000 test particles over 100 million years. We used 100 cores on the high-performance computer Pan at the University of Auckland's Centre for eResearch. Each core had 1000 test particles and a copy of the massive bodies. On each core, we used ${N}_{g}=10$. We know from Table 1 that ${N}_{g}=10$ is not optimal for ${t}_{f}={10}^{8}$ years; we used it because it provided a more thorough test of our algorithm than ${N}_{g}=1$ did. For the same reason, we used the physical cross-section of the massive bodies and not their gravitational cross-section, and required all three ejection conditions stated in Section 3.2, with ${R}_{{\rm{ej}}}=50\;{\rm{au}}$, to be satisfied before a test particle was ejected.

Across the 100 cores, the CPU time required to complete an integration varied from 2.58 to 3.42 days and had a mean of 3.05 days. The variation in CPU time was caused mostly by the differences in the orbital evolution and not by the timing uncertainty.

Figure 2 is a histogram of the signed relative error in the energy across the cores at the end of the simulation. We observe that the histogram is close to being symmetric. The mean of the signed relative error is $-5.8\times {10}^{-12}$. We used linear least squares to fit the power law $\alpha {t}^{\beta }$ to the error for each core. The mean of β across the 100 cores was 0.68. Hence our method does not satisfy the optimal power law ${t}^{1/2}$ derived by Brouwer (1937). Despite this, the mean of the end point energy error in our simulation is comparable to that of the Störmer method of Grazier et al. (2005), which does achieve the ${t}^{1/2}$ growth. We believe our method achieves comparable accuracy because the error on a single step is significantly smaller than that of the Störmer method.


Figure 2. Histogram of the signed relative error in the energy at $t={10}^{8}$ years for 100 simulations each of 1000 test particles. Each data point is the signed relative error for one core.


By the end of the simulation, 94,093 test particles had been ejected from the solar system and 3591 had collided with a massive body. Of these collisions, 99.3% were detected at the end of a step. This percentage, being close to 100%, suggests the detection of collisions across a step could be omitted. Doing so would save little CPU time because the mean number of iterations performed by zero on each core was just 811.

6. DISCUSSION

We presented a variable-timestep algorithm for long, accurate simulations of the solar system when collisions between test particles and massive bodies are permitted. Varying the timestep meant the orbits of test particles close to the Sun or on eccentric orbits were calculated accurately, thus avoiding two potential difficulties with fixed-timestep algorithms. This gives more assurance about the validity of conclusions drawn from the results of a simulation. Our test results showed that the error in the energy was very small.

Our algorithm has many applications. One such application, as we illustrated in the previous section, is an investigation of the collisions between the terrestrial planets and small bodies. Horner & Jones (2008, 2009) investigated how effective Jupiter is at preventing asteroids and centaurs from hitting Earth. To enhance the impact rate on Earth, Horner and Jones set Earth's radius at one million kilometers. In our simulations of the previous section, we used the realistic value of 6370 kilometers (the Safronov & Zvjagina 1969 radius is often significantly larger than this value) and found that just one test particle hit Earth over 100 million years. This is too small a number to support any sensible conclusion, and far more test particles, perhaps ${10}^{6}$, would be needed. This number is feasible with our algorithm.

Our algorithm can also be used to perform statistical tests on simulation results. For example, Grazier et al. (2014) performed simulations with 2000 test particles to investigate the delivery of volatiles to the outer asteroid belt. The simulations differed in the mass used for Jupiter and the zone in which the test particles started. Our algorithm is fast enough that multiple sets of 2000 test particles for each mass and zone could be simulated, and the information from these simulations could be used to assess the conclusions drawn from the original sample.

Although not needed for the simulations of Section 5, our method will accurately integrate close encounters between massive bodies because the scheme that selects the timestep does not distinguish between massive bodies and test particles. Our method does not permit collisions between massive bodies. We believe it can be extended using the interpolants and the solver zero to permit these collisions.

The authors thank the referee for comments and acknowledge Fred Krogh's help with the iterative method zero. W.I.N. acknowledges support from the UCLA Academic Senate for this collaboration. The authors acknowledge the contribution of the NeSI high-performance computing facilities and the staff at the Centre for eResearch at the University of Auckland. New Zealand's national facilities are provided by the New Zealand eScience Infrastructure (NeSI) and funded jointly by NeSI's collaborator institutions and through the Ministry of Business, Innovation and Employment's Infrastructure programme (http://www.nesi.org.nz). The authors also acknowledge the use of the computational servers in the Department of Mathematics at the University of Auckland.
