
REDUCING SYSTEMATIC ERROR IN WEAK LENSING CLUSTER SURVEYS*


Published 2014 April 21 © 2014. The American Astronomical Society. All rights reserved.
Citation: Yousuke Utsumi et al. 2014 ApJ 786 93. DOI: 10.1088/0004-637X/786/2/93


ABSTRACT

Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ-signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ∼3 deg2. Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg2, where we expect ∼2000 peaks based on our Subaru fields.


1. INTRODUCTION

Weak lensing is a fundamental tool of modern cosmology. A weak-lensing map provides a weighted "picture" of the projected surface mass density and thus a route to identifying clusters of galaxies selected by mass (e.g., Miyazaki et al. 2002a; Dahle et al. 2003; Schirmer et al. 2003; Hetterscheidt et al. 2005; Wittman et al. 2006; Gavazzi & Soucail 2007; Miyazaki et al. 2007; Shan et al. 2012). Subtle issues limit the application of weak-lensing maps as sources of cluster catalogs. A serious astrophysical limitation is the projection of large-scale structure along the line of sight (see e.g., White et al. 2002; Hamana et al. 2004, 2012). An observational limitation is the presence of systematic errors in the maps (Geller et al. 2010; Kurtz et al. 2012). Here we focus on a method of reducing these errors.

Wittman et al. (2001) and Miyazaki et al. (2002a) first identified rich clusters from weak-lensing maps. Since then larger and larger weak-lensing surveys with increasingly sophisticated reduction techniques have led to a steep increase in the number of clusters initially detected in weak-lensing maps (Hetterscheidt et al. 2005; Wittman et al. 2006; Schirmer et al. 2007; Gavazzi & Soucail 2007; Miyazaki et al. 2007; Maturi et al. 2007; Bergé et al. 2008; Dietrich et al. 2007; Bellagamba et al. 2011; Shan et al. 2012). Shan et al. (2012) used the Canada–France–Hawaii Telescope (CFHT) to carry out the largest area survey to date, covering 64 deg2. Among the 301 weak-lensing peaks with signal-to-noise ratio (S/N) greater than 3.5, only 126 have a corresponding brightest cluster galaxy within 3' of the weak-lensing peak. Shan et al. (2012) conclude that many of the weak-lensing peaks, even above this threshold, are probably noise.

In a detailed reanalysis of the 2.1 deg2 GTO field of Miyazaki et al. (2002a, 2007), Kurtz et al. (2012) show that the highest peak of the B-mode map (in this map the source ellipticities are rotated by 45° and there should be no signal) has S/N ∼ 4.25 and that there are four peaks in the B-mode map with S/N > 3.7. It is interesting that Shan et al. (2012) find a similar number of high-significance peaks per unit area in their B-mode maps. Analysis of the B-mode map prompted Kurtz et al. (2012) to set their threshold for weak-lensing cluster detection at 4.25. Above this threshold, 2/3 of the weak-lensing peaks correspond to individual massive clusters.

Here we investigate possible underlying sources of the high peaks in the B-mode maps. Our goal is the reduction of the amplitude and frequency of these peaks as a basis for construction of more robust catalogs of weak-lensing cluster detections. We apply our revised procedures to two fields observed with the Subaru Telescope: the GTO field of Miyazaki et al. (2002a, 2007) and a small portion of the DLS F2 field reobserved with Subaru.

In Section 2 we review the standard analysis of the Subaru imaging data. In Section 3 we review the construction of the galaxy catalog. In Section 4 we describe the basic weak-lensing analysis we use. In Section 5 we develop a revised procedure for reducing systematics in the B-mode map. We demonstrate that there is excess power in the B-mode map on small and large scales relative to noise maps. Two procedures are effective in suppressing systematics in the B-mode map: (1) warping the image onto absolute sky coordinates and (2) introducing a large-scale (10') cutoff in the smoothing kernel applied to construct the weak-lensing map. We define the weak-lensing peaks and investigate thresholds in Section 6. In Section 7 we note some differences in our analysis of the GTO and DLS Subaru observations. Section 8 provides an analytic expression for the probability of finding the highest peak in the noise maps as spurious peaks in a weak-lensing map. In Section 9 we test the lensing maps of our two fields by comparing the peaks with systems identified in dense redshift surveys. We demonstrate that a threshold S/N = 4.56 produces a robust catalog of peaks with few false positives.

Unless otherwise stated, we adopt the standard concordance cosmology (h = 0.72, Ωm = 0.27, ΩΛ = 0.73) and the AB magnitude system throughout this paper.

2. IMAGING DATA AND IMAGE REDUCTION

In this section we describe a revised reduction procedure that reduces systematic errors in weak-lensing maps. As examples of the revised procedure, we use two regions imaged with Suprime-Cam on the Subaru Telescope (Miyazaki et al. 2002b). The GTO field covers ∼2 deg2, centered on (16:04, +43:12), and was observed by Miyazaki et al. (2002a). We re-observed the western ∼1 deg2 portion, (9:16, +30:00), of the Deep Lens Survey F2 field (DLS F2; Wittman et al. 2002). The new observations were taken in 0.75'' seeing. The GTO field consists of nine Suprime-Cam pointings; the DLS observations consist of four Suprime-Cam pointings. Table 1 shows the numbering scheme for Subaru pointings in both the GTO and DLS fields.

Table 1. Observation Log

Name R.A. Decl. Exp. timea (s) Observation Date Filter Seeing (arcsec) Limit mag
GTO_0 16:04:44.5 +43:11:12 2100 2001 Apr 23, 25 RC 0.66 25.38
GTO_1 16:07:39.0 +43:12:19 2400 2001 Apr 23, 25 RC 0.77 25.28
GTO_2 16:07:39.0 +43:37:37 1800 2001 Apr 23 RC 0.72 25.44
GTO_3 16:04:44.6 +43:36:30 1800 2001 Apr 24 RC 0.71 25.24
GTO_4 16:01:46.8 +43:37:37 2250 2001 Apr 24, 25, May 18 RC 0.65 25.17
GTO_5 16:01:46.8 +43:12:19 2250 2001 Apr 24, 25 RC 0.70 24.90
GTO_6 16:01:46.8 +42:47:01 1800 2001 Apr 24, 25 RC 0.66 25.19
GTO_7 16:04:43.0 +42:47:01 1800 2001 Apr 25 RC 0.65 25.18
GTO_8 16:07:39.0 +42:47:01 1800 2001 Apr 25 RC 0.64 25.32
DLSF2_f0 09:16:30.9 +29:17:40 3600 2008 Jan 8 i' 0.79 25.50
DLSF2_f1 09:16:31.0 +29:39:40 3600 2008 Jan 8 i' 0.72 25.52
DLSF2_f2 09:16:31.0 +30:01:40 3600 2008 Jan 8 i' 0.72 25.33
DLSF2_f3 09:16:31.0 +30:23:41 3600 2008 Jan 8 i' 0.76 25.51

Notes. The GTO pointings fill a 3 × 3 square. They are numbered so that the central field is 0 and numbers go from 1 in the central left (east) field clockwise to 8 in the southeast corner. The DLS pointings fill a 1 × 4 rectangle: 0 is the southernmost pointing, and the numbers increase to the north. For the GTO field the numbering is the same as in Kurtz et al. (2012). Details of the limiting magnitude are given in Section 3. aThis column shows the exposure time after the frame rejection described in Section 2.


We retrieved data from the SMOKA data archive server (Baba et al. 2002). These data satisfy the conditions FILTER=W-C-RC and $\rm {PSF\_SIGMA}<0.9$ to ensure good image quality. $\rm {PSF\_SIGMA}$ is the typical FWHM (arcsec) of stellar images evaluated by SMOKA using SExtractor (Bertin & Arnouts 1996). If the total exposure exceeds 2000 s, we select the frames with the smallest PSF_SIGMA and reject frames with larger PSF_SIGMA even if PSF_SIGMA is ≲ 0.9''. As a result, we reject 6 of 11, 10 of 15, and 1 of 5 exposures for GTO_0, GTO_4, and GTO_5, respectively. Table 1 gives the total exposure times, observation dates, filters, typical seeing, and effective limiting magnitude for each subfield. Imaging observations for the DLS F2 field were made with Suprime-Cam on 2008 January 8 (Table 1). Each of the four subfields is a 3600 s exposure taken under stable seeing conditions. Table 1 lists the FWHM of the point-spread function (PSF) on the stacked image of each of the four subfields.

The basic image reduction follows Miyazaki et al. (2002a, 2007) and Kurtz et al. (2012). However, we make one major revision: we use independent astrometric data to register the images accurately with respect to the celestial coordinate while avoiding the introduction of any additional image warp.

The preliminary reduction of the raw image on each CCD is standard. We use SExtractor to identify objects on the overscan-subtracted, flat-fielded, sky-subtracted image. We identify stellar objects from their easily identifiable linear size–magnitude relation. We manually select the stellar object catalog using this relation.

After completing the preliminary reduction, there are three additional steps in obtaining the final mosaic of stacked images; one of the steps is new. First, we determine the accurate location of each CCD in instrument coordinates (Section 2.1). Second, we apply the image alignment warping procedure of Miyazaki et al. (2002a, 2007) (Section 2.2). Finally, we introduce a new additional warp "Registration onto the celestial coordinate" (Section 2.3).

2.1. Solving the Basic Mosaic Geometry

We parameterize the accurate location of each CCD in instrument coordinates by the displacement and the rotation of each CCD (Δx, Δy, Δϕ)c, the telescope pointing offset between dithered exposures (ΔX, ΔY, ΔΦ)e, and the optical distortion of the wide-field corrector lens. The distortion parameters are well modeled by a fourth-order polynomial function:

Equation (1)

where R and r are distances from the optical axis in units of pixels on the CCD. R is the original instrument coordinate, and r is the distortion-free coordinate.
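
As an illustration only, the sketch below fits a radial distortion of this kind by least squares; the specific fourth-order form R = r + b r² + c r³ + d r⁴, the function names, and the coefficient values are assumptions for the example, not the values used in the paper.

```python
import numpy as np

def radial_distortion(r, b, c, d):
    """Observed radius R (pixels) as a function of distortion-free radius r.

    Assumed fourth-order form; the linear term is taken to be absorbed into the
    per-chip rotation/scale matrix described in Section 2.1.
    """
    return r + b * r**2 + c * r**3 + d * r**4

def fit_distortion(r_true, r_obs):
    """Least-squares estimate of (b, c, d) from matched star radii."""
    A = np.vstack([r_true**2, r_true**3, r_true**4]).T
    coeffs, *_ = np.linalg.lstsq(A, r_obs - r_true, rcond=None)
    return coeffs

# Synthetic example with arbitrary (not Suprime-Cam) coefficients
rng = np.random.default_rng(1)
r = np.linspace(0.0, 12000.0, 300)                      # pixels, out to the field edge
r_obs = radial_distortion(r, 1e-9, -2e-13, 3e-18) + rng.normal(0.0, 0.05, r.size)
print(fit_distortion(r, r_obs))
```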

To construct a mosaic, we need a flux calibration to correct for the differences in sensitivity from chip to chip (Δfc). These differences originate from differences in amplifier gain, differences in the normalization of flat fielding, and from corrections for the transmission of the atmosphere from exposure to exposure Δfe. We derive this calibration by comparing the fluxes for unsaturated stars in overlapping regions of the frames. We call these parameters the "mosaicking rule."

Operationally, we reduce the rotational component (Δϕ, ΔΦ) to the linearized form of the rotational matrix consisting of four components ((ϕij), (Φij)), where i, j ∈ (1, 2). Because this matrix can treat not only rotation but also expansion, we include the first coefficient of the distortion parameter in this matrix. Varying the basic mosaic geometry parameters (Δx, Δy, ϕij, Δf)c, (ΔX, ΔY, Φij, Δf)e, (b, c, d), we minimize

Equation (2)

where $\boldsymbol {\eta }_{i}^{(e)}$ and $f_{i}^{(e)}$ are the position in distortion-free coordinates and the corrected flux of the ith unsaturated star in the eth exposure, and σi, i ∈ (η, f), are the typical errors in the measurements of position and flux, respectively. As constraints, we fix the parameters for the first exposure of the series and for the bottom-left chip, i.e., $(\Delta x, \Delta y, \Delta \phi, \Delta f)_{\rm 0}= {\boldsymbol 0}$ and $(\Delta X, \Delta Y, \Delta \Phi, \Delta f)_{\rm 0}= {\boldsymbol 0}$.

This mosaicking rule is calculated by imcat (Kaiser et al. 1995). The best-fit parameters are obtained by minimizing the positional difference of unsaturated control stars (70–100 stars per CCD) for each exposure.

The residual alignment error in this procedure is ∼0.5 pixel rms (0.1''), still large compared with the typical distortions of the sources in weak lensing. Many stacking pipelines adopt only this "typical stacking" warp. We note that a typical stellar size is 0.6''; it should thus be possible to align the stellar positions to ≲ 0.06'', 1/10 of the stellar size. The accuracy of the "typical stacking" is therefore insufficient; it falls short of what the data themselves allow.

2.2. Jelly Focal Plane Warp

Residual positional differences from exposure to exposure in the typical stacking procedure in Section 2.1 may result from non-symmetric features of optical distortion (possibly due to imperfect optical alignment) not considered in the model. Differential atmospheric dispersion is another source of asymmetry.

We parameterize the residuals relative to one another by a polynomial function of the field position; we expect the residuals to be continuous over the field of view. The residuals are

Equation (3)

where $\boldsymbol {\eta }=(\xi, \eta)$. We then obtain the coefficients alm, e and blm, e by minimizing the variance of the residuals. Each individual CCD image is "warped" using this polynomial correction prior to the stacking. This process reduces the alignment error to ∼0.05 pixel (0.014''). Miyazaki et al. (2002a, 2007) and Kurtz et al. (2012) used this transformation following Kaiser et al. (1998). We refer to this transformation as "the original warp," and we call the results of this warp "the original stacking." The result is similar to Miyazaki et al. (2002a, 2007) and Kurtz et al. (2012).
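
A minimal sketch of a residual ("jelly") warp fit of this kind, assuming a bi-polynomial of modest order for Equation (3); the polynomial degree, function names, and the sign convention for applying the correction are illustrative assumptions.

```python
import numpy as np

def poly_terms(x, y, order=3):
    """Bi-polynomial terms x^l * y^m with l + m <= order (the order is an assumption)."""
    return np.vstack([x**l * y**m
                      for l in range(order + 1)
                      for m in range(order + 1 - l)]).T

def fit_jelly_warp(x, y, dx, dy, order=3):
    """Fit smooth polynomial corrections to per-star positional residuals (cf. Eq. (3))."""
    A = poly_terms(x, y, order)
    a_lm, *_ = np.linalg.lstsq(A, dx, rcond=None)
    b_lm, *_ = np.linalg.lstsq(A, dy, rcond=None)
    return a_lm, b_lm

def apply_jelly_warp(x, y, a_lm, b_lm, order=3):
    """Correct pixel positions with the fitted polynomials before stacking."""
    A = poly_terms(x, y, order)
    return x - A @ a_lm, y - A @ b_lm
```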

2.3. Registration onto the Celestial Coordinate

The third transformation is new, and we apply it as the final step in the reduction. The result of Section 2.2 is that all exposures except for the first one are registered on the first exposure with an accuracy of ∼0.05 pixels, the tolerance we seek. However, there is no guarantee that the geometry of the first exposure is well matched to the celestial coordinate.

In practice, the absolute rotation of the instrument rotator is less well known than its rotational angle relative to its zero point. There is no mechanical system to determine the zero-point rotational angle for the rotator accurately for Suprime-Cam; the rotational angle relative to the origin is, however, very well known. The possibility of this zero-point offset suggests that the final Suprime-Cam image will be rotated relative to the absolute celestial coordinate.

To align the final image with the celestial coordinate frame (not some artificial distorted frame, but a physical frame), we use an external catalog, USNO-A2.0 (Monet 1998). This astrometric standard catalog has an accuracy of 0.4'', although USNO-A2.0 is not deep enough to match the Suprime-Cam catalogs. The overlapping magnitude range includes moderately bright stars; these stars are saturated but do not have blooming pixels. It is thus difficult to measure their total intensity, but it is possible to measure their position because they have circular symmetry. We do not use the Sloan Digital Sky Survey (SDSS) star catalog because it does not cover enough of the sky to allow us to extend our procedure outside the GTO field. We confirm that using the SDSS external astrometric catalog makes no significant difference in the warp relative to use of the USNO-A2.0.

In the same manner as the transformation in Section 2.2, we parameterize the residuals (Δx, Δy) of bright stars (they are slightly separated from the linear stellar sequence but have a comparable positional determination to the unsaturated stars provided that there are no blooming pixels) in the first exposure with respect to the USNO-A2.0 celestial coordinate as a polynomial function of field position $\boldsymbol {x}=(x,y)$ (Equation (3)). This procedure yields absolute astrometric object positions accurate to ∼0.2''. The internal relative alignment of stars remains at the level of ∼0.05 pixels. We refer to this transformation as "the new warp," and we call the result of this warp "the new stacking."

We present a schematic picture of these three steps of the image reduction in Figure 1 and summarize the three-step image reduction pipeline (Sections 2.1–2.3) compared with the original and typical pipelines in Table 2.

Figure 1. Schematic of the three steps in the mosaic warp.

Table 2. Summary of the Warp Rules

  New Original Typical Registration
Distortion f(R) Yes Yes Yes ∼0.5 pixel
Jelly-warp Yes Yes No ∼0.05 pixel
Align to sky Yes No No ∼0.05 pixel
Astrometry ∼0.2 arcsec Uncontrolled Depends on pipeline


Once we have all of the parameters that describe these three steps, we carry out the image warp all at once rather than applying the transformation step by step. This integration of the procedures is necessary to minimize the artifacts due to the interpolation of pixels. We estimate pixel values via third-order bi-polynomial interpolation of 4 × 4 source pixels. We employ the arithmetic mean for stacking.

The easiest way to remove cosmic rays is median stacking. However, median stacking requires adjusting the PSF for each exposure to the worst case. The smoothing smears shape information, which is the key observable for weak-lensing analysis. To avoid this dulling, we use the arithmetic mean in the stacking. The arithmetic mean stacking also has the advantage of conserving flux in stacking without smoothing. To minimize the contamination from cosmic rays, we use the median stacked image to detect objects.

A cosmic ray may coincide with a galaxy in the mean image. To evaluate the effect of cosmic-ray coincidence with galaxy images on the shear measurement, we measure the event rate of cosmic rays using dark images. We use dark images taken between 2001 April 1 and 2008 April 1 with OBJECT name "DARK" from the SMOKA archive server. We run SExtractor on these images and then count the number of cosmic rays. The derived event rate of cosmic rays is 0.01189 ± 0.00005 arcmin−2 s−1. Using the exposure times of our stacked images, 1800–3600 s, we expect 21–42 cosmic-ray events per square arcminute in our stacked images. This rate corresponds to 1.3–2.5 objects coincident with a cosmic ray per square arcminute for a typical galaxy aperture. The contribution of cosmic rays to the shear measurement is thus reduced to less than ∼2/30 ≈ 7%. This contribution is not a bias but a random error. Figure 2 shows a 1' × 1' region including cosmic rays. We show the difference between the mean stacked image and the median image. Ellipses indicate the footprints of the galaxies. As illustrated in this figure, our evaluation of the coincidence rate is reasonable. Thus, we conclude that we can ignore the cosmic-ray effect even when using the arithmetic mean. In Appendix A we include an evaluation of the mass map with a sigma-clipped mean and demonstrate that it makes no difference to the highest peaks in the map.
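
The arithmetic behind the quoted cosmic-ray numbers can be checked with a few lines; the galaxy density and aperture area in the coincidence estimate below are assumed round numbers, not the exact values behind the 1.3–2.5 figure.

```python
cr_rate = 0.01189                 # cosmic rays arcmin^-2 s^-1, from the dark frames
n_gal = 33.0                      # assumed source density (arcmin^-2), cf. Section 4.1
aperture_arcsec2 = 6.0            # assumed typical galaxy aperture area (arcsec^2)

for t_exp in (1800.0, 3600.0):
    n_cr = cr_rate * t_exp        # cosmic rays per arcmin^2 in the stack
    n_coincident = n_cr * n_gal * aperture_arcsec2 / 3600.0
    print(f"t_exp = {t_exp:4.0f} s: {n_cr:4.1f} CRs arcmin^-2, "
          f"~{n_coincident:.1f} coincidences arcmin^-2")
```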

Figure 2. 1' × 1' region of GTO. We show the difference between the mean stacked image and the median image. Cosmic rays and remnants of bright stars dominate the image. Ellipses show the footprints of the galaxies.

3. GALAXY CATALOG

To construct the galaxy catalog from the mosaic image, we use the imcat tools "hfindpeaks" (object detection) and "apphot" (photometry). We use the half-light radius "rh" as the size of an object. Using this size, we can distinguish galaxies from stars by requiring that rh for galaxies satisfy the relation

Equation (4)

where $r_{h}^{*}$ and $\sigma _{r_{h}^{*}}$ are the half-light radius of a stellar image and its rms, respectively.
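
A sketch of the size-based star/galaxy separation; the multiplier k applied to the stellar rms is an assumption, since the exact form of Equation (4) is not reproduced here.

```python
import numpy as np

def select_galaxies(rh, rh_star, sigma_rh_star, k=2.0):
    """Size-based star/galaxy separation; k is an assumed multiplier.

    rh            : half-light radii of the detected objects
    rh_star       : half-light radius of the stellar locus
    sigma_rh_star : rms of the stellar half-light radii
    """
    return np.asarray(rh) > rh_star + k * sigma_rh_star

# Usage: keep = select_galaxies(rh_all, rh_star, sigma_rh_star); galaxies = catalog[keep]
```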

We merge our catalog of stars with the SDSS star catalog to calibrate the photometric zero point (see Equation (5)). Typically, the number density of stars to the depth of SDSS is several arcmin−2; the precision of the astrometric catalog is ≲ 1''.

We determine the photometric zero points for each subfield from the SDSS photometric catalog following Utsumi et al. (2010). The transformation equation between the photometric systems is

Equation (5)

To assess the depth of our images, we follow the standard procedure of SDFRED to derive the limiting magnitude (Yagi et al. 2002; Ouchi et al. 2004). We give the 5σ limiting magnitude in a 2'' photometric aperture in Table 1.

To remove low-S/N regions around sufficiently bright stars, we need a procedure for constructing masks. Because masking reduces the effective survey area, the mask radii should be as small as possible. However, it is hard to extrapolate the PSF profile into its faint tail with a Gaussian or Moffat model, and this tail is more significant for brighter stars. Thus, we construct a wide-dynamic-range PSF profile from the actual data. Stars brighter than 20th magnitude are usually saturated in a typical Suprime-Cam exposure with ∼100 s integration. We measure the actual PSF of Suprime-Cam over a wide dynamic range by compiling the PSFs of faint and bright stellar objects. We divide the stellar samples into subsamples segregated by R-band magnitude, moderately faint (20–20.5 mag, S/N ∼ 300), intermediate (18–18.5 mag, S/N ∼ 2000), and bright (13.75–14.25 mag, S/N ∼ 100,000), combine them by taking the arithmetic mean, and then measure their radial profiles. Although we take a mean profile of bright stars, we also reject exceedingly bright pixels by choosing an appropriate threshold for each image in order to avoid saturated areas or blooming regions. We note that we do not use the moderately faint, intermediate, and bright subsamples in the weak-lensing analysis. We use only stars with 20 ≲ m ≲ 22 in the weak-lensing PSF anisotropy correction. We evaluate the PSF shape measurement error from the rms of the stellar ellipticities. The measured error scales as the inverse square root of the photon count, which means that the dominant error is statistical photon noise. With this magnitude range, the error on the PSF shape measurement should be ∼1% at 20 mag and ∼2% at 22 mag. We also confirm that the choice of the faint limit for star selection does not affect the result.

For stars with R < 14, there are not enough stars to determine the mean PSF. Thus, we consider an optical model of Suprime-Cam to determine the size of the surrounding halo. The halo surrounding a bright star is a reflection from the optical glass components in front of the CCDs. The nearest and second-nearest optical elements close to the CCDs are the entrance window of the dewar and the filter; their distances from the CCDs are 10.0 mm and 34.5 mm, respectively. For both elements, the thickness and effective refractive index are 15.0 mm and 1.460280, respectively. The focal ratio of Suprime-Cam is 1.86. Thus, light reflected at the first surface, the back side of the dewar window, should appear 71.3'' away from the original position. Light from the other surfaces appears 144'', 212'', and 284'' away from the original position. This model is consistent with the data, and we use it to derive the mask radii.

We summarize masks in Appendix B. Adopted mask radii for brighter stars are shown in Table 7. We mask some stars with R < 11 by visual inspection to remove obvious artifacts. Tables 8 and 9 show the masks.

4. BASIC WEAK-LENSING ANALYSIS

To construct an initial weak-lensing map, we follow the procedures described in Miyazaki et al. (2002a, 2007) and Kurtz et al. (2012). In this section we review the selection of sources and the construction of the mass, noise, and B-mode maps. In discussing the B-mode map, we motivate the investigation (Section 5) of sources of systematic error in this map.

4.1. Sources

We use galaxies with 23 < m < 26 and detection significance ν > 10 as source galaxies. We eliminate high-ellipticity objects with |e| > 0.6 because these objects are sometimes blended. This magnitude range gives an average magnitude of 24.5 for the sources in both the GTO and DLS fields. The fraction of objects rejected with the |e| > 0.6 cut is ∼0.3%. We confirm that these objects affect the B-mode at the <1% level. The average source density we use in the weak-lensing analysis is 34.5 arcmin−2 and 31.6 arcmin−2 for the GTO and DLS fields, respectively. Figure 3 shows number density maps of the source galaxies for the weak-lensing analysis. The behavior differs between the two fields. At the 90% confidence level for the GTO field, the source density and the depth are correlated, but there is no significant correlation with the seeing. In the DLS field, we see only an anti-correlation between the source density and the seeing. We discuss the source density variation and its effect on the systematic error in Appendix C, which shows that the field-to-field variation introduces no systematic error (Miyazaki et al. 2007; Furusawa et al. 2008).

Figure 3. Surface number density of weak-lensing sources. For the GTO field (left) the sources have 23 < Rc < 26; for the DLS field (right) the sources are in the range 23 < i' < 26.

We show a histogram of galaxy size as the FWHM for both fields in Figure 4. We note that the only size cut we make is based on the galaxy–star separation (Equation (4)). Furthermore, we do not apply a weighting scheme.

Figure 4. Histograms showing the galaxy size and stellar size (in terms of the FWHM) for both fields. The solid line shows results for the GTO field, and the dashed line shows those for the DLS field.

Because we do not have measured redshifts for the source galaxies, we use the fitting formula from Schrabback et al. (2010) for galaxy redshifts as a function of magnitude in the range 23 < i814 < 25. With this relation we estimate an average redshift of 1.2 for the DLS field where we have i-band data. For the GTO field where we have only R-band data, there is no suitable redshift–magnitude relation. We thus assume the same average redshift as the DLS field. If the actual redshift is 1.0 rather than the assumed 1.2, we introduce an ∼10% error in the estimate of the distance ratio. We note that this error in mass estimation is only important for absolute calibration.

To derive the ellipticities of the sources, we use getshapes, efit, and ecorrect in imcat. We evaluate the PSF anisotropy correction using faint stars (20 ≲ m ≲ 22; 50 ≲ S/N ≲ 300) in the field of view, and we model the PSF distortion as a fifth-order bi-polynomial as a function of position. The Pγ correction follows KSB+ (Hoekstra et al. 1998), who adopt a galaxy-size-dependent correction. The actual γ-field is discretized and noisy because it can only be estimated at the positions of galaxies. Note that we do not weight the estimated shear value according to a galaxy's magnitude. We apply a Gaussian smoothing filter to the γ field following Oguri & Hamana (2011). Initially, we adopt a smoothing length θG = 1.5', following Kurtz et al. (2012).

We show a test of bias in the γ dependence in Appendix D.

4.2. Constructing the κ-S/N Map

We can now construct the κ-S/N map to search for mass concentrations based on the significance of the lensing signal. We derive the κ map directly from the γ map of the sources in Section 4.1. We divide this map by the noise map to obtain the κ-S/N map.

We use Monte Carlo realizations to construct the noise maps. We rotate the orientation of each source galaxy by a random angle. The randomized catalog is then the basis for the noise map. We repeat the Monte Carlo process 100 times and then calculate the rms at each grid point of the noise map to estimate the noise at that point. The κ-S/N map then reflects both the intrinsic ellipticity noise and the variation in the number density of source galaxies.
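
A minimal sketch of this randomization, assuming a placeholder `make_kappa_map` for the actual smoothing/inversion step described in Sections 4 and 5.

```python
import numpy as np

def randomize_shears(e1, e2, rng):
    """Rotate each source ellipticity by a random angle, destroying any lensing signal."""
    e1, e2 = np.asarray(e1), np.asarray(e2)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=e1.shape)
    e = (e1 + 1j * e2) * np.exp(2j * phi)   # spin-2 quantity: rotate by 2*phi
    return e.real, e.imag

def noise_rms_map(e1, e2, make_kappa_map, n_real=100, seed=0):
    """Pixel-wise rms over noise realizations built from randomized catalogs.

    make_kappa_map(e1, e2) -> 2D array stands in for the smoothing/inversion
    pipeline of Sections 4 and 5; it is a placeholder, not part of imcat.
    """
    rng = np.random.default_rng(seed)
    maps = [make_kappa_map(*randomize_shears(e1, e2, rng)) for _ in range(n_real)]
    return np.std(np.array(maps), axis=0)
```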

In the following sections (Sections 4.3 and 4.4), we examine the systematic error in our weak-lensing map by looking at the S/N distribution of the noise and B-mode maps. We demonstrate that the noise map obeys the expected Gaussian S/N distribution; however, the original B-mode map does not.

4.3. Testing the Noise Map

In the absence of signal, the S/N distribution for the noise map should match a standard normal distribution, which has no free parameters. We test this expectation by constructing a "Noise S/N map" for each of the 100 Monte Carlo realizations. Pixel values on each of these maps are strongly correlated spatially because the maps are smoothed with the Gaussian kernel. However, a series of pixel values at a fixed position in the 100 "Noise S/N maps" should obey a normal distribution.

We show the S/N histogram for randomized realizations in Figure 5. The random realizations have some scatter that we use to evaluate the presence of systematic errors in the B-mode map. To quantify the scatter, we define the "Deviation":

Equation (6)

where Ni = dN/d(S/N)i is the number of pixels per d(S/N)i bin in the observed distribution (obs) and in the standard normal distribution (norm), and the error is the standard deviation. We obtain the merit function, a kind of reduced chi-squared $\tilde{\chi }^{2}$, from the sum of the squared deviations divided by the number of degrees of freedom (the number of bins):

Equation (7)

Figure 6 shows the normalized cumulative distribution of $\tilde{\chi }^{2}$ for all random realizations. This wide range, $0\le \tilde{\chi }^{2}\lesssim 65$, sets the maximum $\tilde{\chi }^{2}$ consistent with pure noise and thus the benchmark against which we compare the B-mode maps.
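
A sketch of the merit function described by Equations (6) and (7): the observed S/N histogram is compared with the standard normal expectation, normalized by the bin-to-bin scatter of the random realizations, and the squared deviations are averaged over the bins. The binning and the use of the realization scatter as the error are assumptions consistent with the text.

```python
import numpy as np
from scipy.stats import norm

def merit_chi2(sn_map, noise_sn_maps, bins=np.arange(-5.0, 5.25, 0.25)):
    """Reduced-chi-squared-like merit function (cf. Equations (6) and (7))."""
    n_pix = sn_map.size
    n_obs, _ = np.histogram(sn_map.ravel(), bins=bins)
    # expected counts per bin for a standard normal distribution
    n_norm = n_pix * (norm.cdf(bins[1:]) - norm.cdf(bins[:-1]))
    # bin-to-bin scatter estimated from the randomized (noise) S/N maps
    counts = np.array([np.histogram(m.ravel(), bins=bins)[0] for m in noise_sn_maps])
    sigma = counts.std(axis=0)
    good = sigma > 0
    deviation = (n_obs[good] - n_norm[good]) / sigma[good]    # "Deviation", Eq. (6)
    return np.sum(deviation**2) / good.sum()                  # merit function, Eq. (7)
```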

Figure 5. Top: S/N distributions for 100 noise maps in the GTO field, which are obtained by randomizing orientations of source galaxies. Different points indicate different realizations. The dashed line shows the expected normal distribution. Bottom: deviation from the normal distribution (Equation (6)). The sum of the squared deviations divided by the number of degrees of freedom is a measure of the systematic error.

Figure 6. Normalized cumulative distribution of the merit function $\tilde{\chi }^2$ (Equation (7)) from the 100 realizations of the noise maps.

4.4. Testing the B-Mode Map

In this section, we quantify the quality of the B-mode map. The standard B-mode map results from rotating each shear |γ|exp (2iθ) by π/4, i.e., γ1 + iγ2 → −γ2 + iγ1, where θ is the position angle of each source's shear. Because weak lensing does not produce a B-mode, we can evaluate the level of systematic error inherent in the mass reconstruction procedure by comparing the B-mode map with the noise map (Schneider et al. 2002).
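
In code, the rotation is a one-liner (generic, not tied to any particular pipeline):

```python
import numpy as np

def rotate_to_bmode(gamma1, gamma2):
    """Rotate each shear by 45 deg: gamma -> i*gamma, i.e., (g1, g2) -> (-g2, g1)."""
    return -np.asarray(gamma2), np.asarray(gamma1)
```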

Figure 7 shows the pixel distribution for the B-mode maps with the original, typical, and new stacking (Table 2). The S/N distribution has a large deviation relative to the standard normal distribution: $\tilde{\chi }^{2}$ = 257 (1204) for the original (typical) stacking.

Figure 7. Top: histogram of S/N values for B-mode maps derived from the new result (circles), as well as original (triangles) and typical (squares) stacking. The dashed line shows the expected normal distribution if there is no systematic error. Bottom: same as Figure 5 but for the three kinds of B-mode.

A possible origin of these B-mode artifacts is misalignment during the stacking procedure. Assuming that we are stacking circular Gaussian objects, a positional misalignment of ∼10% of the object size generates ∼1% artificial ellipticity. The positional difference applies to individual galaxies and their neighbors. The smoothing procedure then leads to detection of these ∼1% ellipticities as a lensing signal. Thus, stars must be registered internally at the 1% level (∼0.05 pixels).
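
A simple numerical check of this estimate, under the assumption that the 10% misalignment is measured relative to the FWHM of a 0.6'' circular Gaussian; the second moments of the sum of two offset Gaussians give the induced ellipticity directly.

```python
import numpy as np

fwhm = 0.6                   # arcsec; typical stellar size quoted in Section 2.1
sigma = fwhm / 2.355         # Gaussian sigma
d = 0.10 * fwhm              # assumed 10% misalignment between two exposures

# Second moments of the sum of two identical, equally weighted circular Gaussians
# separated by d along x, measured about their common centroid.
q_xx = sigma**2 + (d / 2.0) ** 2
q_yy = sigma**2
e = (q_xx - q_yy) / (q_xx + q_yy)
print(f"induced ellipticity ~ {e:.2%}")   # of order 1%
```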

Next, we test procedures for reducing this kind of artificial signal in the B-mode map: (1) SkyFrame, (2) wide FFTBox, (3) masking the edge of the field, (4) higher order PSF anisotropy correction, and (5) smoothing cutoff.

5. REDUCING THE B-MODE SIGNAL

Systematics issues that might produce an artificial signal in the B-mode map include errors in the physical coordinate system, excess large-scale power, boundary effects, and errors in the PSF anisotropy correction.

SkyFrame. If the stacked image is bent with respect to the physical coordinate system, the E-mode signal contaminates the B-mode. The tilt of the coordinates introduces a rotation of the sources.

The "new stacking" (Section 2.3) eliminates this bending. The "new stacking" image is aligned relative to the sky coordinate within ∼1 pixel; the internal registration accuracy is ∼0.05 pixels.

Wide FFTbox. There are two ways to perform the KS93 analysis (Kaiser & Squires 1993): (1) convolve the shear field with the complex kernel, or (2) perform the convolution in the Fourier domain. Mathematically these procedures are identical. In practice, however, boundary effects can be a problem for the Fourier-domain operation. To avoid the boundary effects present in the original reduction (Kurtz et al. 2012), we embed the shear signal for the E/B-mode maps within a calculation box twice the size of the field, with zero padding outside the map area.
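
A generic sketch of the Fourier-domain KS93 inversion with the zero-padded box described above; the padding factor follows the text, while the gridding and sign conventions are standard choices rather than the authors' implementation.

```python
import numpy as np

def ks93_fft(g1, g2, pad_factor=2):
    """Kaiser & Squires (1993) inversion on a zero-padded grid (returns E-mode kappa)."""
    ny, nx = g1.shape
    NY, NX = pad_factor * ny, pad_factor * nx
    gamma = np.zeros((NY, NX), dtype=complex)
    gamma[:ny, :nx] = g1 + 1j * g2              # zero padding outside the map area

    k1 = np.fft.fftfreq(NX)[np.newaxis, :]
    k2 = np.fft.fftfreq(NY)[:, np.newaxis]
    ksq = k1**2 + k2**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d_conj = ((k1**2 - k2**2) - 2j * k1 * k2) / ksq
    d_conj[0, 0] = 0.0                          # the mean of kappa is unconstrained

    kappa = np.fft.ifft2(d_conj * np.fft.fft2(gamma))
    return kappa.real[:ny, :nx]                 # imaginary part carries the B-mode
```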

Masking the edge of the field. At distances more than 15' from the optical axis the field of view of Suprime-Cam is vignetted (Miyazaki et al. 2002b). The distortion is also large, ≳ 10'', at the edge of the field. We thus try masking the region outside 15'.

Smoothing cutoff. To trace the origin of the remaining B-mode signal, we examine the power spectrum of the B-mode map.

There is a significant excess in the power spectrum of the B-mode map relative to the noise map on large scales (≳ 10'). This excess is comparable with the power in the E-mode (Figure 8). The origin of this signal in the B-mode is unclear, but it should not be present.

Figure 8. Power spectrum of the noise maps (short-dashed and dotted lines) and B-mode maps constructed with various procedures (all other curves as noted in the figure). The power spectrum of the B-mode has an excess relative to the noise maps on scales smaller than the typical separation of stars and on scales larger than 10'. For scales smaller than 1.5', smoothing decreases the amplitude of the power spectrum. A higher order PSF correction has no impact. For scales of 1'–10', there is no significant excess in the B-mode power spectrum. For scales larger than 10', all procedures fail to reduce the B-mode excess. Because the typical scale of a cluster of galaxies is a few arcminutes, these scales can safely be filtered out. The best result is the short-dashed line; the suppression on large scales is clear.

To suppress this large-scale systematic error in the B-mode map, we truncate the smoothing kernel at a scale of 10'. Figure 7 shows the result. The truncation reduces the B-mode signal.
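
A sketch of a truncated smoothing kernel of this kind, using the kernel form exp(−(θ/θG)²) and pixel scale quoted in Section 8; the normalization and grid construction are illustrative.

```python
import numpy as np

def truncated_gaussian_kernel(theta_g=1.5, cutoff=10.0, pix=0.33):
    """Gaussian smoothing kernel exp(-(theta/theta_G)^2), truncated at `cutoff`.

    All arguments are in arcmin: theta_g = 1.5', cutoff = 10', and a 0.33' pixel
    scale, as quoted in the text.
    """
    n = int(np.ceil(cutoff / pix))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    theta = np.hypot(x, y) * pix
    kernel = np.exp(-((theta / theta_g) ** 2))
    kernel[theta > cutoff] = 0.0      # suppress the large-scale (>10') B-mode power
    return kernel / kernel.sum()
```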

Higher Order PSF anisotropy correction. To correct the PSF anisotropy induced by atmospheric disturbance or optical aberration, we apply a fifth-order polynomial PSF model as a function of position. If the order of the polynomial is not sufficient to remove the systematic variations in the PSF, a B-mode is produced. To investigate this issue, we test a sixth-order polynomial.

5.1. Summary of the Tests

Table 3 shows the result of these procedures. The "SkyFrame" registration reduces the B-mode power on small scales, and the "Smoothing Cutoff" reduces the power on large scales (Figure 8). These two procedures have the largest effect on reduction of systematics in the B-mode map.

Table 3. Summary of the Tests

Tests $\tilde{\chi }^{2}$
Typical stacking image 1204.7
Original resulta 257.7
SkyFrame 90.7
SkyFrame+Wide FFTBox 81.9
SkyFrame+Wide FFTBox+6th 84.3
SkyFrame+Wide FFTBox+Masking the edge 86.3
SkyFrame+Direct Convolution+Smoothing cutoff 54.4

Notes. Result values from the merit function defined as Equation (7) according to adopted tests. aKurtz et al. (2012).

Download table as:  ASCIITypeset image

The "WideFFTBox" also reduces the B-mode, but not significantly. We thus use direct convolution instead of the fast Fourier transform (FFT) implementation.

"Masking the edge of the field" and "Higher order PSF anisotropy correction" have little effect for the GTO field. "Masking the edge of the field" does not suppress systematics in the B-mode map. The same is true of the higher order corrections for PSF anisotropy.

Based on these results, we include

  1. the SkyFrame image,
  2. a smoothing cutoff at 10', and
  3. direct convolution instead of the wide FFT box

in the construction of the maps. We show the result of this procedure in Figure 7. The pixel distribution of the B-mode S/N map indicates that residual systematics are reduced to a reasonably low level if we adopt appropriate procedures.

To assess whether the deviation of the B-mode signal is consistent with random realizations, we compare its $\tilde{\chi }^{2}$ value with the distribution over the random realizations. The maximum $\tilde{\chi }^{2}$ among the realizations (Figure 6) exceeds the value we derive from the B-mode signal for the GTO field, $\tilde{\chi }^{2}=54.4$, which is in turn much less than the original value of 257. Thus, we confirm that our revised lensing map is substantially less affected by systematic error. Figure 9 shows the resulting mass map with the "new warp," and Figure 10 shows the resulting S/N distribution for both the mass and B-mode maps.

Figure 9. Mass maps for the GTO field overlaid on Subaru images. Contours start at the 2σ level and are spaced by 0.5σ. Boxes show the identified peak positions along with the peak number and significance. Masked regions around brighter stars are also overlaid.

Figure 10. S/N distributions for E-mode (triangles) and B-mode (circles) maps for the GTO fields. Lower panel shows the deviation from the normal distribution for the B-mode map.

5.2. The Impact of SkyFrame

The most efficient procedure for reducing the B-mode systematics is "SkyFrame." Figure 11 shows the displacement vectors of stellar positions between the distortion-free coordinates of the original warp and the stereographic projection of the USNO-A2.0 celestial coordinates about the tangent point. Although the displacement is small, a few pixels, there is a coherent pattern. Smoothing during construction of the map turns such coherent patterns into signals that dominate the noise from random ellipticities. These patterns can introduce the small-scale B-mode signal. It is also possible to generate a false E-mode signal from this coherent pattern.

Figure 11. Displacement vector map of stellar positional differences between the undistorted detector coordinate (Section 2.3) and the stereographic projection of the celestial coordinate from the USNO-A2.0 catalog at the tangent point for the GTO subfield. The box shows approximately 32' × 27', which is the field of view of Suprime-Cam. The length of the arrows indicates the positional difference in the detector coordinate multiplied by 1000. The displacement is complex and includes rotational components at the level of the pixel scale.

There are two possible explanations for the residual pattern in the original warp: (1) an anisotropic distortion resulting from optical aberration, and/or (2) a disturbance in the curvature of the wave front due to atmospheric effects. The globally rotated component results from a small rotation of the image rotator. Higher order patterns vary from field to field.

Yoshino et al. (2007) evaluated the anisotropy in the distortion from the corrector lens in Suprime-Cam with a positional accuracy of 0.4''. They found that there is an ∼0.8'' difference in the isotropic distortion pattern between the right and left edges of the field of view. These distortions are comparable with the residual displacement between the distortion-free coordinates of the original image and the projected celestial coordinates.

These patterns differ from field to field (Figures 11(a) and (b)). The field-to-field difference is equivalent to the exposure-to-exposure difference because all frames are simply registered to the first frame in the series of exposures. These issues can be the source of the patterns we find.

A non-static factor like an atmospheric disturbance cannot explain the patterns. de Vries et al. (2007) show that longer exposures yield progressively lower ellipticities of the PSF (as $1/\sqrt{t_{\rm exp}}$), suggesting that the coherent differences shown in Figure 11(a) should not be caused by atmospheric disturbance.

6. DEFINING THE WEAK-LENSING PEAKS

At least two issues are important in defining the weak-lensing peaks in the κ-S/N map. The smoothing length is important because it reduces the shape noise. Furthermore, the smoothing in part determines the sensitivity to cluster detection as a function of mass and redshift. The threshold for peak finding is also important. The fraction of false positives is sensitive to the threshold. Here we discuss these two issues.

6.1. Smoothing Length

In the previous sections, we constructed weak-lensing mass maps with a fixed smoothing length of 1.5'. A smaller smoothing length introduces more noise in the map resulting from shape noise. A larger smoothing length (larger than a few arcminutes) smooths out the cluster signal; the angular diameter of typical clusters for redshifts 0.2 < z < 0.7, the range of sensitivity of the weak-lensing maps, is several arcminutes.

Previous studies adopt a range of smoothing lengths (Wittman et al. 2001, 2006; Miyazaki et al. 2002a, 2007; Hetterscheidt et al. 2005; Schirmer et al. 2007; Gavazzi & Soucail 2007; Maturi et al. 2007; Bergé et al. 2008; Dietrich et al. 2007). To examine the effect of the smoothing length, we construct E/B-mode maps and random maps with three different smoothing lengths: θG = 0.5', 1.0', and 1.5'.

Figure 12 shows the relation between $\tilde{\chi }^{2}$ and θG. The $\tilde{\chi }^{2}$ of the E-mode increases with the smoothing length. The $\tilde{\chi }^{2}$ of the B-mode also increases.

Figure 12. Relation between the merit function $\tilde{\chi }^{2}$ and the smoothing scale θG. Points with error bars denote mean values of $\tilde{\chi }^{2}$ and its standard deviation. The solid line shows (θG/0.33')²/2, the square of the sigma of the Gaussian smoothing kernel in pixel units (0.33' is the pixel scale).

The averaged $\tilde{\chi }^{2}$ value among the 100 random realizations is $\tilde{\chi }^{2} > 1$, and it increases with the smoothing length θG. This occurs because the smoothing reduces the effective number of independent pixels. For small θG, the behavior of $\tilde{\chi }^{2}$ comes from the pixelization caused by dividing the smoothing kernel into coarse pixels. The shortest smoothing length, θG = 0.5', corresponds to only ≈1.5 pixels. The coarse sampling cannot reproduce the Gaussian smoothing kernel well; thus, the amplitude of the scaling relation may be shifted upward. This kind of pixelization effect is hard to describe analytically. Because the range 0.5' < θG < 1.5' might be affected by this pixelization effect, we do not correct for the number of effective pixels. Instead, we use the scale-dependent $\tilde{\chi }^{2}(\theta _{\rm G})$ measured from the random realizations as the reference value.

We estimate the optimal smoothing length for detecting clusters as a function of mass and redshift. Following Hamana et al. (2004), we analytically evaluate the expected S/N for weak-lensing halos with universal Navarro–Frenk–White (NFW) profiles. Because more distant clusters have smaller angular diameter, the S/N is redshift dependent. Furthermore, the typical scale radius of an NFW cluster depends on its mass; thus, the S/N is also mass dependent. We compute the S/N as a function of smoothing scale θG. The S/N for less massive clusters (M14 ≡ 10^14 h−1 M☉; M14 = 3.0 for z = 0.5, M14 = 1.0 for z = 0.2) reaches a maximum at θG ≈ 1.0'; more massive clusters are detected at greater S/N with a smoothing length larger than θG ≈ 1.0'. However, the difference between 1.0' and 1.5' is small. At a fixed cluster mass, smaller smoothing lengths yield higher S/N at larger redshift. Because these effects are small, we choose θG = 1.5'.

6.2. Peak Definitions and Thresholds

We use SExtractor to locate peaks in the E-mode map; we require at least three connected pixels (>2σ) to define a peak. Selecting the appropriate threshold for a clean peak catalog is a subtle issue. Figure 13 shows the cumulative histogram of the highest peaks found in each noise map as a function of S/N. This figure shows the likelihood that significant peaks can be found accidentally. Statistics describing the highest peak for the 100 realizations are shown in the figure and summarized in Table 4.

Figure 13. Cumulative distribution of the highest peak S/N values for 100 realizations of noise maps for the GTO field. The solid line shows the expectation from the normal distribution. This figure illustrates the probability of finding a false peak accidentally as a function of the S/N threshold for the mass map. The simple mean and standard deviation for the 100 realizations are quoted in this figure.

Table 4. S/N and Peak Statistics as a Function of the Smoothing Length θG

  S/N Values Max Peak Detected Peak
θG $\tilde{\chi }^{2}_{B}$ $\tilde{\chi }^{2}_{E}$ σ μ σ $({\rm S/N})_{3\sigma }$ $({\rm S/N})_{99\%}$ $N_{3\sigma }$ $N_{\rm 99\%}$
0.5 1.94 29.9 1.01 4.07 0.25 4.82 5.05 2 0
1.0 15.1 175 1.05 3.83 0.22 4.48 4.64 4 4
1.5 54.4 545 1.11 3.72 0.30 4.62 4.57 4 4

Notes. "S/N Values" shows statistical properties of the S/N histogram: the merit functions for the B- and E-modes and the standard deviation σ of the pixel values. "Max Peak" shows the mean μ and standard deviation σ of the highest peak values over the 100 noise realizations, the corresponding 3σ threshold $({\rm S/N})_{3\sigma }$, and the 99% threshold $({\rm S/N})_{\rm 99\%}$ set by the most extreme peak among the highest peaks in the entire set of 100 noise maps. "Detected Peak" is the number of peaks detected above each threshold.


We compare the cumulative peak counts with the expectation from the standard normal distribution (solid line in Figure 13). The figure shows that the highest peak statistics of the noise maps are not well described by this curve; in other words, the highest peak statistics do not follow those of an ideal random Gaussian field. Thus, we must derive these statistical quantities from Monte Carlo simulations.

This discrepancy between the noise maps and the ideal case is more significant for smaller smoothing lengths. Based on these tests, we select different thresholds for each smoothing length. We consider two kinds of threshold: the 3σ threshold and the 99% threshold. The 3σ threshold is derived by assuming the standard normal distribution for the highest peaks; the 99% threshold is derived from "the most extreme peaks" among the highest peaks in the entire set of 100 noise maps. Because it is based on the real data, the 99% threshold is much more reliable than the 3σ threshold, and we use it throughout.

We summarize the 3σ and 99% thresholds in Table 4. As expected, these thresholds differ for smaller smoothing lengths, but they are similar for θG = 1.5.

The numbers of detected peaks for each threshold are listed as $N_{3\sigma }, N_{99\%}$ in Table 4.

Next we consider the peaks in the E-mode map. With the smallest smoothing length, no peaks exceed the 99% threshold; the shape noise of intrinsic ellipticities raises the threshold. The situation is different for θG = 1.0 and 1.5 because the larger smoothing involves more sources. The lensing signal becomes more significant for larger smoothing lengths.

As a quantitative representation of the significance of the weak-lensing signal, we evaluate $\tilde{\chi }^{2}_{E}/\tilde{\chi }^{2}_{B}$. For the B-mode, $\tilde{\chi }^{2}$ should be small because there should be no signal; for the E-mode, $\tilde{\chi }^{2}$ should be larger, reflecting the lensing signal.

The actual values of $\tilde{\chi }^{2}_{E}/\tilde{\chi }^{2}_{B}$ for θG = 1.0 and 1.5 are 11.6 and 10.0 for the GTO field. These similar values imply that there is no strong preference for smoothing lengths between 1.0 and 1.5 in our data.

7. THE DLS KAPPA MAP

The procedures of Section 5.1 generally apply to both the GTO and DLS field maps. Here we discuss the few issues that are particular to the DLS map.

For the DLS map, the $\tilde{\chi }^{2}$ for the original procedure is 134; for the new one it is 49.1 (Figure 14). The new value, although reduced from the original one, is not within the range for the random realizations (32 is the maximum value). The S/N distribution for the B-mode has a slight excess on the negative side.

Figure 14. Same as Figure 10 but for the DLS field.

We use i'-band imaging in the DLS field rather than the RC band. We cannot determine the PSF anisotropy near the edge of the field because the star/galaxy separation is poor. Thus, we trim objects located at the outer edge (>15' from the field center). In contrast, we do not trim the edges of the GTO field.

Even after this trimming, there is still a small excess in the S/N distribution for the B-mode on the negative side. However, the positive side is well matched to the standard normal distribution. If we mask the three prominent negative excesses, the B-mode S/N distribution is in good agreement with the standard normal distribution. Thus, the global correction may be sufficient, and the residual local excess may be related to the most prominent peaks in the E-mode map. Figure 15 shows the resulting κ-S/N maps for the DLS field.

Figure 15. Same as Figure 9 but for the DLS field.

8. DISTRIBUTION OF PEAKS IN NOISE AND B-MODE MAPS

It is useful to have an analytical expression for the distribution of the highest peaks in order to test both the noise and B-mode maps. These expressions can then be used to estimate the fraction of false peaks in the κ-S/N maps.

We derive a probability density function that enables us to evaluate the probability of finding the highest peak in the noise map as a spurious peak in the κ-S/N map. We evaluate this probability as a function of the S/N. The probability density function that a random value x obeying the normal distribution becomes the maximum value among n samples is

Equation (8)

where erfc(x) is the complementary error function: ${\rm erfc}(x) = 2/\sqrt{\pi } \int ^{\infty }_{x}e^{-t^{2}}dt$. The first part of the product gives the probability of obtaining a random value of x, and the latter part describes the probability that the remaining n − 1 random values are <x.

However, our map creation procedure is more complex because it applies Gaussian smoothing, which correlates pixel values spatially. We have to check the effect of the spatial correlation prior to evaluating the probability. We apply the smoothing function on the κ map in the form exp (− (θ/θG)2), where $\theta _{\rm G}/\sqrt{2}$ is the standard deviation of the smoothing kernel. Using the standard deviation as a measure of the correlation length, we reduce the total number of pixels in the mass map by a factor of $(\theta _{\rm G}/\sqrt{2})^{2}$. For example, in the GTO field where the total number of pixels is 300 × 234 with a 0.33' pixel−1 scale, the effective number of pixels as a result of the smoothing becomes $300\times 234 \times (0.33 \times \sqrt{2}/\theta _{\rm G})^{2}$. After adopting θG = 1.5', the effective number of pixels is 6795.
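
A sketch of the highest-peak probability density implied by Equation (8), together with the effective pixel count quoted above; the explicit form n φ(x) Φ(x)^(n−1) is the standard order-statistics expression and is assumed to match Equation (8).

```python
import numpy as np
from scipy.special import erfc

def highest_peak_pdf(x, n_eff):
    """PDF of the maximum of n_eff independent standard-normal values (cf. Eq. (8))."""
    phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)    # standard normal pdf
    cdf = 1.0 - 0.5 * erfc(x / np.sqrt(2.0))            # P(a single value < x)
    return n_eff * phi * cdf ** (n_eff - 1)

# Effective pixel number for the GTO field, following the prescription in the text
n_pix = 300 * 234
theta_g, pix = 1.5, 0.33                                # arcmin
n_eff = n_pix * (pix * np.sqrt(2.0) / theta_g) ** 2     # ~6795
x = np.linspace(2.0, 7.0, 501)
p = highest_peak_pdf(x, n_eff)
print(f"n_eff ~ {n_eff:.0f}; pdf integrates to {np.sum(p) * (x[1] - x[0]):.3f}")
```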

Figure 16 shows both the analytic curve and the Monte Carlo results based on 10,000 random realizations of the noise map. The analytic expression is an excellent representation of the Monte Carlo results.

Figure 16. S/N value distribution for maximum peaks among 10,000 noise map realizations for a field covering 2.12 deg2 (histogram) and an analytical model (curve). The effective pixel number is the number of actual pixels divided by the smoothing length in pixel units.

In summary, Figure 16 shows that the smoothing procedure only reduces the number of independent pixels. Using this effective number in the probability function, we can evaluate the confidence level corresponding to a given S/N threshold. For example, a threshold of S/N = 4.66 corresponds to a 99.7% confidence level for the 2.12 deg2 B-mode-free mass map based on 1.5' smoothing. Similarly, S/N = 4.50 and 5.80 correspond to the same 99.7% confidence level for 1 and 1000 deg2 surveys, respectively.

Using this same formalism, we can derive an expression for the fraction of spurious peaks in the κ-S/N map above a given threshold. The probability of obtaining the (m + 1)th value in the descending set of ordered random values as a function of S/N is

Equation (9)

where nCm = n!/[(n − m)! m!]. With this general probability function, we can evaluate the expectation value for the number of peaks found in the noise field above a given S/N threshold:

Equation (10)

According to this formula, we should find on average 2.7 and 25.3 spurious peaks exceeding (S/N)th = 4.5 and 4.0, respectively, for a 1000 deg2 survey with 1.5' Gaussian smoothing.
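
A sketch of the simplest reading of Equation (10): for n_eff effectively independent pixels, the expected number of noise pixels above a threshold is n_eff times the one-sided Gaussian tail probability. The choice of n_eff for a given survey area follows the prescription of this section and is left as an input here.

```python
import numpy as np
from scipy.special import erfc

def expected_false_peaks(n_eff, sn_threshold):
    """Expected number of noise pixels above sn_threshold among n_eff
    effectively independent standard-normal pixels (cf. Equation (10))."""
    tail = 0.5 * erfc(sn_threshold / np.sqrt(2.0))   # P(a single pixel > threshold)
    return n_eff * tail

# Usage, with n_eff chosen for the survey area following Section 8:
# n_false = expected_false_peaks(n_eff, 4.5)
```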

In conclusion, the analytical expression we derive is useful for future error analysis of large surveys. If we place an S/N = 4.5 threshold on the B-mode-free κ-S/N map, we expect these highly significant peaks to be uncontaminated at the 99.7% confidence level for a 1 deg2 field; only 2.7 spurious peaks should be found on average for a 1000 deg2 survey. Figure 17 shows the S/N threshold as a function of survey area.

Figure 17. S/N threshold dependence on survey area.

9. THE NATURE OF THE WEAK-LENSING PEAKS

To test the reliability of the peak catalogs for the GTO (Figure 9) and DLS fields (Figure 15), we next search for corresponding systems of galaxies. Weak lensing only traces the surface mass density. Thus, our peaks may be single isolated clusters or superpositions of less massive groups along the line of sight. Here we test the peaks against dense foreground redshift surveys following Geller et al. (2005) and Kurtz et al. (2012).

9.1. Redshift Surveys

Dense redshift samples are available for both the GTO field (Kurtz et al. 2012) and the DLS field (Geller et al. 2005, 2010; Hwang et al. 2012). The redshift surveys were carried out with the Hectospec (Fabricant et al. 1998, 2005), a 300-fiber robotic instrument mounted on the MMT. The spectra cover the wavelength range 3500–10000 Å with a resolution of ∼6 Å. The typical error in an individual redshift (normalized by (1 + z)) is 27 km s−1 for emission-line objects and 37 km s−1 for absorption-line objects. The surveys include redshifts for galaxies with R ≲ 20.6 in the 4 deg2 DLS field and redshifts for galaxies with rpetro < 21.3, r − i > 0.4, and g − r > 1.0 in the GTO 2 deg2 field. With these samples, we can identify massive clusters of galaxies with redshift z ≲ 0.6. The surface number densities of the spectroscopic samples are 0.6 and 1.1 arcmin−2 for the GTO and the DLS fields, respectively.

To identify potential galaxy systems associated with the detected weak-lensing peaks, we examine the redshift distributions in cones of 3' and 6' radius centered on the location of each of the weak-lensing peaks.

We examine peaks in the redshift histograms. We define the significance of each peak as the difference between the number of galaxies N and mean number of galaxies 〈N〉 expected in the cone at that redshift divided by the standard deviation σ: (N − 〈N〉)/σ. For neighbors satisfying |Δz| < 0.004(1 + z), we calculate a velocity dispersion from the square root of the unbiased variance. We use the bootstrap to measure the error in the velocity dispersion.
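
A sketch of the velocity dispersion and bootstrap error calculation described above; the rest-frame conversion c Δz/(1 + z̄) is a standard convention and is assumed here.

```python
import numpy as np

C_KMS = 299792.458   # speed of light (km/s)

def velocity_dispersion(z_members):
    """Rest-frame line-of-sight velocity dispersion from member redshifts."""
    z = np.asarray(z_members)
    z_mean = z.mean()
    v = C_KMS * (z - z_mean) / (1.0 + z_mean)   # rest-frame velocities
    return np.std(v, ddof=1)                    # square root of the unbiased variance

def bootstrap_error(z_members, n_boot=1000, seed=0):
    """Bootstrap uncertainty on the velocity dispersion."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z_members)
    sigmas = [velocity_dispersion(rng.choice(z, size=z.size, replace=True))
              for _ in range(n_boot)]
    return float(np.std(sigmas))
```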

Tables 5 and 6 list the weak-lensing peaks for the GTO and DLS fields, respectively. The tables list the lensing peak rank and significance, the celestial coordinates of the peaks, the rest-frame velocity dispersion and its error, the number of peak members and its significance, and the mean redshift. The tables also include additional information from the literature.

Table 5. Peak List for the GTO Field

Rank ν R.A.2000 Decl.2000 σp,6 (km s−1) N6 z6 σp,3 (km s−1) N3 z3 NED X-Ray Distance (arcmin)
0 5.95 16:02:50.826 +43:35:51.34 992.0 ± 94.5 40(8.6σ) 0.415(#1) 1014.1 ± 113.2 24(11.8σ) 0.416(#1) 0.420a, XraySb 9.10 ± 1.32c 1.156
1 5.07 16:07:58.418 +43:38:23.99 d     d     0.2830e    
2 4.94 16:03:01.380 +42:46:32.00 655.8 ± 94.8 29(12.0σ) 0.540(#0) 728.3 ± 127.7 18(13.9σ) 0.540(#0) 0.556a, 0.5391e, XraySb 20.0 ± 1.5c 1.931
3 4.76 16:05:24.866 +42:45:34.17 497.9 ± 136.5 10(16.1σ) 0.731 562.8 ± 151.1 8(26.2σ) 0.731      
4 4.38 16:05:39.982 +43:22:13.70                  
5 4.29 16:04:37.615 +43:27:34.65             XraySf    
6 4.19 16:04:41.282 +42:37:54.73                  
7 4.17 16:07:44.303 +42:37:45.58                  
8 4.12 16:04:10.464 +42:39:14.47 411.9 ± 74.9 9(18.1σ) 0.673(#3) 508.6 ± 102.4 6(28.1σ) 0.673(#3)      
9 3.85 16:05:15.727 +42:39:14.39                  
10 3.70 16:01:46.456 +42:56:06.40 421.9 ± 59.1 15(4.5σ) 0.256 366.1 ± 59.7 10(7.2σ) 0.287      

Notes. The column "Rank" is sorted by the lensing significance ν. The columns σp, N, and z give the measured velocity dispersion, the number of galaxies in the cone (with the significance of the redshift peak in parentheses), and the mean redshift for the 6' and 3' cones, respectively. NED: objects classified as "Galaxy Clusters" or "X-ray sources (XrayS)" in NED within 3'. The columns "X-Ray" and "Distance" give the X-ray flux from the literature and the angular offset between the weak-lensing peak and the X-ray source. aWHO (Wen et al. 2009, 2010). b1WGA: ftp://cdsarc.u-strasbg.fr/cats/IX/30/ReadMe. cThe X-ray flux is converted from the count rate in the 0.24–2.0 keV band with a constant conversion factor of 1.5 × 10−11 erg cm−2 s−1, in units of 10−14 erg cm−2 s−1 (White et al. 2000). dThis peak is not well sampled at this location in the redshift survey. eGHO (Gunn et al. 1986). fKLG2009 (Kocevski et al. 2009).


Table 6. Peak List for the DLS Field

Rank ν R.A.2000 Decl.2000 σp,6 (km s−1) N6 z6 σp,3 (km s−1) N3 z3 NED X-Ray Distance (arcmin)
0 6.18 09:16:52.817 +30:00:28.44 497 ± 50 47(7.8σ) 0.318 468 ± 61 23(9.3σ) 0.319      
        486 ± 75 25(7.0σ) 0.535 446 ± 79 18(13.7σ) 0.535      
1 4.98 09:15:54.449 +29:37:08.25 636 ± 133a 26(7.2σ) 0.531 386 ± 64a 12(9.2σ) 0.531 0.53b,c 4.99 ± 1.32(2.97)d 0.797
2 4.65 09:16:04.967 +30:28:08.30 524 ± 145 5(30.2σ) 0.647 524 ± 147 5(54.1σ) 0.647   9.52 ± 2.24(4.60) 0.786
                      1.75 ± 1.21(0.65)d 1.677
3 4.46 09:15:54.292 +30:03:08.24
4 4.24 09:17:12.960 +30:18:28.09 811 ± 124 19(5.3σ) 0.535 859 ± 162 12(9.1σ) 0.535   29.1 ± 10.4(5.88)d 0.961
5 3.86 09:16:20.559 +29:15:48.60 1026 ± 188 13(11.2σ) 0.365 486 ± 187 8(17.7σ) 0.367      
        818 ± 99 25(7.0σ) 0.536 968 ± 169 12(9.1σ) 0.536      

Notes. Same as Table 5, but for the DLS field. NED: objects classified as "Galaxy Clusters" or "X-ray sources (XrayS)" in NED within 3'. aThis peak coincides with a bright foreground star, which prevents us from measuring an accurate velocity dispersion. bAWM2009 (Abate et al. 2009). cKKD2009 (Kubo et al. 2009). dX-ray flux in the 0.2–12 keV (0.2–2 keV) band for extended sources (extent > 0 arcsec), in units of 10−14 erg cm−2 s−1 (XMM-Newton Survey Science Centre 2010).


9.2. The GTO Field

In Table 5 we list peaks with S/N ⩾ 3.7, as in Miyazaki et al. (2007). However, we concentrate on peaks with S/N ⩾ 4.56 because our analysis of the noise maps shows that this higher threshold yields a more robust catalog.

The most significant peak, 0, has an obvious counterpart in the redshift histogram at a mean redshift z = 0.419 (Figure 18(a)). Figure 18(b) shows the spatial distribution of cluster members within Δz = 0.004 of the mean and within 3' of the weak-lensing peak. It is clear that these galaxies are concentrated around the lensing peak. There is also extended X-ray emission associated with this system; the SDSS red sequence survey (Wen et al. 2009) also identifies it.

Figure 18. Peak GTO00. (a) Redshift distribution for Peak GTO00. The filled histogram shows objects within a cone of 3' radius centered on the weak-lensing peak; the open histogram shows objects within a 6' cone. Bins in redshift are 0.002(1 + z) wide. (b) Close-up view for Peak GTO00. The image is 6' × 6'. Contours show the weak-lensing peak. Small yellow circles identify galaxies in the redshift peak centered at a mean z = 0.416.

The lensing peak is slightly offset from the galaxy distribution. Assuming that the brightest cluster member is the BCG, the offset is ∼1', comparable with the smoothing length.

Figure 19, corresponding to peak 1, shows a concentration of bright galaxies, but the redshift histogram does not show an obvious peak. This discrepancy may result from undersampling at this location in the redshift survey (see peak 7 in Figure 2 of Kurtz et al. (2012)). Gunn et al. (1986) reported that this concentration of galaxies is a cluster with z = 0.2830. The redshift histogram does show some galaxies around this redshift.

Figure 19. Peak GTO01. (a) Same as Figure 18(a) but for Peak GTO01. (b) Close-up view for Peak GTO01.

Peak 2 has a significance of 4.92 (Figure 20). This peak has a clear counterpart in the redshift histogram at a mean z = 0.540, and X-ray observations show extended emission. Figure 20(b) shows the distribution of the suspected cluster members on the sky. The lensing peak is shifted by several arcminutes from the peak of the galaxy distribution. The reason for the shift is not clear, but because the peak is near the edge of the imaging field, we suspect distortion in the κ-S/N map. This peak may also be affected by a neighboring bright star.

Figure 20. Peak GTO02. (a) Same as Figure 18(a) but for Peak GTO02. (b) Same as Figure 18(b). The mean redshift of the peak in (a) corresponding to the weak-lensing system is z = 0.540.

Figure 21. Peak GTO03. (a) Same as Figure 18(a) but for Peak GTO03. (b) Same as Figure 18(b). The significant peak in the redshift histogram (a) corresponding to the lensing peak is z = 0.731.

Peak 3 has a significance of 4.76 and, remarkably, corresponds to a peak in the redshift survey at z = 0.73 (Figure 21). Although the spectroscopic sample is small, there is an excess in the galaxy count within 3' of the peak.

Above the S/N = 4.56 threshold, all of the peaks (with the possible exception of peak 1) correspond to systems of galaxies. We suspect that the failure to detect peak 1 in the redshift survey results from undersampling. The offsets in position between the lensing peaks and the lenses are consistent with the smoothing length, except in one case where edge effects may be important.

Below the S/N = 4.56 threshold, many fewer peaks correspond to systems of galaxies, as we expect based on our analysis of the noise maps. Table 5 lists possible systems associated with S/N ⩾ 3.7 peaks for comparison with previous work.

9.3. The DLS Field

In the DLS field (Figure 15), the most significant peak (0) has S/N = 6.18 (Figure 22). The redshift histogram (Figure 22(a)) shows two distinct concentrations centered at z = 0.320 and z = 0.537. This peak is apparently a superposition of clusters along the line of sight. So far, this peak is not covered by X-ray observations.

Figure 22. Peak DLS00. (a) Same as Figure 18(a) but for Peak DLS00. (b) Same as Figure 18(b). This weak-lensing peak corresponds to a superposition of structures in the redshift survey at redshifts z = 0.319 (yellow circles) and z = 0.535 (sky-blue circles).

Peak 1 (Figure 23) has S/N = 4.98, but it unfortunately coincides with bright stars, which mask the center of the weak-lensing peak. The redshift histogram shows a peak at z = 0.531, but the velocity dispersion for the 3' cone, σ ∼ 386 km s−1, is too low to account for the lensing peak; the 636 km s−1 dispersion for the 6' cone is sufficient to account for the detection. The coincident stars compromise accurate measurement of the velocity dispersion.

Figure 23. Peak DLS01. (a) Same as Figure 18(a) but for Peak DLS01. (b) Same as Figure 18(b). The mean redshift of the peak in (a) corresponding to the weak-lensing system is z = 0.531.

Two previous studies (lensing and X-ray; Abate et al. 2009; Kubo et al. 2009) report a cluster at z = 0.53. X-ray observations show extended emission, suggesting that the velocity dispersion may be underestimated. The redshift distribution also shows a peak at z = 0.183; thus, the lensing signal of the system at z = 0.53 may be boosted by foreground large-scale structure.

The third most significant peak, 2 (Figure 24), has S/N = 4.56 and coincides with five galaxies around z = 0.65 in the spectroscopic data (Figure 24(b)), even though such high-redshift galaxies are rare in the redshift survey. There is also extended X-ray emission associated with this peak.

Figure 24. Peak DLS02. (a) Same as Figure 18(a) but for Peak DLS02. (b) Same as Figure 18(b). The mean redshift of the peak in (a) corresponding to the weak-lensing system is z = 0.647.

In the DLS field, the results are less clear than in the GTO field. One of the peaks (2) is a cluster, and one (0) is a clear superposition of clusters. The interpretation of peak 1 is unclear, but we believe that the peak is not noise. In Table 6 we also list lower significance peaks and possibly associated systems of galaxies for comparison with previous work.

10. COMPARISON WITH EARLIER RESULTS

The new maps of the GTO field and our 1 deg2 portion of the DLS field contain seven weak-lensing peaks above our threshold; these peaks are probably all associated either with a single system of galaxies or with a superposition along the line of sight. In other words, the analysis suggests that the lensing signal above our threshold is a reasonably robust measure of features of the large-scale structure of the universe.

Next, we compare the results for the lensing peaks with previous work. The issues relevant for the GTO and DLS fields are slightly different. In the GTO case, we analyze the same data as Kurtz et al. (2012), but we apply new procedures to reduce systematic error. In the DLS subfield, we apply essentially the same reduction procedure as for the GTO field. Our Subaru DLS data are deeper and were taken in better conditions than the original data analyzed by Wittman et al. (2006).

For the GTO field, Kurtz et al. (2012) find two peaks with S/N ⩾ 4.5; we recover these peaks at S/N ⩾ 4.56 along with two additional ones. This difference occurs because the suppression of systematic errors changes the local peak values in the κ-S/N map; there is no change in the average sensitivity of the map.

The difference between the peak-height distributions of the B-mode map of Kurtz et al. (2012) and of our map demonstrates again that the systematic error suppression is effective. Kurtz et al. (2012) found four peaks with S/N ⩾ 3.7 in their B-mode map; we find only two peaks at this amplitude in our new B-mode map (S/N = 3.90 and 3.87). The highest B-mode peak now has S/N = 3.90 rather than 4.23.

Our data cover about 25% of the original DLS field F2. Wittman et al. (2006) identified one peak and Kubo et al. (2009) detected three peaks in this region of F2. All of these peaks have S/N <3.9, substantially below our revised threshold.

The original DLS F2 data were taken with the Kitt Peak 4 m telescope in the R band with a PSF < 0.9 arcsec; the final co-added images reach a depth of R ∼ 26, and the source galaxy density is ∼20 arcmin−2. The Subaru source density is ∼30 arcmin−2 to roughly the same magnitude limit. The lower source galaxy density of the original data may result from the poorer seeing at Kitt Peak. In that case, the Subaru map is more sensitive mainly as a result of the increased source density. We find three peaks above the threshold S/N = 4.56; one of these is a cluster at z = 0.64.

Obviously, at S/N below our revised threshold, there are peaks that correspond to systems of galaxies (Tables 5 and 6). As we would expect, the fraction of "real" peaks declines with decreasing S/N.

11. CONCLUSION

Weak lensing is a powerful tool of modern cosmology. Its application to the detection of clusters of galaxies is limited to some extent by subtle systematic errors in the construction of the κ-S/N maps. Careful analysis of the B-mode maps where there should be no signal provides a route to reduction of these systematic errors. We use a set of noise maps along with the B-mode maps for two fields observed with the Subaru telescope to test procedures for the reduction of systematics.

Our tests of the B-mode maps against noise maps demonstrate that the following image processing procedures significantly reduce systematic error in the final κ-S/N map (Table 3):

  • 1.  
    Registration of positions onto the absolute celestial coordinate grid ($\tilde{\chi }^2=257.7\rightarrow 90.7$);
  • 2.  
    Application of a smoothing cutoff at 10' to remove excess large-scale power ($\tilde{\chi }^2=90.7\rightarrow 54.4$).

By comparing the S/N distributions for the noise and B-mode maps with the merit function ($\tilde{\chi }^2$), we demonstrate that these revised procedures significantly reduce the systematics that otherwise appear in the B-mode map. We then take advantage of these revised procedures to show that weak-lensing cluster detection is insensitive to the smoothing length for the κ-S/N map in the range 1.0–1.5 arcmin. Also on the basis of this analysis, we suggest using the 99% threshold derived from the "most extreme peaks" among the highest peaks of the noise maps rather than the more standard 3σ threshold. We provide an analytic expression for the distribution of these highest peaks that can be used to estimate the fraction of false peaks in the κ-S/N maps as a function of the detection threshold.

We test our systematic error reduction procedure and our high-peak analysis by analyzing the GTO field and a portion of the DLS F2 field; together these fields cover 2.52 deg2. A threshold of S/N = 4.56 corresponds to the 99.5% confidence level from the highest-peak analysis for our 1.5 arcmin smoothing length; we expect peaks above this threshold to be uncontaminated by false peaks over this total area. The data show seven peaks above the threshold, all of which correspond to galaxy overdensities. Six of these peaks correspond to clusters of galaxies; one is a superposition of systems. Five of the peaks are well sampled by dense foreground redshift surveys, providing reliable estimates of the rest-frame line-of-sight velocity dispersions of the clusters. One peak in the GTO field is in a region sparsely sampled by the redshift survey, but a concentration of galaxies is evident in the imaging data. For one peak in the DLS field, foreground stars probably lead to an underestimate of the velocity dispersion from the spectroscopic data, but the peak is associated with extended X-ray emission. Taken together, these results substantiate the efficacy of our revised procedures.

The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg2, where we expect ∼2000 peaks based on our Subaru fields.

We thank the anonymous referee for careful reading of the manuscript and insightful suggestions.

Masafumi Yagi provided a technique for analytical treatment of the maximum value distribution. Also we acknowledge useful discussions with Satoshi Kawanomoto, Hisanori Furusawa, and Yutaka Komiyama about Suprime-Cam. We thank Tadayuki Kodama, Makoto Hattori, Masanori Iye, Nobunari Kashikawa, and Kazuhiro Shimasaku for fruitful discussions. We thank the MMT remote observers Perry Berlind and Mike Calkins for operating the Hectospec, and we thank Susan Tokarz for reducing the Hectospec data.

Y.U. acknowledges financial support from the Japan Society for the Promotion of Science (JSPS) through JSPS Research Fellowships for Young Scientists and from the Department of Astronomical Sciences of the Graduate University for Advanced Studies (SOKENDAI) through Research incentive.

The Smithsonian Institution supports the research of M.G., D.G.F., and M.J.K.

This work was also supported in part by the FIRST program "Subaru Measurements of Images and Redshifts (SuMIRe)," World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and Grant-in-Aid for Scientific Research from the JSPS (23740161).

The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this sacred mountain.

Facilities: Subaru (Suprime-Cam), MMT (Hectospec)

APPENDIX A: SYSTEMATIC ERROR IN MASS MAPS BASED ON A SIGMA-CLIPPED IMAGE

To remove cosmic rays, investigators often use either a median stacked or a sigma-clipped mean stacked image. As discussed in Section 2, median stacking without PSF adjustment fails to perform accurate photometry.

Here we test mass maps based on sigma-clipped mean images to show that sigma clipping does not result in any gain in sensitivity given our cosmic-ray coincidence rate of 7%. We construct two sigma-clipped images with rejection thresholds of 3σ and 5σ, again carry out shape measurements in each image for the objects in our final analysis, and then apply the same mass-map-making procedure to these lensing catalogs.
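
For reference, a minimal sketch of sigma-clipped mean stacking of this kind is given below; it assumes the exposures are already registered onto a common pixel grid and is not the stacking code used for the reductions in this paper.

import numpy as np


def sigma_clipped_mean_stack(exposures, threshold=3.0, n_iter=3):
    # Stack a (n_exp, ny, nx) cube, iteratively rejecting pixels that deviate
    # from the per-pixel mean by more than `threshold` standard deviations.
    cube = np.ma.masked_invalid(np.asarray(exposures, dtype=float))
    for _ in range(n_iter):
        mean = cube.mean(axis=0)
        std = cube.std(axis=0)
        cube = np.ma.masked_where(np.abs(cube - mean) > threshold * std, cube)
    return cube.mean(axis=0).filled(np.nan)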

The highest-ranked peaks remain in the resulting mass maps, but systematic error affects the lower-ranked peaks. The merit function for the κ map derived from the 3σ-clipped image is 62.5; for the 5σ map it is 62.8. The corresponding value for the simply averaged stacked image is 54.4. Thus, even a sigma-clipped image introduces systematic error into the weak-lensing analysis.

We suggest that the failure of these cosmic-ray rejection procedures in the weak-lensing analysis results from the unstable shape of the PSF from one exposure to the next. For example, in one of the subfields of the GTO field, GTO_0, the images have seeing in the range 0.55–0.76 arcsec. For this data set, the difference among the PSF profiles exceeds the 5σ threshold.

Finally, we also test another practical method for eliminating cosmic rays, LACOSMIC (van Dokkum 2001); the resulting merit function is 66.0, consistent with the results from the sigma-clipped mean images. Thus, at our 7% coincidence rate, this approach likewise provides no gain in sensitivity.

APPENDIX B: MASKS

Here we present details of the masks described in Section 3. Table 7 gives the masking radii for bright stars based on the external catalog USNO-A2.0 (Monet 1998); a minimal lookup sketch follows the table. Tables 8 and 9 give the coordinates of the additional manually constructed masks.

Table 7. Masking Radius around Bright Stars

Magnitude Range Mask Radius (arcsec)
R < 9.0 210
9.0 < R < 10.0 140
10.0 < R < 12.0 70
12.0 < R < 14.0 35
14.0 < R < 16.0 18
16.0 < R < 18.0 6

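For illustration, the following sketch applies the bright-star masking of Table 7: it looks up the mask radius for a star of magnitude R and flags catalog objects inside the masked circle. The flat-sky separation and the function names are simplifying assumptions, not part of our pipeline.

import numpy as np

# (R_min, R_max, radius in arcsec) from Table 7
MASK_RADII = [(-np.inf, 9.0, 210.0), (9.0, 10.0, 140.0), (10.0, 12.0, 70.0),
              (12.0, 14.0, 35.0), (14.0, 16.0, 18.0), (16.0, 18.0, 6.0)]


def mask_radius_arcsec(r_mag):
    # Mask radius for a USNO-A2.0 star of magnitude R (0 if fainter than 18).
    for r_min, r_max, radius in MASK_RADII:
        if r_min <= r_mag < r_max:
            return radius
    return 0.0


def inside_mask(obj_ra, obj_dec, star_ra, star_dec, star_rmag):
    # True if the object lies within the star's mask (angles in degrees).
    dra = (obj_ra - star_ra) * np.cos(np.radians(star_dec))
    sep_arcsec = np.hypot(dra, obj_dec - star_dec) * 3600.0
    return sep_arcsec < mask_radius_arcsec(star_rmag)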

Table 8. Manually Masked Regions around Bright Stars in the GTO Field

R.A.2000 Decl.2000 Radius (arcsec) Origin
240.11876 42.9889 270 Filter reflection
240.327 42.7408 270 Filter reflection
240.44033 42.842131 220 Filter reflection
240.45127 43.480879 270 Filter reflection
240.48183 43.629343 41.4 Incorrect position
240.77826 42.616311 270 Filter reflection
241.06437 43.414799 99 Dewar window reflection
241.16981 43.42311 36 Incorrect position
241.26726 43.561169 99 Dewar window reflection
241.3551 43.711462 270 Filter reflection


Table 9. Manually Masked Regions around Bright Stars in the DLS Field

R.A.2000 Decl.2000 Radius (arcsec) Origin
138.888195 29.792573 120 Dewar window reflection
139.343445 29.452575 120 Dewar window reflection
139.100387 30.040381 120 Dewar window reflection
139.254212 30.374562 120 Dewar window reflection
139.412442 30.320542 120 Dewar window reflection


APPENDIX C: EFFECT OF VARIATION IN THE SOURCE NUMBER DENSITY ON B-MODE

The number densities of our lensing catalogs vary from subfield to subfield. We evaluate the number counts (dN/dm) for each subfield to see how seeing and/or depth affect the source density (Figures 25 and 26). In the GTO field, where the seeing of all the subfields is quite uniform, we only see variation in the rollover magnitude; the source density therefore correlates with the depth. In the DLS field, however, the seeing varies among the subfields, and the overall amplitude of the dN/dm curve changes because we adopt a size cut that follows from our star/galaxy separation.
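
A minimal sketch of the differential number count used in this comparison is given below; the binning and function name are illustrative assumptions.

import numpy as np


def number_counts(mags, area_arcmin2, bin_width=0.5):
    # Differential number count dN/dm in galaxies per mag per arcmin^2.
    m = np.asarray(mags, dtype=float)
    edges = np.arange(np.floor(m.min()), np.ceil(m.max()) + bin_width, bin_width)
    counts, edges = np.histogram(m, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (bin_width * area_arcmin2)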

Figure 25. Source galaxy number count (N(> m)) for each field.

Figure 26. Same as Figure 25 but for DLS.

To explore the effect of variation in the source galaxy number density on the B-mode, we examine the peak distribution function (PDF) for each subfield (Figure 27). Peaks are identified as pixels with values of ν higher or lower than those of all eight surrounding pixels, as suggested by Jain & Van Waerbeke (2000). We divide the GTO field into nine subfields corresponding to the individual pointings. The solid line with error bars shows the PDF for the entire field divided by 9 to normalize to the subfield area; the error bars are the square roots of the frequencies (Poisson errors) divided by 9. The other points show the B-mode values for each subfield. Although one point, at S/N = 1, shows a large upward deviation of ∼3σ, this deviation is consistent with the statistical error: the probability of finding one such point among our data points (21 bins for nine subsamples) is ∼0.5% (2.8σ). Thus, we confirm that none of the subfields shows any significant deviation from the B-mode PDF of the entire field, even though the galaxy number density changes from subfield to subfield.
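
A minimal sketch of this peak definition is given below; it simply compares each interior pixel of the ν map with its eight neighbors and is not the code used for our analysis.

import numpy as np


def find_extrema(nu_map):
    # Boolean maps of local maxima and minima of a 2D nu map; a pixel is a
    # peak (trough) if its value exceeds (falls below) all eight neighbors.
    nu = np.asarray(nu_map, dtype=float)
    c = nu[1:-1, 1:-1]
    neighbors = [nu[1 + dy:nu.shape[0] - 1 + dy, 1 + dx:nu.shape[1] - 1 + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    is_max = np.all([c > nb for nb in neighbors], axis=0)
    is_min = np.all([c < nb for nb in neighbors], axis=0)
    maxima = np.zeros(nu.shape, dtype=bool)
    minima = np.zeros(nu.shape, dtype=bool)
    maxima[1:-1, 1:-1] = is_max
    minima[1:-1, 1:-1] = is_min
    return maxima, minima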

Figure 27. Peak distribution function (PDF) of peaks defined in Jain & Van Waerbeke (2000). Circles with connecting lines show the PDF for the entire field, and other points show the PDF for each subfield in the semi-log y plot.

APPENDIX D: TESTING BIAS DEPENDENCE

We test the average shear 〈γ〉 in both fields to explore systematics, following Heymans et al. (2012) and Comparat et al. (2013). The results are (γ1, γ2) = (−0.0005, 0.0044) ± (0.0007, 0.0007) for GTO and (−0.012, −0.006) ± (0.0012, 0.0012) for DLS. The standard deviation for each shear component is 0.33. These results are slightly offset from zero.

We next examine the effect of this offset on the weak-lensing reconstruction. We adopt θG = 1.5 arcmin smoothing for making the mass map. The effective galaxy number is then $\pi \theta _G^2 n_{g}\approx 210$, and the corresponding statistical error in the shear measurement is $0.33/\sqrt{210}\approx 0.02$, suggesting that the bias in the shear signal has a negligible effect on the mass reconstruction. To verify that large-scale shear patterns do not contribute to our cluster measurements, we test a mass map for the DLS field with a constant shear correction applied so that the average shear becomes zero. The peak S/N values in the corrected mass map do not change appreciably (Δ(S/N) < 0.1), showing that our cluster significances are largely insensitive to the large-scale shear environment. This insensitivity follows from our choice of truncating the smoothing at 10'.

We also check the dependence of the shear on magnitude, S/N, and galaxy size; Figures 28 and 29 show the results. We fit the results with a constant shear model and with a model in which the shear is a linear function of these parameters. According to the chi-square test, the reduced chi-square values are close to unity for both models. The largest value is that for the e0 component with the constant model for DLS, where a reduced chi-square of 2.0 has a 7% likelihood of occurring randomly; this is not high enough to conclusively disfavor the constant model. The linear model shows only a slight improvement, to a reduced chi-square of 1.82 (equivalent to a 13% likelihood), so it is acceptable only at a comparable level. Because there is no strong reason to adopt a shear correction model as a function of magnitude, S/N, and size, we do not apply any further correction.
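
A minimal sketch of the mean-shear test is given below; the argument names and the default parameter values (intrinsic ellipticity dispersion, smoothing scale, and source density) are illustrative assumptions.

import numpy as np


def mean_shear_test(g1, g2, sigma_e=0.33, theta_g_arcmin=1.5, n_g_arcmin2=30.0):
    # Mean shear components, their standard errors, and the statistical shear
    # error expected within one Gaussian smoothing aperture.
    g1, g2 = np.asarray(g1, dtype=float), np.asarray(g2, dtype=float)
    means = (g1.mean(), g2.mean())
    errors = (g1.std(ddof=1) / np.sqrt(g1.size),
              g2.std(ddof=1) / np.sqrt(g2.size))
    n_aperture = np.pi * theta_g_arcmin**2 * n_g_arcmin2  # galaxies per aperture
    return means, errors, sigma_e / np.sqrt(n_aperture)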

Figure 28. Averaged shear on the GTO field as a function of magnitude, rg, and ν. The solid line shows g1, and the dashed line shows g2. The horizontal dashed lines show the level of statistical error with θG = 1.5 arcmin smoothing.

Figure 29. Same as Figure 28 but for the DLS field.

APPENDIX E: THE ZERO-LAG CORRELATION FUNCTION

In order to assess the level of PSF-related systematics in the shear catalog, we examine the star–galaxy cross-correlation function ξsg = 〈egal e*〉, where egal is the PSF-anisotropy-corrected ellipticity for galaxies and e* is the ellipticity for stars (e.g., Heymans et al. 2012). Here we focus on the zero-lag star–galaxy correlation ξsg(θ = 0), using the model of the PSF ellipticity to determine e* at the location of each galaxy:

Equation (E1)

If the PSF-anisotropy-corrected galaxy ellipticity does not correlate with the PSF, ξsg, ±(0) should be consistent with zero. The derived ξsg, ±(0) for each field are consistent with zero to within 3σ. We conclude that our PSF correction neither over- nor under-corrects.
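
A minimal sketch of a zero-lag star–galaxy correlation of this kind is given below; the ± combination of ellipticity components shown is one conventional choice and an assumption here, not necessarily the exact estimator used in this work.

import numpy as np


def xi_sg_zero_lag(e1_gal, e2_gal, e1_psf, e2_psf):
    # Zero-lag star-galaxy correlation: average products of the corrected
    # galaxy ellipticities and the PSF-model ellipticities at the galaxy
    # positions, returned with simple standard errors.
    e1_gal, e2_gal = np.asarray(e1_gal, float), np.asarray(e2_gal, float)
    e1_psf, e2_psf = np.asarray(e1_psf, float), np.asarray(e2_psf, float)
    plus = e1_gal * e1_psf + e2_gal * e2_psf
    minus = e1_gal * e1_psf - e2_gal * e2_psf
    n = plus.size
    return ((plus.mean(), plus.std(ddof=1) / np.sqrt(n)),
            (minus.mean(), minus.std(ddof=1) / np.sqrt(n)))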

Footnotes

  • Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.

  • Miyazaki et al. (2002b) state that the distance between the window and the filter is 14.5 mm, but this was changed to 9.5 mm in the actual implementation of Suprime-Cam. The translation of the flat glass (filter), however, does not affect the optical performance.
