
THE ATACAMA COSMOLOGY TELESCOPE: DATA CHARACTERIZATION AND MAPMAKING


Published 2012 December 7 © 2013. The American Astronomical Society. All rights reserved.
Citation: Rolando Dünner et al. 2013, ApJ, 762, 10. DOI: 10.1088/0004-637X/762/1/10


ABSTRACT

We present a description of the data reduction and mapmaking pipeline used for the 2008 observing season of the Atacama Cosmology Telescope (ACT). The data presented here at 148 GHz represent 12% of the 90 TB collected by ACT from 2007 to 2010. In 2008 we observed for 136 days, producing a total of 1423 hr of data (11 TB for the 148 GHz band only), with a daily average of 10.5 hr of observation. From these, 1085 hr were devoted to an 850 deg² stripe (11.2 hr by 9.1°) centered on a declination of −52.7°, while 175 hr were devoted to a 280 deg² stripe (4.5 hr by 4.8°) centered at the celestial equator. The remaining 163 hr correspond to calibration runs. We discuss sources of statistical and systematic noise, calibration, telescope pointing, and data selection. For the 148 GHz band, out of 1260 survey hours and 1024 detectors in the array, 816 hr and 593 effective detectors remain after data selection, yielding a 38% survey efficiency. The total sensitivity in 2008, determined from the noise level between 5 Hz and 20 Hz in the time-ordered data stream (TOD), is $32\,\mu \mathrm{K}\sqrt{\mathrm{s}}$ in cosmic microwave background units. Atmospheric brightness fluctuations constitute the main contaminant in the data and dominate the detector noise covariance at low frequencies in the TOD. The maps were made by solving the least-squares problem using the Preconditioned Conjugate Gradient method, incorporating the details of the detector and noise correlations. Simulations, as well as cross-correlations with Wilkinson Microwave Anisotropy Probe sky maps on large angular scales, reveal that our maps are unbiased at multipoles ℓ > 300. This paper accompanies the public release of the 148 GHz southern stripe maps from 2008. The techniques described here will be applied to future maps and data releases.


1. INTRODUCTION

Over the past two decades, precision measurements of the cosmic microwave background (CMB) have led to remarkable advances in our understanding of cosmology. Combined with other observations, they have produced constraints on models of the universe to percent level accuracy (e.g., Komatsu et al. 2011). Primary CMB anisotropy has been measured to cosmic variance precision by Wilkinson Microwave Anisotropy Probe (WMAP) up to multipoles of approximately 500 (Larson et al. 2011). Observations at finer angular scales (Friedman et al. 2009; Reichardt et al. 2009b, 2009a, 2012a; Veneziani et al. 2009; Kessler et al. 2009; Sievers et al. 2009; Sharp et al. 2010; Shirokoff et al. 2011; Das et al. 2011a; Keisler et al. 2011), corresponding to the damping scale of the anisotropies, have led to even tighter parameter constraints, which are improving with each new data set. Additionally, CMB measurements at smaller angular scales are sensitive to the contribution of point sources, which help to reveal the nature of early galaxies or active galactic nuclei (Vieira et al. 2010; Marriage et al. 2011b), to the thermal Sunyaev–Zel'dovich (SZ) effect, which can provide independent constraints on cosmological parameters (Sunyaev & Zel'dovich 1970; Marriage et al. 2011a; Sehgal et al. 2011; Planck Collaboration VIII 2011; Reichardt et al. 2012b), and to gravitational lensing effects on the CMB (Seljak 1996; Zaldarriaga & Seljak 1999; Das et al. 2011b; Sherwin et al. 2011).

The Atacama Cosmology Telescope (ACT) is located at 22°57'35''S, 67°47'13''W on Cerro Toco at an altitude of 5200 m in the Atacama Desert of northern Chile. Its main purposes are to map the millimeter wave sky at arcminute scales, sampling multipoles up to ℓ ≃ 10^4, and to detect and characterize foregrounds, including galaxy clusters through their SZ signature and millimeter galaxies.

Between 2007 and 2011, ACT was equipped with the Millimeter Bolometric Array Camera (MBAC), which observed simultaneously in three bands: 148 GHz, 218 GHz, and 277 GHz. The bands were chosen to avoid major atmospheric emission lines and to sample the SZ decrement, null, and increment. Each band had a dedicated set of optics and a detector array composed of 1024 pop-up TES bolometers (Benford et al. 2003) coupled to the optical signal, called "live detectors," plus 32 "dark detectors," which were not coupled to the sky. For details about the instrument, including effective band centers for different source spectra, see Swetz et al. (2011).

Since first light on 2007 October 23, the telescope has had four observing seasons, producing more than 90 TB of data. Here, we describe the 8.7 TB of data taken at 148 GHz in 2008, including the techniques used for data characterization, as well as the data reduction process used to obtain the final maps. The description of the 218 GHz and 277 GHz data is deferred to a later paper because modeling the noise in those bands is more complicated, owing to the enhanced atmospheric loading together with some specific properties of the system. However, a similar data reduction pipeline is used to analyze the full data set.

The maps obtained from these data have an angular resolution of 1.37' (Hincks et al. 2010) and a noise level that ranges between 25 and 50 μK arcmin. The calibration of the data to WMAP is discussed in Hajian et al. (2011). The power spectrum of the maps is presented in Fowler et al. (2010) and Das et al. (2011a), with corresponding constraints on cosmological parameters in Dunkley et al. (2011). Cluster detections through their SZ signature are presented by Hincks et al. (2010) and Marriage et al. (2011a), while extragalactic source detections are given in Marriage et al. (2011b). Multi-wavelength follow-up of these clusters is presented in Menanteau et al. (2010), as is the discovery of a massive cluster, "El Gordo," at z = 0.87 (Menanteau et al. 2012). Their cosmological interpretation is discussed in Sehgal et al. (2011). The first direct detection of gravitational lensing of the microwave background was made using these maps in Das et al. (2011b). This in turn demonstrated for the first time that microwave background data on their own favor cosmologies with an accelerating expansion (Sherwin et al. 2011).

This paper is organized as follows. A summary of observations is given in Section 2. In Section 3, we provide background for understanding the sky signal as recorded in the time-ordered data stream (TOD). In Sections 4, 5, and 6, respectively, we characterize the atmospheric, detector, and systematic noise found in the TOD. Data calibration into units of sky temperature fluctuations, δT_CMB, is described in Section 7. Section 8 describes the pointing solution, while Section 9 explains the detector time-constant determination method. Data selection is described in Section 10, providing final statistics on the amount of data used for making the maps. The mapmaking method is discussed in Section 11. We conclude in Section 12 and present a step-by-step summary of the data pipeline from raw data to maps. The Appendix provides further details about finding and removing correlated modes from the data.

This paper accompanies the public release of the data through NASA's LAMBDA site.

2. OBSERVATIONS

ACT observes the sky by scanning the telescope in azimuth at a constant elevation of 50.5° as the sky moves across the field of view in time, resulting in a stripe-shaped observation area. With this scan strategy, the instrument observes through a constant air mass, the cryogenics remain stable, the telescope's shape remains constant, and the local environment and instrumental offsets are sampled in a consistent way. The time constants of the detectors, together with mechanical factors, limit the scan speed and turnaround acceleration. Table 1 gives a summary of the scan parameters used in all seasons. The lower acceleration introduced in 2008 was needed to reduce vibrations in the optics (see more details in Section 6).

Table 1. Scan Parameters

Season              2007           2008           2009–2010
Elevation           50.5°          50.5°          50.5°
Scan width          9.6°           7.0°           7.0°
Period              19.4 s         10.2 s         10.2 s
Speed               1.0 deg s^−1   1.5 deg s^−1   1.5 deg s^−1
Max. accel.         8.1 deg s^−2   3.3 deg s^−2   3.3 deg s^−2
Data file length    15 minutes     15 minutes     10 minutes


The observations are repeated at complementary central azimuth angles to capture both rising and setting skies. This cross-linking technique improves the determination of CMB modes because the scan projections on the sky are in more than one direction.

Each detector is sampled at a rate of 398.72 Hz and the data are stored in 15 minute long data files. They are then merged with the rest of the housekeeping data, which include the azimuth and elevation encoder readings and time of day. The data are compressed to one-third of their original size for storage, using a lossless compression algorithm (SLIM). A sample TOD is shown in Figure 1 for a night with good weather conditions, meaning that precipitable water vapor (PWV) remained below 1 mm. The PWV is a good indicator of 148 GHz opacity and overall data quality.


Figure 1. Example TOD from one detector at 148 GHz from 2008 October 21. This was a good observing night with a PWV of 0.22 mm. The slow drift is dominated by changes in the atmosphere brightness. The high-frequency noise is dominated by detector noise. Units are mK in CMB equivalent units at 148 GHz. The plot displays 3.6 × 10^5 samples at an interval of 2.5 × 10^−3 s. The telescope was scanning while these data were taken.


Despite some variation over the season due to changes in sunrise and sunset times, a typical day of observations at the site was as follows. The cryogenic system was recycled every day, after which it provided roughly 14 hr of observing time. The cryogenic cycle began around 11:00 in the morning local time and lasted for 9.5 hr, so that MBAC was cold by about 20:30. At 20:40, warm-up movements for the motors and gears were run, and at 20:50 the detectors were biased and the first detector calibration data were obtained (see Section 7.1). The observations started at 21:00, usually by scanning the rising sky. Around 2:30 more detector calibration data were collected and the scan was shifted to the west to observe the same region that was previously observed while rising. Observing in the west has an additional advantage from a hardware safety perspective: in the event of a telescope failure, MBAC would be pointing away from the Sun at its rising. Observations normally ended around 10:40, nearly 2 hr after sunrise above the mountains. At this point, final detector calibration data were taken, before the telescope was sent to its home position and the cycle was restarted. When the observable region of the sky contained a planet, it was normally scanned every other night for calibration and beam measurements. This was not done every single night to avoid producing a gap in the CMB map. All of the operations listed above were automated and could be performed remotely.

The 2008 season began on August 11 and ended on December 24, with a total of 136 available nights and 124 nights with successful observations, resulting in 1423 hr of observations per frequency band, corresponding to 26 TB of data in total. Of the observed nights, four had bad weather conditions, leaving 120 nights with usable science observations. The overall calendar time efficiency, including daytime, was 44%. The median PWV during observations across the season, measured at the zenith from the Atacama Pathfinder Experiment (APEX) facility (Güsten et al. 2006), was 0.49 mm. Given that the ACT site is about 140 m higher than the APEX site, we use a correction factor PWV_ACT = 0.88 PWV_APEX to estimate the PWV at the ACT site. Figure 2 shows a histogram of the PWV during the season.


Figure 2. Histogram showing the PWV during the 2008 season, measured at the zenith of the Atacama Pathfinder Experiment (APEX) facility. The median value is 0.49 mm.


Observations were made in two areas of the sky: the equatorial stripe, centered at a declination of 0° (azimuths of 60° and 300° at 50.5° elevation), and the southern stripe, centered at a declination of −53° (azimuths of 150° and 210° at 50.5° elevation), both covering a wide range in right ascension. Of the total time observed, 1260 hr correspond to survey observations, and the remaining 163 hr correspond to various calibration measurements. Table 2 lists the boundaries of the observation areas and the number of hours available in each area before data selection.

Table 2. Observation Summary for Season 2008 (at 148 GHz)

Season   Decl. (Min, Max)      R.A. (Min, Max)     Area (deg²)   Hours^a
Southern stripe
2008     (−57.15°, −48.1°)     (20h43m, 7h53m)     850           1085
Equatorial stripe
2008     (−2.12°, 2.34°)       (10h21m, 14h48m)    280           175

Note. ^a Total hours before data selection.


3. SIGNAL PROPERTIES

In this section, we characterize the expected TOD signal from point sources and extended sources, for comparison to the random and systematic noise described later in Sections 5 and 6.

3.1. Point Sources

For our purposes, we will define a point source as any object with an angular size comparable to or smaller than the beam size of the telescope. The beam full width at half-maximum (FWHM), θ_1/2, is 1.37' for 148 GHz. Planets are the most important point sources as their high surface brightness makes them useful for both calibration and beam measurements. Distant galaxies can also be approximated as point sources and are helpful for the pointing solution.

As the telescope scans and the sky rotates, a single detector traces a zigzag on the sky, sampling the point source every time it intersects with the beam. The number of times a point source appears depends on the scan period, the beam size, and the central scan azimuth. A point source appears as a succession of "blips" in the time stream. The shape of these blips is a slice through the telescope point-spread function and depends upon the angular separation between the center of the beam and the location of the source. The angular speed of the scan on the sky is given by $\dot{\theta} = v_{\mathrm{scan}}\cos(50.5^\circ)$. The 2008 scan speed of v_scan = 1.5° s^−1 implies an angular sky speed of 57.2' s^−1. Assuming a Gaussian-like beam of equivalent width and neglecting the transit speed of the source, the 3 dB cutoff frequency is

$f_{3\,\mathrm{dB}} = \frac{2\ln 2}{\pi}\,\frac{\dot{\theta}}{\theta_{1/2}} \approx 18\ \mathrm{Hz}. \qquad (1)$

This means that the contribution from point sources to the TOD is limited to frequencies below 18 Hz, as can be seen in Figure 3.
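
As a rough numerical check of this cutoff, the short script below evaluates the sky scan speed and the Gaussian-beam cutoff of Equation (1) from the Table 1 scan parameters and the 148 GHz beam width. This is a sketch in Python (not part of the ACT pipeline), and the Gaussian approximation is the one stated above.

```python
import numpy as np

# Scan parameters from Table 1 and the 148 GHz beam FWHM from Section 3.1.
v_scan = 1.5            # azimuth scan speed, deg/s (2008)
elevation = 50.5        # deg
fwhm = 1.37 / 60.0      # beam FWHM, deg

# Angular speed of the scan on the sky.
theta_dot = v_scan * np.cos(np.radians(elevation))      # deg/s, ~57.2 arcmin/s

# 3 dB cutoff for a Gaussian beam crossed at theta_dot (Equation (1)).
f_3db = (2.0 * np.log(2.0) / np.pi) * theta_dot / fwhm
print(f"sky scan speed = {theta_dot * 60:.1f} arcmin/s")
print(f"f_3dB          = {f_3db:.1f} Hz")                # ~18 Hz
```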


Figure 3. Power spectra of signals in the 2008 season compared to the data. The solid curve is the average power spectrum from 737 live detector TODs from one 15 minute 148 GHz data file, during which the PWV was 0.5 mm. The rise at low frequencies is the noise contribution from the atmosphere. The slight rise for frequencies above 2 Hz is due to detector excess noise. The sharp rise at the higher end of the spectrum is due to noise aliasing. The dashed curve shows the simulated CMB signal. The oscillations in the simulated CMB signal are due to enhanced power at harmonics of the scan frequency. The dot-dashed curve estimates the point source contribution (considering only one hit at the beam center) assuming a Gaussian beam. For comparison, the thin and thick dotted curves show the expected response for 218 GHz and 277 GHz, with beam sizes of 1.00' and 0.91', respectively. The amplitude of the point source power spectra corresponds approximately to the signal of Saturn.


Given our scan speed, nominal elevation, sky rotation, and sampling rate, we expect nearly 10 samples per beam on the sky at 148 GHz. The time constant of the detectors is fast enough that its effect on the beam shape is small, as will be discussed in Section 9.

3.2. Extended Sources

Using CMB simulations, we can estimate the expected TOD response to extended sky features. Averaging the power spectra from many such synthetic TODs, we can estimate the CMB power spectrum in TOD space. Figure 3 shows the average power spectrum of a simulated 148 GHz observation of the CMB sky and the average data power spectrum from one 15 minute file. Because of Silk damping, the CMB has a relatively sharp cutoff in its characteristic size: unlike the point-source observations shown in the figure, the CMB has little signal at frequencies higher than 10 Hz.

Given ACT's angular scan speed, the TOD frequency associated with a sky feature of angular size θ_f is

$f = \frac{\dot{\theta}}{2\,\theta_f}, \qquad (2)$

where the factor of two in the denominator comes from considering that the angular scale is the size of a positive or negative temperature bump, which is half of a wavelength on the sky. Given that the multipole moment ℓ relates to angular scale as ℓ ≃ π/θ_f for scales small compared to the full sky, an approximate conversion between multipole and TOD frequency is

$f \simeq \frac{\dot{\theta}\,\ell}{2\pi}. \qquad (3)$

Note that this relation is only a rule of thumb; the actual mapping of TOD frequency into multipole space depends on the specific scan strategy. This relation shows that the CMB power spectrum in the TOD can be shifted in frequency by changing the scan rate. As a reference, the TOD frequency corresponding to ℓ = 3000 was 7.9 Hz in the 2008 season.
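
The rule of thumb in Equation (3) is easy to evaluate numerically; the short sketch below (Python, illustrative only) reproduces the 7.9 Hz value quoted for ℓ = 3000 and the 27 Hz value used later in Section 9.

```python
import numpy as np

v_scan, elevation = 1.5, 50.5                                    # deg/s, deg (2008)
theta_dot = np.radians(v_scan * np.cos(np.radians(elevation)))   # rad/s on the sky

def ell_to_freq(ell):
    """Approximate TOD frequency for multipole ell (Equation (3))."""
    return theta_dot * ell / (2.0 * np.pi)

print(f"{ell_to_freq(3000):.1f} Hz")   # ~7.9 Hz
print(f"{ell_to_freq(1e4):.1f} Hz")    # ~27 Hz
```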

4. ATMOSPHERIC EMISSION

The atmosphere is a strong emitter and absorber at the bands of interest, chiefly due to the excitation of the vibrational and rotational modes of water vapor. For this reason, the PWV is strongly correlated to the level of optical loading on the detectors. The Atmospheric Transmission at Microwaves (ATM) model (Pardo et al. 2001) provides an estimate of the loading as a function of the PWV level. At the median value of 0.49 mm during the 2008 observations, the loading is approximately 0.5 pW, which corresponds to an equivalent Rayleigh–Jeans temperature of 6.4 K at the nominal elevation.

During the season, the median atmospheric temperature drift over 15 minute observations was 0.22 K (Rayleigh–Jeans equivalent units, as measured by MBAC), with lower and upper quartiles at 0.10 K and 0.43 K, respectively.

Turbulence induces a spatial structure in the atmospheric signal. According to the Kolmogorov model of turbulence (Tatarskii 1961), the power spectrum of the fluctuations in a large three-dimensional volume is proportional to q−11/3, where q is the wavenumber. The projected signal observed on the sky can have either a q−11/3 dependence, if the wavelength is small enough that the turbulence can be treated as three dimensional, or a q−8/3 dependence, if the wavelength is large compared to the thickness of the atmospheric layer supporting the turbulent motions (Church 1995; Lay & Halverson 2000).

Figure 4 shows examples of the atmospheric signature in TOD power spectra. Figure 4(a) displays the average TOD power spectra from four groups of observations with the telescope not scanning ("stare" observations). When scanning, the knee (where the power law meets the white noise level) increases in frequency by around 1 Hz, and harmonics of the scan frequency leak into the spectrum. Before averaging the power spectra from different observations (data files), the average power spectrum from the dark detectors was subtracted to isolate the atmospheric signal from instrumental 1/f noise. TODs were binned as a function of the PWV level measured by APEX during the period of data acquisition. The atmospheric signal clearly grows with PWV, shifting the knee toward higher frequencies. On the other hand, the power-law index stays rather constant, near the two-dimensional regime value. The average power spectrum from the dark detectors is shown for comparison; it is dominated by the thermal fluctuations of the cryostat. The fact that the dark-detector power spectrum appears higher than the others in the frequency range between the knee and 10 Hz is the result of the power subtraction mentioned above, and indicates that the 1/f plus readout noise dominates over the atmospheric plus detector noise in that frequency range. The knee frequency ranges from 1 to 5 Hz for stare observations, depending on the weather conditions.


Figure 4. Atmospheric signature in TOD power spectrum. (a) Average power spectra for four groups of non-scanning TODs selected by PWV level. The average power spectrum from the dark detectors was subtracted from each group power spectrum and is shown for comparison. The dashed lines show a power-law fit to each spectrum. The legend indicates the mean PWV and the power-law index from the fit for each group. Logarithmic binning was used to reduce the noise in the plot. (b) Power-law index as a function of frequency for the average power spectra from three groups (blue (•), green (▾), and red (▴)) of observations selected by their power-law index at frequencies close to the knee, all belonging to the third PWV selected group in (a). Each value was obtained by fitting a power law to the corresponding power spectrum in a small frequency range (in logarithmic space); the listed frequencies are the mean values of the associated frequency range. The error bars show the dispersion of the indices from the members of each group. The same is shown in brown (■) for the dark detector average power spectrum for comparison.


The departure from a pure power law shown in Figure 4(a) implies that the power-law index varies with frequency. By fitting power laws in small frequency ranges, shown as dashed lines in Figure 4(a), we were able to measure this dependency and group the observations as a function of their power spectrum slopes at frequencies near the knee. We found that, for similar PWV conditions, the slopes can vary significantly, with indices ranging between the two-dimensional and three-dimensional regimes, as shown in Figure 4(b). Low frequencies, which are related to large scales in the sky, are dominated by the two-dimensional regime, with a power-law index of approximately α = −2.4. At frequencies around 1 Hz the power spectrum from some observations (blue dots in Figure 4(b)) follows a steeper power law, suggesting that, under certain weather conditions, the three-dimensional regime dominates at these smaller scales. The slopes tend to increase with wind speed. Higher wind speeds are expected to shift larger features of the turbulent pattern toward higher TOD frequencies. As large scales are dominated by the two-dimensional regime, this would cause the opposite effect of reducing the slopes at higher frequencies, in contradiction with what is observed. This result suggests that higher winds might be associated with intrinsic properties of the turbulent layer, such as its width or height, producing a steeper power law (Church 1995; Lay & Halverson 2000).
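
The frequency-dependent power-law index of Figure 4(b) can be estimated by fitting straight lines to the binned spectra in log-log space over narrow frequency windows. The sketch below illustrates the idea on synthetic data; the window edges and the synthetic spectrum are placeholders, not the values used in the analysis.

```python
import numpy as np

def local_powerlaw_index(freq, power, f_lo, f_hi):
    """Fit P(f) ~ f^alpha over [f_lo, f_hi] in log-log space; return alpha."""
    sel = (freq >= f_lo) & (freq <= f_hi) & (power > 0)
    alpha, _ = np.polyfit(np.log10(freq[sel]), np.log10(power[sel]), 1)
    return alpha

# Synthetic example: a "2D regime" f^(-8/3) spectrum plus a white-noise floor.
freq = np.fft.rfftfreq(360000, d=1.0 / 400.0)[1:]      # ~15 min sampled at ~400 Hz
power = 100.0 * freq ** (-8.0 / 3.0) + 1.0
for window in [(0.05, 0.1), (0.3, 0.6), (1.0, 2.0)]:
    print(window, round(local_powerlaw_index(freq, power, *window), 2))
```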

Atmospheric structures larger than 24' (the field of view of a detector array) appear as a common mode among detectors, while smaller features can produce sub-array "correlated modes," as described in the Appendix. Because the turbulent layer is commonly less than a kilometer away (Robson et al. 2002), it is out of focus, and its features are smeared to scales of nearly 10', in agreement with our optical simulations; this corresponds to roughly a third of the array size. Given the scan speed, such signals appear at frequencies of 2–3 Hz in the TOD. In general, the common mode of the detectors is a good estimator of the atmospheric signal, but the estimate can in principle be improved by dividing the array to account for sub-array atmospheric structure. Our attempts to detect coherent motion of atmospheric features across the array, as would appear in a moving frozen-sheet model of the turbulent layer, were not successful. Instead, we suppress the atmospheric noise by solving for the strongest atmospheric modes in the time stream during the mapmaking process, as discussed in Section 11.
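
The Appendix gives the actual prescription for identifying and removing correlated modes. As a hedged sketch of the underlying operation (illustrative only, not the pipeline or the mapmaking treatment), the strongest correlated modes of a detectors-by-samples array can be found with a singular value decomposition and de-projected from every detector:

```python
import numpy as np

def remove_leading_modes(tod, n_modes=8):
    """De-project the n_modes strongest correlated modes from a
    (n_detectors, n_samples) array; a crude stand-in for the common-mode /
    atmosphere removal described in the Appendix."""
    mean = tod.mean(axis=1, keepdims=True)
    d = tod - mean
    # Right singular vectors are the time-domain modes shared across detectors.
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    modes = vt[:n_modes]
    cleaned = d - (d @ modes.T) @ modes     # remove each mode's projection
    return cleaned + mean, modes
```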

5. SYSTEM SENSITIVITY: UNCORRELATED NOISE

Above the atmospheric knee frequency, the TOD is dominated by broadband random noise. This noise is generated in the detectors and readout circuitry and is essentially uncorrelated among detectors. After de-projecting the correlated modes from the atmosphere and systematics (see Section 6 and the Appendix), this noise can be measured by averaging the TOD power spectral density over the desired range of frequencies.

The total sensitivity of the array, expressed as the noise equivalent temperature (NET), is given by

$\mathrm{NET}_{\mathrm{tot}} = \left(\sum_i \mathrm{NET}_i^{-2}\right)^{-1/2}, \qquad (4)$

where NET_i is the NET of each working detector. Thus, the typical sensitivity per detector can be defined as $\mathrm{NET}_{\mathrm{typ}} = \mathrm{NET}_{\mathrm{tot}}\,\sqrt{N_{\mathrm{det}}}$, where N_det is the number of "effective" detectors, as defined in Section 10.
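
As a small numerical illustration of Equation (4) and of the NET_typ definition (with a uniform placeholder for the per-detector NETs, chosen to roughly match Table 3):

```python
import numpy as np

# Placeholder: 593 effective detectors, each with NET ~ 786 uK sqrt(s).
net_i = np.full(593, 786.0)

net_tot = 1.0 / np.sqrt(np.sum(1.0 / net_i ** 2))   # Equation (4)
net_typ = net_tot * np.sqrt(net_i.size)             # NET_typ definition

print(f"NET_tot = {net_tot:.1f} uK sqrt(s)")        # ~32, cf. Table 3
print(f"NET_typ = {net_typ:.1f} uK sqrt(s)")        # recovers 786 by construction
```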

Table 3 lists the typical values of the total and average NET for a mid-frequency range (5–20 Hz) and a high-frequency range (100–120 Hz), in equivalent temperature units for CMB fluctuations. Uncertainties show the dispersion among 15 minute TODs. The noise at high frequencies is higher than in the mid-frequency range. This increase is driven by intrinsic properties of the bolometers, as described in Marriage (2006), Zhao et al. (2008), and Niemack et al. (2008). However, our signal band is limited to below 30 Hz by the telescope scan speed and the beam size (see Figure 3). Thus, the mid-frequency range gives the best estimate of the instrument sensitivity.

Table 3. Median Value of the Typical Sensitivity per Detector and the Total Sensitivity for the Array at Two Frequency Ranges after Removing the Main 28 Modes of Correlated Noise

Range         NET_typ [μK√s]   NET_tot [μK√s]
5–20 Hz^a     786 ± 28         31.7 ± 2.7
100–120 Hz    1052 ± 21        42.7 ± 3.2

Notes. The units are equivalent temperature for CMB fluctuations. ^a The signal band is ⩽30 Hz, so these entries best estimate the instrument sensitivity.


As given in Table 4, the typical noise variance per detector is 0.62 ± 0.04 mK² s. The total noise can primarily be separated into in-band detector noise, aliased detector and readout noise, and photon noise, all of which add in quadrature.

Table 4. Noise Contribution Summary

Noise Source              NET² (5–20 Hz) (mK² s)
In-band detector noise    0.30 ± 0.10
Aliased noise             0.17 ± 0.08
Optical loading           0.15
Total (typical) noise     0.62 ± 0.04


To reduce the effects of noise aliasing, the detectors are first sampled at 15.15 kHz, then a digital four-pole Butterworth low-pass filter with a cutoff frequency of 122 Hz is applied, and finally the data are resampled at 398.72 Hz. The detector noise bandwidth is limited to 8 kHz by a 700 nH inductor in series with each TES.
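
A minimal off-line sketch of this anti-aliasing chain is shown below, using scipy; the firmware applies the filter in real time, so this is only an illustration of the stated parameters (four-pole Butterworth at 122 Hz, 15.15 kHz input, ~398.72 Hz output).

```python
import numpy as np
from scipy import signal

fs_raw, fs_out, f_cut = 15150.0, 398.72, 122.0     # Hz, values quoted in the text

# Four-pole Butterworth low-pass at 122 Hz (second-order sections for stability).
sos = signal.butter(4, f_cut, btype="low", fs=fs_raw, output="sos")

def filter_and_resample(raw):
    """Low-pass filter a raw 15.15 kHz stream, then decimate to ~398.72 Hz."""
    filtered = signal.sosfilt(sos, raw)
    step = int(round(fs_raw / fs_out))               # 38 raw samples per output sample
    return filtered[::step]
```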

To determine the effects of optical loading and aliasing, we performed a dark test in which we opened the cryostat, put a 4 K (reflective) cover over the detectors, and collected data at a variety of sampling rates. The typical noise level in the dark was 0.46 ± 0.14 mK² s, where the uncertainty shows the dispersion among detectors. The units are CMB-equivalent temperature, as in Table 3. After fitting the noise as a function of sampling frequency, the in-band detector noise yielded 0.30 ± 0.10 mK² s and the total aliased noise contribution was 0.17 ± 0.08 mK² s. The latter includes aliasing from both detector and readout noise.

The readout noise is dominated by SQUID noise and preamplifier noise. Fully sampled, the readout noise is estimated to be around 1.2 × 10^−4 mK² s (based on 50 MHz measurements), which is expected to increase by roughly a factor of 400 when sampled at 15 kHz, reaching ≈0.05 mK² s. The SQUID noise aliasing was significantly reduced in the 2010 season by reducing the readout bandwidth. Taking all this into consideration, we estimate that slightly more than half of the total aliased noise contribution is aliased detector noise, which is consistent with the detector noise aliasing analysis presented in Niemack (2008). The other half is SQUID noise.

The dark tests also revealed that the optical loading contributes 0.15 mK² s of photon noise. By measuring the saturation power of the detectors at different atmospheric conditions throughout the season, and comparing it to the saturation power in the dark, we found that the optical loading is 2.24 pW when the PWV is 0 mm. This loading is dominated by spillover contributions, emission from the optics, and dry atmosphere emission. The spillover contribution was reduced by 0.36 pW in the 2010 season by adding a baffling structure around the secondary mirror. In the same way, we determined that water in the atmosphere contributes another 0.7 pW/mm of loading, so in nominal conditions (PWV = 0.49 mm) the total optical loading is 2.59 pW. This contributes to the noise as photon noise: for a fully incoherent detector coupling to both polarizations, the noise in units of power squared per unit frequency is given by

$\mathrm{NEP}_\gamma^2 = 2h\nu P + \frac{P^2}{\Delta\nu}, \qquad (5)$

where h is Planck's constant, ν = 149.2 ± 3.5 GHz is the central frequency, Δν = 18.4 GHz is the bandwidth, and P = 2.59 pW is the power absorbed by the detector (Zmuidzinas 2003). The first term in the equation corresponds to photon shot noise, while the second term corresponds to noise from photons arriving in bunches, as is expected for thermal radiation when the occupation number is high. This leads to a photon NEP of $3.0\times 10^{-17}\,\mathrm{W}/\sqrt{\mathrm{Hz}}$, or roughly 0.17 mK² s in CMB temperature units, in agreement with our measurements.
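
Plugging the quoted numbers into Equation (5) reproduces the stated photon NEP; the sketch below does only that arithmetic (the conversion to mK² s, which needs the full calibration chain, is not attempted).

```python
import numpy as np

h = 6.626e-34        # Planck's constant, J s
nu = 149.2e9         # band center, Hz
dnu = 18.4e9         # bandwidth, Hz
P = 2.59e-12         # absorbed optical power, W

# Photon noise: shot-noise term plus bunching term (Equation (5)).
nep2 = 2.0 * h * nu * P + P ** 2 / dnu
print(f"NEP = {np.sqrt(nep2):.2e} W/sqrt(Hz)")   # ~3.0e-17 W/sqrt(Hz)
```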

Table 4 shows a summary of the different uncorrelated noise contributions that determine the system sensitivity.

Figure 5 shows the mean noise between 5 and 20 Hz for all data files in the 2008 season. Note that there is only a weak correlation between PWV and detector NET, indicating that optical noise is not the dominant noise source. This is the total noise that goes into the maps and can be reduced only by increasing the observation time. Noise estimates from the maps can be found in Marriage et al. (2011b) and Das et al. (2011a).


Figure 5. Total array NET during the 2008 season in the mid-frequency range (5–20 Hz) for 148 GHz. The values are grouped in days and the error bars are the standard deviations within each day. The PWV is also plotted for reference (dashed line). Values are calibrated to CMB equivalent units. Before computing the sensitivity, eight multi-common modes, four row-correlated modes, four column-correlated modes, and the residual twelve modes with highest singular values were removed (see the Appendix for details). The noise improvement after September 24 came from turning off some oscillating detectors, which were contaminating neighbors.


6. SYSTEMATIC NOISE

In addition to atmospheric and random noise, the celestial signal is contaminated by several systematic noise sources. The important ones are thermal drift of the cryostat, mechanical accelerations which couple both optically and thermally (by causing detector temperature oscillations), electromagnetic pickup, and magnetic pickup. All of these effects cause zero-lag correlations among different detector TODs or can be directly measured using the dark detectors. Detectors are distributed in 32 "columns" and 33 "rows," where the last row contains the dark detectors. Each column shares the same time-multiplexed SQUID readout circuit (de Korte et al. 2003) within which each row is read simultaneously. Thus, there is one dark detector per readout circuit, serving to assess systematic noise from it. Moreover, the high redundancy of live detectors across the array can also be used to assess systematic effects (see the Appendix).

6.1. Thermal Drift

The current signals from the bolometers are amplified in 100-SQUID series arrays (SAs) operated at 3 K (Swetz et al. 2011). Slow temperature changes in the SAs produce slow drifts in the TODs. During the first 10 hr of the night, the SA temperature drifts down by 250 mK, and it rises by 150 mK in the last 2 hr. In terms of equivalent sky temperature at 148 GHz, this corresponds to a drift of nearly 4 K in signal. In frequency space, the drift imprints a 1/f signature on the data, which meets the detector noise level at a knee of nearly 1 Hz. Despite small differences in the responses of different SA amplifiers, most of this signal appears as a common mode to all the detectors. As the coupling occurs at the readout circuit, this signal is also present in the dark detectors, simplifying its identification and eventual removal.

In most cases this signal is subdominant to the atmospheric signal, but when the PWV is low enough the thermal drift becomes comparably significant. This is not evident in Figure 4(a) because TODs with low atmospheric power are averaged down when many TODs are grouped.

In contrast, drifts in the detector temperature cause no measurable effect because it is servo-controlled to better than 1 mK.

6.2. Mechanical Accelerations

Mechanical accelerations, occurring mainly at the scan turnaround, can couple optically and thermally to the detector and result in an undesired instrumental response. The optical coupling can be produced by a mechanical motion of the detector coupling layer, which is a 40 μm thick silicon layer placed 100 μm away from the detectors for optical impedance matching. The coupling efficiency is sensitive to the distance between the coupling layer and the detectors, so small vibrations can cause significant effects, especially at higher frequency bands where the coupling layer is thinner and the coupling efficiency is more sensitive to changes in the distance. The effect is also larger near the center of the detector array, presumably because the coupling layer vibrates in its fundamental mode. In the TOD, vibrations manifest themselves as a series of spikes visible in the common mode at the scan turnarounds, with opposite signs for opposite directions, leading to lines in the power spectra at several harmonics of the scan frequency. In the 148 GHz "waterfall plot" of Figure 6, the vibrations contribute to the scan harmonics and some resonant lines at higher frequencies. The latter are also seen for stare observations, and their central frequencies are shared among all three arrays, suggesting that they correspond to natural frequencies of the whole system. To mitigate the mechanical effect, the turnaround acceleration was reduced from its value in the 2007 season to the value shown in Table 1 after 2008 October 8. The coupling layers for 218 GHz and 277 GHz were removed after the 2008 season.


Figure 6. Frequency-space waterfall plot containing the power spectrum from all live detectors from the 148 GHz band, before and after removing 20 correlated modes. This TOD was obtained under nominal scanning conditions (f_scan = 0.1 Hz), on 2008 November 20. The data are calibrated to CMB temperature equivalent units. Each horizontal line in the two-dimensional plot represents the power spectrum of a single detector. The black horizontal lines separate detectors from different columns. Most of the low frequency lines are scan harmonics. They are explained by a combination of effects, including spatial variations in the atmosphere brightness through the nearly triangular scan pattern, thermal oscillations of the cryostat, and coupling layer oscillations. The thick lines around 5 Hz (see columns 2–5) are also present in stare observations. They are thought to be resonant frequencies of the system. Note that the science band is between about 1 and 20 Hz. After removing 20 correlated modes from every detector TOD, harmonic features are significantly reduced, as is the low-frequency power from the atmosphere and from thermal drifts. When making maps, the correlated modes are identified and properly de-weighted to produce an unbiased map solution.


Thermal perturbations of the cryostat, revealed in spectral analysis of the bath temperature, also add to the low frequency scan harmonics seen in Figure 6. This effect is column dependent because the coupling occurs at the readout circuit level and differs from the coupling layer effect in that the signal is not stronger at the center of the array.

In general, high frequency spectral lines in the TOD from different detectors destructively interfere when projected into map space, mostly canceling out when many observations are combined. On the other hand, low-frequency harmonics of the scan may produce non-negligible bar-like features in the maps perpendicular to the scan direction, if not treated properly.

6.3. Electromagnetic Pickup

Electromagnetic pickup couples to the readout circuit, producing various signatures in the data. These signals appear correlated among subgroups of detector TODs, particularly among detectors from the same column or row. We call these "column" or "row" correlations, respectively, each corresponding to a different source of electromagnetic pickup. Narrowband signals appear correlated among detectors within the same column. They couple in somewhere after the first SQUID stage of the readout circuit, as the subsequent circuitry is shared by the detectors in a column. On the other hand, broadband signals appear correlated among detectors within the same row or even a few rows apart. We believe these are caused by rapidly varying signals, such as spikes from strong current transients: given our time-multiplexed readout scheme, in which the same row is read from all columns simultaneously, these signals become correlated among detectors in the same row.

Figure 7 shows the detector correlation matrix for a given 15 minute TOD file, where every element in the matrix corresponds to the correlation between the TODs of detectors i and j, which index the elements of each axis. For instance, the diagonal elements correspond to i = j, so they are all equal to one. In Figure 7(a), adjacent detectors on each axis belong to the same column in the array, with a black line separating detectors from different columns, while Figure 7(b) is organized by rows. We can clearly see the kinds of correlations described above affecting either columns or rows of detectors. As would be expected from a stochastic distribution of transients, the patterns seen in the row-correlated matrix change in time. To quantify the correlations, a quality factor is defined as the mean of the squared off-diagonal elements of the correlation matrix:

$Q = \frac{1}{N(N-1)}\sum_{i\neq j} c_{i,j}^2, \qquad (6)$

where c_{i,j} is the correlation between the TODs of detectors i and j, and N is the number of detector TODs. Lower Q factors indicate less correlated noise.
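
Equation (6) translates directly into a few lines of code; the random time streams below are only a placeholder standing in for real detector data, so the resulting Q is close to zero.

```python
import numpy as np

def quality_factor(c):
    """Mean of the squared off-diagonal elements of a correlation matrix
    (Equation (6)); lower Q means less correlated noise."""
    n = c.shape[0]
    off_diag = c - np.diag(np.diag(c))
    return np.sum(off_diag ** 2) / (n * (n - 1))

rng = np.random.default_rng(0)
fake_tods = rng.standard_normal((551, 10000))        # placeholder detector data
print(quality_factor(np.corrcoef(fake_tods)))        # ~1e-4 for uncorrelated noise
```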


Figure 7. Correlation matrices for a 15 minute data stream obtained on 2007 December 7. The 551 active detectors are each numbered, starting from the upper left of the correlation matrix, in order of columns (left panels) or rows (right panels). The remaining 473 detectors were cut from this particular data stream due to inadequate performance (see Section 10.3). The solid lines divide the 32 individual columns (left panels) or rows (right panels); the different widths between the solid lines reflect the varying number of active detectors in each column or row (cf. Figure 8). The magnetic pickup and the atmospheric signal have been removed from the time stream by subtracting a best-fit sinusoid for the former and by removing eight multi-common modes for the latter (see the Appendix). No frequency-space filters were applied except for the anti-aliasing filter deconvolution. The top panels show these filters only, with a quality factor Q = 0.081 (Equation (6)). The bottom panels additionally have dark-correlated modes removed, with an obvious large suppression of detector correlation and a quality factor of Q = 0.015.


The column-correlated signals were significantly reduced by carefully shielding the system from electromagnetic noise in the environment. The row-correlated signals, instead, were found to be related to the switching power supplies feeding the readout electronics. They were replaced by linear power supplies at the end of the 2008 season.

If not treated properly, broadband row-correlated signals can produce features in the maps, mostly seen as features perpendicular to the scan direction for that particular observation.

6.4. Magnetic Pickup

The readout circuit of the detectors uses SQUIDs, which are very sensitive to magnetic fields. Despite significant magnetic shielding, some residual pickup remains as the telescope sweeps through Earth's magnetic field. Although the scan is almost triangular, the observed signal is chiefly sinusoidal, which can be explained by eddy current losses and hysteresis in the magnetic shielding. The magnetic signal amplitude and phase are fairly stable across detectors in the same column, but differ significantly between columns. We believe that these phase shifts are related to the complexity of SQUIDs and electrical loops in the system. A typical amplitude of this signal is equivalent to nearly T ≃ 7 mK in CMB units.

As this is a readout-related signal, it can be measured using the dark detectors and removed by fitting and subtracting a sinusoid. Moreover, the scan frequency is significantly below the science band, so the magnetic pickup is heavily down-weighted in the mapping.
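
A minimal sketch of the sinusoid fit and subtraction is given below; the scan frequency, amplitude, and noise level are illustrative numbers, not fitted ACT values.

```python
import numpy as np

def remove_scan_sinusoid(tod, t, f_scan=0.1):
    """Least-squares fit of a sine, cosine, and offset at the scan frequency,
    then subtraction of the fitted sinusoid (the offset is left in place)."""
    basis = np.column_stack([np.sin(2 * np.pi * f_scan * t),
                             np.cos(2 * np.pi * f_scan * t),
                             np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(basis, tod, rcond=None)
    return tod - basis[:, :2] @ coeffs[:2]

# Illustration: ~7 mK pickup at the ~0.1 Hz scan frequency plus white noise.
t = np.arange(0, 900, 1.0 / 398.72)
rng = np.random.default_rng(1)
tod = 7.0 * np.sin(2 * np.pi * 0.1 * t + 0.3) + rng.standard_normal(t.size)
print(np.std(remove_scan_sinusoid(tod, t)))          # ~1, pickup removed
```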

7. CALIBRATION

Our calibration takes into account the detector properties and electronics, camera and telescope optics, and atmospheric conditions. It consists of three steps. The first adjusts for variations in detector responsivity due to changing atmospheric loading and atmospheric opacity. Next, a detector–detector gain term is applied to account for relative variations in optical coupling across the camera. Finally, a single normalization is applied to all the data to account for the overall system efficiency for celestial sources. This procedure is similar to the one presented in Switzer (2008).

7.1. Calibration Variations with Time

The nightly variations in atmospheric loading and daily cryogenic cycling necessitate a rebiasing of the detectors at the beginning of each night's observations, which affects the detector response. Additionally, the opacity of the atmosphere varies from night to night. Changes in loading and opacity throughout the night produce smaller variations in the system response.

In order to account for the variations in system response between nights, a responsivity calibration is derived from the analysis of I − V curves taken during the nightly rebiasing. An I − V curve is the relation between the output current response (I) of the detector's SQUID readout circuitry to a slowly ramping detector bias voltage (V). For the 148 GHz band, the median deviation in responsivity between nights as measured by the I − V curves is 3.0%.

A second test, known as a bias step (Niemack 2008), is performed three times per night to detect changes in detector responsivity through the night. This is achieved by recording the detector response to a series of small, square-wave pulses applied to the detector bias voltage. The responsivities can be obtained by analyzing the detector responses to this signal and are found to be in agreement with responsivities obtained from the I − V curve analysis. For the 148 GHz band, the median deviation in responsivity over a night, as measured by the bias step, is 1.0%. For more information about the ACT detectors, the biasing routine, and responsivity, see Fisher (2009), Battistelli et al. (2008), Swetz et al. (2011), and Zhao et al. (2008). The dependence on atmospheric conditions is discussed below, together with the overall system calibration.

7.2. Time-independent Relative Detector Calibration

To determine the detectors' relative gain coefficients, we use the large common-mode signal provided by the variations in atmospheric brightness. This is analogous to a flat-field calibration done with an optical CCD. For each 15 minute data file, we compute the gain factor that best fits the detector drift to the common mode drift. The dispersion of this factor per detector over the season, averaged over all detectors, is better than 2% rms for the 148 GHz array. This includes the variability of the time-dependent calibration step. We correct for this by multiplying each detector time stream by its corresponding gain factor.
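
A rough sketch of this flat-field fit is shown below: each detector's slow drift is regressed against the array common mode and the best-fit coefficient is taken as its relative gain. The boxcar smoothing used here to isolate the drift is only a stand-in for the pipeline's low-frequency filtering.

```python
import numpy as np

def relative_gains(tods, fs=398.72, smooth_s=20.0):
    """Return per-detector gain factors from a (n_detectors, n_samples) array
    by fitting each detector's drift against the array common-mode drift."""
    n = int(smooth_s * fs)
    kernel = np.ones(n) / n                                   # crude low-pass
    drift = np.array([np.convolve(d, kernel, mode="same") for d in tods])
    common = drift.mean(axis=0)
    # Least-squares coefficient of each drift onto the common mode.
    return drift @ common / (common @ common)

# The detector time streams are then rescaled by these factors (Section 7.2).
```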

7.3. Overall System Calibration

The conversion to sky temperature also depends on the atmospheric transmission. This is estimated with the ATM model (Pardo et al. 2001), which uses PWV measurements from APEX corrected for the ACT site and other Atacama-specific parameters. When PWV measurements are not available, the season-average transmission ($\mathcal{T} = 0.976$) is used. For the 148 GHz band, the rms of the transmission during the 2008 season was 2.3%. Given that large atmospheric loading can excite non-linearities in the detector responsivities, an extra degree of freedom is given to the planet fit as a function of PWV, producing a final correction to the calibration factor. The fit is done by comparing Uranus flux measurements obtained under different atmospheric conditions. The resulting transmission yields

$\mathcal{T} = \exp\left[-\frac{\tau_d + \tau_w\,w + \tau_x\,(w - \bar{w})}{\sin\theta_{\mathrm{alt}}}\right], \qquad (7)$

where θ_alt is the observation altitude, τ_d = 0.0093 is the "dry opacity," τ_w = 0.0190 mm^−1 is the "wet opacity," τ_x = 0.0138 mm^−1 is the fit parameter, w is the PWV measurement from APEX corrected for the ACT site, in millimeters, and $\bar{w} = 0.44$ mm is a pivot PWV used in the fit. Here, both τ_d and τ_w are fixed parameters provided by the ATM model.

The overall map calibration is compared to the WMAP seven-year map, by correlating multipoles in the range 400 < ℓ < 1000, providing an uncertainty of 2% in temperature (Hajian et al. 2011). Putting all pieces together, the overall system calibration for 148 GHz in the 2008 season is

Equation (8)

where C = 19.41 K/pW.

8. POINTING

The pointing solution is decomposed into relative pointing between detectors and absolute boresight pointing, both of which we discuss here.

8.1. Relative Detector Pointing

The relative pointing between detectors was determined by modeling the beam in the time streams of planet observations. This analysis made use of approximately 30 observations of Saturn at the normal CMB observing altitude of 50.5°, in two azimuth ranges corresponding to the rising and setting of the planet.

The telescope scans are slow enough that each detector samples the vicinity of the peak response to a planet several times in a single observation. It is thus possible to model the beam position and shape in two spatial dimensions, project this to a detector signal using the telescope encoder information, and fit the data in the time domain. For simplicity, we use a two-dimensional Gaussian as the beam model. The fit produces azimuth and altitude offsets for each detector relative to the telescope pointing encoders, as well as measures of the peak response, beam FWHM, and optical time constants.

When combining different planet observations, we first aligned them by applying an offset correction to each one. The position of each detector is taken as the mean of the positions from all observations, after rejecting outliers. This produces relative detector positions for each array that are in good agreement with design expectations (Fowler et al. 2007). A comparison between the offsets for rising and setting observations (which differ in azimuth by approximately 95°) shows no significant rotation or shear of the array relative to the local altitude and azimuth axes. This constrains the tilt in the telescope azimuth axis to be smaller than 1' and indicates that rotation of the array with respect to azimuth is negligible.

The uncertainty in the relative pointing is no greater than 1.5'' for all three arrays. Given the large number of detectors in each array, this small uncertainty in the relative detector positions amounts to a negligible contribution to the pointing error in the final maps.

8.2. Absolute Boresight Pointing

The azimuth and altitude of the telescope encoder readings must be corrected to account for their offset from the true boresight for each frequency band. The correction is different for each of the four central CMB observation azimuths, namely, the equatorial and southern stripes at rising and setting orientations.

The correction is obtained by projecting the data from a particular band and orientation into a map (after having applied the relative pointing solution described above) and comparing the positions of bright point sources to catalog positions. The catalog positions were obtained from the AT20G survey, carried out with the Australia Telescope Compact Array (ATCA). Some of the sources used were extended, but their effect on the result was negligible. The resulting offsets in equatorial coordinates are converted to offsets in the boresight position, and this procedure is iterated to ensure convergence. The boresight offset varies by about 1' over the season, primarily in elevation.

The dominant source of error in the offset correction comes from estimating the centroid positions of the point sources in the maps, before matching them to the catalog positions. This error scales inversely with the square root of the number of point sources available, which was 20 for the 148 GHz band, and with their signal-to-noise ratio (S/N). The uncertainty was found to be 2.6'' for the southern stripe and 5.3'' for the equatorial stripe at 148 GHz. These values were obtained by adding in quadrature the error in the fit from the rising and setting maps.

The telescope pointing is expected to vary slightly due to thermal deformation of the mirror structure. This variation is estimated from observations of Saturn taken at nearly the same azimuth and altitude on different nights. The rms pointing variation of such observations is 4.3''. Since any pixel in the final season maps contains contributions from many different nights, this random variation does not contribute significantly to the pointing uncertainty.

Pointing deviations are significantly higher at dawn when the telescope temperature begins to change more rapidly, showing a trend that repeats every morning. The drift begins nearly 1 hr after sunrise, reaching nearly 50'' an hour later. Data taken more than 1 hr after sunrise are not included in the maps.

In addition, the altitude was observed to drift by about 20'' over the course of the 120 day season, producing pointing errors that were correlated with right ascension. To remove this trend, linear corrections of 0.2'' day^−1 and 0.15'' day^−1 were applied to the rising and setting fields, respectively. These corrections have little effect on the maps other than to remove the small, R.A.-dependent pointing offset.

The net pointing uncertainty is thus dominated by the systematic uncertainty in the alignment of the rising and setting maps, along with some residual variation due to temperature-dependent mirror deformation. Comparing the positions of bright point sources in the maps to their catalog positions, we estimate the pointing error in the final maps to be 4.8'', with no preferred orientation (Marriage et al. 2011b). Note that this uncertainty is much smaller than the beam size and would cause errors of less than 1% in the measured flux for point sources.

9. DETECTOR TIME CONSTANTS

A detector's time response is limited by its electro-thermal properties. We model the optical step response as an exponential decay with a time constant τ. A finite response time results in a small shift in the apparent spatial position of a point source, with the shift depending on the scan direction. In the fits for the relative pointing solution described above, time constants were included in the beam model as a single-pole low-pass filter in the time domain. The time constants are measured with a precision of δτ ≲ 0.5 ms, which is an upper bound on their dispersion from the analysis of 30 Saturn observations. The median time constant of the 148 GHz array is 1.9 ± 0.2 ms (f_3dB = 83.8 ± 8.8 Hz), with only a handful of detectors showing responses slower than τ = 10 ms (f_3dB ≈ 15 Hz). For comparison, Equation (3) implies that ℓ = 10^4 corresponds to 27 Hz.
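
As an illustration of the single-pole response model (a sketch only; the pipeline fits τ within the time-domain beam model rather than applying a filter this way):

```python
import numpy as np
from scipy import signal

def single_pole_response(tod, tau=1.9e-3, fs=398.72):
    """Apply a first-order low-pass filter with time constant tau (seconds),
    the optical step-response model used for the detectors."""
    # Analog H(s) = 1 / (tau*s + 1), discretized with the bilinear transform.
    b, a = signal.bilinear([1.0], [tau, 1.0], fs=fs)
    return signal.lfilter(b, a, tod)

print(f"f_3dB = {1.0 / (2.0 * np.pi * 1.9e-3):.1f} Hz")   # ~84 Hz, median detector
```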

10. DATA SELECTION

Data selection can be divided into two types: data file selection (all detectors) and single-detector TOD cuts. The former determines the number of observing hours, while the latter determines the number of "effective detectors" within each data file, defined as the sum of the fraction of the time that each detector passed the data selection:

$N_{\mathrm{eff}} = \sum_i \frac{T_{i,\mathrm{uncut}}}{T_{\mathrm{total}}}, \qquad (9)$

where T_{i,uncut} is the available time for detector i after data selection and T_total is the total time before data selection. The number of array-wide effective detectors can then be defined as the average of N_eff over all the available data files in the season.
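
Equation (9) amounts to summing the uncut time fraction over detectors; a literal transcription, with a boolean cut mask as the only assumed data structure, is:

```python
import numpy as np

def effective_detectors(uncut_mask):
    """Equation (9): sum over detectors of the fraction of the file that
    survived the cuts. `uncut_mask` is boolean, (n_detectors, n_samples)."""
    return uncut_mask.mean(axis=1).sum()

# Example: 600 detectors fully kept, 100 half-cut, 100 fully cut.
mask = np.zeros((800, 1000), dtype=bool)
mask[:600] = True
mask[600:700, :500] = True
print(effective_detectors(mask))     # 650.0
```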

The remainder of this section presents the data selection methods and statistics for these two types of data cuts.

10.1. Detector Classification

The detectors are classified in three groups.

  • 1.  
    Live detector candidates. Those that have the potential to be used for mapmaking.
  • 2.  
    Dark detector candidates. Those that do not couple to the sky (for example a broken pixel) but can be used to diagnose systematics. This includes the subset of the 32 dark detectors which work properly and the subset of defective detectors with working readout circuit.
  • 3.  
    Broken detectors. Those defective detectors that cannot be used to probe systematic effects. They include those former live and dark detectors with defective readout circuits, and slow detectors (τ > 10 ms). Many of them must be turned off while observing to prevent them from interfering with other detectors.

Several methods were used to classify detectors into these categories. These include assessing correlation with the atmospheric signal (by far the largest signal), searching for consistently oscillating detectors, and finding biasing problems. Table 5 gives a summary of the number of detectors in each of the three groups for each array.

Table 5. Summary of Number of Detectors in each of the Three Groups, Number of Detectors Cut by each Criterion, and Effective Number of Detectors for 148 GHz in 2008

  Number
Detector classification  
Live candidates 795
Dark candidates 128
Broken detectors 133
Total 1056
Cuts by criterion applied to live detectors
Calibration 3 ± 4
Drift 95 ± 61
Correlation 8 ± 16
Gain 2 ± 8
Mid-F. Noise 41 ± 33
HF rms 15 ± 12
HF skewness 3 ± 4
HF kurtosis 7 ± 9
Scan 2 ± 9
Glitch 26 ± 23
Effective detectors 593 ± 95

Note. Errors are the standard deviation of the number of detectors.


Once the live and dark detector candidates have been identified, a number of possible pathological behaviors may still justify removing part of the data from the reduction pipeline.

10.2. Data File Selection

Out of the total number of data files acquired, we rejected files for the following reasons and in the following order.

  • 1.  
    Files that correspond to planet observations and other calibration or engineering tests.
  • 2.  
    Data taken more than 1 hr after sunrise to avoid pointing and beam errors caused by telescope deformations as it was thermally settling.
  • 3.  
    Files with fewer than 400 effective detectors, as they were considered likely to be pathological.
  • 4.  
    Bad weather: PWV greater than 3.0 mm (transmission below 90%).
  • 5.  
    Poor cryogenic performance: detector base temperature more than 7 mK above the nominal temperature or when it changed more than 1 mK within the 15 minute file.
  • 6.  
    Poor calibration: if the relative gain dispersion of the detectors was more than 10%.
  • 7.  
    Data for which the analysis software failed.

The final amount of data available for analysis is summarized in Table 6.

Table 6. Data File Selection in 2008 at 148 GHz

Type Obs. Hours %
Calendar 3264 hr 100%
Total observation 1423 hr 43.6%
Total survey 1260 hr 38.6%
Later than 1 hr after sunrise −198 hr (6.0%)
Low effective detectors −156 hr (4.8%)
High PWV conditions −41 hr (1.2%)
Cryogenic problems −28 hr (0.9%)
High gain dispersion −10 hr (0.3%)
Other −13 hr (0.4%)
Uncut south 772 hr 23.7%
Uncut equator 44 hr 1.3%
Uncut total survey 816 hr 25.0%


10.3. Detector Cuts

Detectors are affected by sporadic pathologies. Depending on the kind of pathology, it may be necessary to reject a section or the full length of a detector TOD from a given data file.

The main causes for these pathologies are quantum jumps of the magnetic flux of a SQUID in the readout circuit (V−ϕ jumps), excessive detector noise, conducted noise from oscillating detectors, excessive electromagnetic pickup, and mechanical contamination, which can be optically or thermally coupled into the signal.

The following tests were performed over 15 minute data files to detect these pathologies.

  • 1.  
Drift test. This probes low-frequency deviations of a detector TOD from the atmospheric signal. The data are first low-pass filtered with a 50 mHz cutoff and calibrated into units of power, as described in Section 7. Then the thermal drift is removed by de-projecting both the dark detector common mode and the housekeeping temperature of the detectors. Finally, the atmosphere signal is removed by de-projecting eight multi-common modes, as explained in the Appendix. The drift error is defined as the standard deviation of the residual data. Detectors in the 148 GHz array are cut as outliers if their drift error is higher than 0.35 fW.
  • 2.  
    Correlation test. The drift test is complemented by finding the correlation between the detector drift and array common mode drift for TOD frequencies below 50 mHz. Detectors that correlate less than 98% are excluded.
  • 3.  
    Gain test. The drift test assesses the shapes of the TODs, but not their amplitudes. The relative detector gain can be quantified by the factor that best fits the atmospheric drift to every single TOD at frequencies below 50 mHz. For this the atmospheric drift is estimated as the array common mode after removing the thermal contamination. Detectors are cut whenever their gains differ from the mean by more than 15%.
  • 4.  
    Mid-frequency noise test. Some pathologies, mainly associated with mechanical contamination, become prominent at frequencies between 0.3 and 1.0 Hz, below the "science band." To isolate pathological detectors, we band-pass filter the data below and above those frequencies, de-project the array common mode (also filtered), and obtain the standard deviation of the residuals, which we call mid-frequency error. Detectors are cut whenever their mid-frequency error is more than 2.5 times the median value for the data file.
  • 5.  
    High-frequency noise tests. At frequencies above 3 Hz the data start being dominated by detector noise, which is chiefly Gaussian. Non-Gaussianities above this frequency are mostly caused by electromagnetic pickup, conducted electrical noise or opto-mechanical perturbations. To isolate them, we high-pass filter the data below 5 Hz and then test for non-Gaussianities by computing the standard deviation, skewness, and kurtosis. This is done within sections of the TOD of one scan period, with a characteristic length of 10.2 s. Using the transformation proposed by D'Agostino & Belanger (1990), the last two statistics yield a normal distribution in the case of Gaussian noise, which is verified for most of our data. Outliers are identified by how much they deviate from the mean. Only sections of the TOD with noise rms lower than 0.9 fW are kept.
  • 6.  
    Scan test. Motion defects and encoder errors are also detected and cut, affecting sub-sections of the TOD. This is done by examining the azimuth time stream and searching for interruptions in the scan pattern.
  • 7.  
Glitch test. The data are also affected by spike-like glitches, for example from cosmic rays. Spikes larger than 10 times the noise rms are cut, including a 0.5 s buffer on either side. Also, whenever two such cuts are separated by less than 5 s they are stitched together into a single cut. If more than five glitches are found in a single detector TOD, then the whole detector TOD is cut (a sketch of this logic is given after this list).
  • 8.  
    Calibration bolometer test. During 2008 observations a calibration bolometer was used to load the detectors with radiation of roughly 400 mK every 24 minutes, each event lasting for 1.3 s. For these, a window of nearly 3 s is excised around the event.38 We included this within the "Glitch" cuts in Table 5.

As a general rule, if more than 20% of the detector TOD would be cut, then the full detector TOD is cut instead.
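As an illustration, the glitch test (item 7 above) can be sketched as follows; the thresholds come from the text, but the robust rms estimator, function name, and variable names are our own and not the pipeline's:

import numpy as np

def glitch_cuts(tod, fsample=400.0, nsigma=10.0, buffer_s=0.5,
                merge_s=5.0, max_glitches=5):
    """Toy spike-cut logic: return (start, end) sample ranges to cut,
    or None if the whole detector TOD should be rejected."""
    rms = 1.4826 * np.median(np.abs(tod - np.median(tod)))  # robust rms estimate
    spikes = np.flatnonzero(np.abs(tod - np.median(tod)) > nsigma * rms)
    if spikes.size == 0:
        return []

    # Group contiguous spike samples into individual glitch events.
    breaks = np.flatnonzero(np.diff(spikes) > 1)
    starts = np.r_[spikes[0], spikes[breaks + 1]]
    ends = np.r_[spikes[breaks], spikes[-1]]
    if starts.size > max_glitches:
        return None  # too many glitches: cut the whole detector TOD

    # Pad each event by the buffer and merge cuts closer than merge_s.
    buf = int(buffer_s * fsample)
    merge = int(merge_s * fsample)
    cuts = []
    for s, e in zip(starts - buf, ends + buf):
        s, e = max(s, 0), min(e, tod.size - 1)
        if cuts and s - cuts[-1][1] < merge:
            cuts[-1] = (cuts[-1][0], e)
        else:
            cuts.append((s, e))
    return cuts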

Dark detectors were selected in an analogous but simplified way. In this case, only full detector TOD cuts were performed. The cut criteria were the drift error of the dark detectors, the gain with respect to the dark common mode,39 and the noise rms, all given in raw data units.

10.4. Data Selection Results

After applying all the selection criteria given above, and considering only the data files available for analysis, the average number of effective detectors was 593 ± 95 in the 2008 season at 148 GHz. The error here is the standard deviation over different data files.

Table 5 shows a summary with the number of detectors in each of the three detector groups for the 148 GHz array, as well as the cut contribution from each of the selection criteria described above. The bottom line is the average number of effective detectors in the season. Figure 8 is a diagram of the 148 GHz array showing the fraction of the time that each detector is uncut. Figure 9 shows the daily number of effective detectors during the 2008 season, along with the weather conditions indicated by the PWV.


Figure 8. Percentage of time that detectors were cut across the 148 GHz array. Each small square represents a single detector. They are oriented as the array is projected on the sky. Note that some rows and columns are always cut, which is mainly due to problems in the biasing (rows) and readout circuits (columns).


Figure 9. Effective number of detectors in the 148 GHz array in the 2008 season. Circles denote daily averages and the error bars are the standard deviation within that day. The PWV level is shown with the dashed line. The increase in the number of effective detectors starting on September 24 occurred after some oscillating detectors, which had been contaminating other detectors, were turned off.


On top of these cuts, a small fraction of the remaining data is cut during the mapmaking process, as described in the following section.

11. MAPMAKING

11.1. Mapping Essentials

To make maps from ACT data, we solve for the best-fit sky given the noise in the data. In particular, we find the sky map that minimizes χ² given a model for the noise and a model for what the data should look like:

$\mathbf{d} = \mathbf{M}\,\mathbf{x} + \mathbf{n}$    (10)

Here $\mathbf {x}$ is the model for which we wish to solve, $\mathbf {M}$ describes how the data depend on the model, and $\mathbf {n}$ is the particular realization of the noise in the ACT data. Traditionally, $\mathbf {x}$ is a vector whose components are the sky map pixels and $\mathbf {M}$ is the pointing matrix. In its simplest conceivable form, each data point sees a single pixel in the map, so each row of $\mathbf {M}$ (corresponding to a single data point) has a single 1 in the column corresponding to the map pixel at which it was pointed. However, our model for the data, $\mathbf {x}$, can contain many contributions in addition to the sky map: components include atmospheric noise, correlated signals in the data, and missing (cut) data samples as discussed below. The mapping formalism easily generalizes to cover multiple components as long as the data depend on them linearly:

$\mathbf{d} = \mathbf{M}_1\,\mathbf{x}_1 + \mathbf{M}_2\,\mathbf{x}_2 + \cdots + \mathbf{n}$    (11)

$\phantom{\mathbf{d}} = \left[\mathbf{M}_1\ \mathbf{M}_2\ \cdots\right]\left[\mathbf{x}_1^T\ \mathbf{x}_2^T\ \cdots\right]^T + \mathbf{n} \equiv \mathbf{M}\,\mathbf{x} + \mathbf{n}$    (12)

With an estimate of the noise covariance $\mathbf {N} \equiv \left<\mathbf {n} \mathbf {n}^T \right>$, which is an $n_{\rm sample}$ by $n_{\rm sample}$ matrix ($n_{\rm sample} \simeq 10^9$), we wish to find the model that maximizes the likelihood function

$\mathcal{L} \propto \exp\left[-\tfrac{1}{2}\left(\mathbf{d}-\mathbf{M}\,\mathbf{x}\right)^T \mathbf{N}^{-1}\left(\mathbf{d}-\mathbf{M}\,\mathbf{x}\right)\right]$    (13)

where $\mathbf {d}$ is a vector containing all the data samples.

The standard linear least-squares solution is

$\mathbf{x} = \left(\mathbf{M}^T\mathbf{N}^{-1}\mathbf{M}\right)^{-1}\mathbf{M}^T\mathbf{N}^{-1}\,\mathbf{d}$    (14)

The matrix $\mathbf {M}^T\mathbf {N}^{-1}\mathbf {M}$ is too big to be practicably inverted directly (we typically have $10^7$ map pixels), so we instead iteratively solve the least-squares equation for $\mathbf {x}$ using a Preconditioned Conjugate Gradient (PCG) scheme (Wright et al. 1996; Hinshaw et al. 2003; Press et al. 2007). Preconditioning involves introducing $\mathbf {P}$, an approximate inverse of $\mathbf {M}^T \mathbf {N}^{-1} \mathbf {M}$, in order to speed up convergence of classic Conjugate Gradient, and solving the better-conditioned system:

$\mathbf{P}\,\mathbf{M}^T\mathbf{N}^{-1}\mathbf{M}\,\mathbf{x} = \mathbf{P}\,\mathbf{M}^T\mathbf{N}^{-1}\,\mathbf{d}$    (15)
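For concreteness, the structure of such a PCG solve can be sketched on a toy problem. This is not the Ninkasi code: the dense pointing matrix, white-noise weights, and variable names below are stand-ins for illustration only.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_samp, n_pix = 2000, 50

# Toy pointing matrix: each time sample hits exactly one map pixel.
pix = rng.integers(0, n_pix, size=n_samp)
M = np.zeros((n_samp, n_pix))
M[np.arange(n_samp), pix] = 1.0

x_true = rng.standard_normal(n_pix)
d = M @ x_true + 0.1 * rng.standard_normal(n_samp)
Ninv = np.eye(n_samp)  # white-noise weights for this toy problem

# Left- and right-hand sides of M^T N^-1 M x = M^T N^-1 d.
A = LinearOperator((n_pix, n_pix), matvec=lambda v: M.T @ (Ninv @ (M @ v)))
b = M.T @ (Ninv @ d)

# Jacobi preconditioner built from the hit-count map, as for the sky-map block.
hits = np.bincount(pix, minlength=n_pix).astype(float)
precond = LinearOperator((n_pix, n_pix),
                         matvec=lambda v: v / np.maximum(hits, 1.0))

x_map, info = cg(A, b, M=precond, maxiter=500)
print(info, np.max(np.abs(x_map - x_true)))  # info == 0 means converged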

To map ACT data, we use the mapmaking code Ninkasi and run on the Scinet General Purpose Cluster (GPC; Loken et al. 2010). The mapmaking algorithm for the maps in this release is improved from that used in previous papers (Fowler et al. 2010; Marriage et al. 2011b, 2011a; Dunkley et al. 2011; Das et al. 2011a; Hajian et al. 2011; Sehgal et al. 2011) in four ways: (1) rather than solving for detector-correlated noise (including atmospheric noise) explicitly, we put the correlations in the noise matrix, producing a substantial noise improvement in the damping tail; (2) we explicitly solve for cut data (time-stream gaps), which made the problem symmetric; (3) we subtract models for the point sources in the ACT maps directly from detector time streams, recovering part of the power that otherwise was biased low by the mapper; and (4) we re-estimate the noise after subtracting an initial map to remove signal-induced bias in the noise estimation.

The mapping is done in a cylindrical equal-area projection with a standard latitude of δ = −53.5° and pixels of 30'' × 30'', roughly one-third of the beam FWHM.
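A minimal sketch of how sky coordinates map to pixel indices in such a projection is given below; the reference right ascension, rounding convention, and function names are our own illustrative choices, not the pipeline's WCS conventions.

import numpy as np

def cea_pixel(ra_deg, dec_deg, ra0_deg, dec0_deg=-53.5, pix_arcsec=30.0):
    """Cylindrical equal-area pixel indices with standard latitude dec0.

    x scales as (ra - ra0) * cos(dec0); y as sin(dec) / cos(dec0),
    both expressed in degrees on the map before dividing by the pixel size.
    """
    cosd0 = np.cos(np.radians(dec0_deg))
    pix_deg = pix_arcsec / 3600.0
    x = (np.asarray(ra_deg) - ra0_deg) * cosd0
    y = np.degrees(np.sin(np.radians(dec_deg))) / cosd0
    return np.round(x / pix_deg).astype(int), np.round(y / pix_deg).astype(int)

# Example: pixel offsets of a point near the center of the southern stripe.
print(cea_pixel(90.0, -52.7, ra0_deg=90.0))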

11.2. Pre-processing

In addition to data selection, calibration, and pointing, there are a few other pre-processing steps that are done to the data before solving for the maps.

We remove the median of each detector time stream for each 15 minute period, and subtract a single array-wide slope across the period so that the ends of the time streams roughly line up. We do this to reduce ringing in Fourier transforms and to facilitate searching for correlations among the detectors. We linearly interpolate across gaps in the data, such as those arising from cosmic-ray hits (again, to reduce Fourier artifacts: the data in the cuts are otherwise not used). We deconvolve the time streams by the anti-alias filter (described in Section 5) and the detector time constants (discussed in Section 9).
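The following toy sketch illustrates these conditioning steps (per-detector median removal, a single array-wide slope, and linear interpolation across cut gaps); the slope estimator and variable names are illustrative, not the pipeline implementation.

import numpy as np

def preprocess(tods, cuts):
    """Condition a block of time streams before mapmaking.

    tods : (n_det, n_samp) array of calibrated detector time streams
    cuts : boolean (n_det, n_samp) mask, True where samples are cut
    """
    n_det, n_samp = tods.shape
    out = tods - np.median(tods, axis=1, keepdims=True)  # per-detector median

    # One array-wide slope so that the TOD ends roughly line up.
    t = np.arange(n_samp)
    slope = np.mean(out[:, -1] - out[:, 0]) / (n_samp - 1)
    out = out - slope * t

    # Linear interpolation across cut gaps (cosmic-ray hits, etc.).
    for i in range(n_det):
        bad = cuts[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(t[bad], t[~bad], out[i, ~bad])
    return out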

Next, the non-optical contamination signals are reduced using the dark detectors. This is done as follows: we take the time streams from each dark detector and subtract a mean and a slope. We then high-pass filter the dark time streams, filtering out any signal below 5 Hz. Most of the remaining signal is in only a few independent modes. We take those corresponding to the seven largest eigenvalues of the resulting covariance matrix and do a linear least-squares fit to the live detector data, which had been similarly processed (having removed the mean and slope, and high-pass filtered). Finally, we subtract the reconstructed fit from the data. We can do this because the modes subtracted are not correlated to the sky signal.
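A minimal sketch of this dark-mode deprojection, assuming the dark and live time streams have already been conditioned and high-pass filtered as described, might look like the following; the SVD route (equivalent to taking the leading eigenvectors of the covariance matrix) and all names are our illustrative choices.

import numpy as np

def remove_dark_modes(live, dark, n_modes=7):
    """Fit the leading dark-detector modes to each live detector and subtract.

    live : (n_live, n_samp) processed live-detector time streams
    dark : (n_dark, n_samp) processed dark-detector time streams
    """
    # Leading temporal modes of the dark detectors via an SVD.
    u, s, vt = np.linalg.svd(dark, full_matrices=False)
    modes = vt[:n_modes]            # (n_modes, n_samp), orthonormal rows

    # Least-squares fit of the modes to every live detector, then subtraction.
    coeff = live @ modes.T          # (n_live, n_modes)
    return live - coeff @ modes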

In Fowler et al. (2010), we downsampled the time streams from 400 Hz to 200 Hz, using a time-domain triangular kernel of the form [0.25 0.50 0.25]. The ACT signal band (nearly 30 Hz according to Section 3) is well away from the downsampled frequency limit. We find that downsampling does affect the raw power spectrum by up to several percent for ℓ ≳ 5000. This should be mostly corrected for when using a beam measured from maps derived from downsampled data. However, we also find that the S/N on point sources is 2% higher for non-downsampled maps, an effect caused by the anti-aliasing filter used for downsampling. Consequently, we do not downsample the time streams in the results presented here.

Also in contrast to Fowler et al. (2010), we carry out one further step in the time domain. With the 30'' pixels somewhat undersampling the ACT 1.37' beam, we find that to get percent-level accuracy in source fluxes, we must deal with the bulk of the source flux directly in the time streams. To do this, we make first-pass maps in which sources are found and their fluxes estimated. Simulations show that these fluxes are typically accurate to a few percent, with source flux systematically underestimated by approximately 4%. We then take these source fluxes and model them directly in the time streams using the full ACT beam, and subtract the model from the time streams. We do a simple source-only projection into a map, and add this to the (mostly) source-free maximum likelihood map. We find that, in simulations, this recovers mean source fluxes to 1% accuracy, at which point it is subdominant to the calibration uncertainty. We do no additional filtering in the time domain.

11.3. Noise Modeling

The noise structure is modeled in both frequency and time domains. We include, as a term in the noise, a time domain windowing of the first and last 20 s of each time stream of the form (1 − cos x)/2, again to reduce Fourier ringing. We then search for correlations across the data by examining their covariance matrices, split at 4 Hz (ℓ ≃ 1500). We find all eigenvectors in the low-frequency covariance matrix with corresponding eigenvalues greater than 3.5² times the median (the eigenvalues of the data covariance matrix $\Delta^T\Delta$ are the squares of the corresponding data singular values, so this corresponds to an amplitude of 3.5 times the median in the time streams). We then project those eigenvectors out of the high-frequency covariance matrix, and again find all eigenvectors with eigenvalues larger than 3.5² times the median in the remaining high-frequency matrix. We find that the maps are not particularly sensitive to the exact threshold values chosen. This procedure typically finds 15–20 low-frequency eigenvectors and one or two high-frequency ones. The eigenvectors are response patterns of correlations across the array, and usually correspond to things like a common mode, gradients across the array, and the row-correlated noise. They provide the linear combination of detector time streams needed to produce a correlated mode, represented by the vector $\mathbf {\hat{v}}$ in Equation (A1). This projection corresponds to removing around 20 modes out of the approximately 600 available.

With the shapes of the array correlations in hand, we can use them to complete the description of the noise. We solve for the correlated modes corresponding to the eigenvectors and subtract them from the time streams. We then Fourier transform the (de-correlated) detector time streams and the correlated modes, and model them using frequency bins. In each frequency bin, we find the average power in each detector time stream and in each correlated mode. If the detector noises are denoted by the diagonal matrix $\mathbf {N}_{D}$, the detector covariance eigenvectors by the $n_{\rm detector}$ by $n_{\rm mode}$ matrix $\mathbf {V}$ and their corresponding bin-wise noise powers by $\mathbf {N}_{V}$, then the bin-wise Fourier-space noise is simply $\mathbf {N}_{f} \equiv \mathbf {N}_{D}+\mathbf {V}\,\mathbf {N}_{V}\,\mathbf {V}^T$. Here $\mathbf {N}_f$ is an $n_{\rm detector}$ by $n_{\rm detector}$ matrix acting on a single frequency bin. This matrix can be quickly inverted using the Sherman–Woodbury formula (Duncan 1944; see Hager 1989 for a review). If we denote the time-domain edge tapering operator by $\mathbf {W}$ and the Fourier transform operator by $\mathbf {F}$, then our entire noise inverse becomes

$\mathbf{N}^{-1} = \mathbf{W}\,\mathbf{F}^{-1}\,\mathbf{N}_{F}^{-1}\,\mathbf{F}\,\mathbf{W}$    (16)

where the operator $\mathbf {N}_{F}^{-1}$ is composed of all the $\mathbf {N}_f^{-1}$ blocks, such that it acts on every frequency bin in the Fourier transform. In the previous equation, every operator can be represented by an $n_{\rm sample}$ by $n_{\rm sample}$ matrix, in which detector operations like $\mathbf {N}_f$ are expanded such that all samples within a single detector are treated in the same way. To minimize our sensitivity to any low-frequency non-Gaussian component of the atmospheric noise, we taper the Fourier weights at frequencies below 0.5 Hz, with the weight explicitly set to zero below 0.25 Hz. We note that the noise in the mapmaking equation (Equation (15)) can be interpreted as a set of weights. The map is optimal if the weights are perfect, but the map remains unbiased as long as identical weights are used for the left and right sides of the equation, and the weights are uncorrelated with the sky signal. This is the essential part of the method. Time-stream filters are by design biased and must be accounted for with simulations. In our treatment, by contrast, modes in the map that might cause problems are de-weighted rather than filtered, producing an unbiased solution through careful consideration of the noise structure of the data.
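Because $\mathbf{N}_f$ is diagonal plus a low-rank term, applying $\mathbf{N}_f^{-1}$ in each frequency bin only requires inverting a small $n_{\rm mode}$ by $n_{\rm mode}$ matrix. A self-contained sketch of this Woodbury-style inversion is given below (illustrative names only, not the Ninkasi implementation):

import numpy as np

def invert_bin_noise(N_D, V, N_V):
    """Apply N_f^{-1} for one frequency bin, where N_f = diag(N_D) + V diag(N_V) V^T.

    N_D : (n_det,) per-detector noise powers (diagonal part)
    V   : (n_det, n_mode) correlated-mode eigenvectors
    N_V : (n_mode,) correlated-mode noise powers
    Returns a function that applies N_f^{-1} to a vector.
    """
    Dinv = 1.0 / N_D
    # Small n_mode x n_mode matrix from the Woodbury identity.
    C = np.diag(1.0 / N_V) + V.T @ (Dinv[:, None] * V)
    Cinv = np.linalg.inv(C)

    def apply(x):
        y = Dinv * x
        return y - Dinv * (V @ (Cinv @ (V.T @ y)))
    return apply

# Quick check against a brute-force solve on random inputs.
rng = np.random.default_rng(1)
n_det, n_mode = 100, 3
N_D = rng.uniform(1.0, 2.0, n_det)
V = rng.standard_normal((n_det, n_mode))
N_V = rng.uniform(5.0, 10.0, n_mode)
Nf = np.diag(N_D) + V @ np.diag(N_V) @ V.T
x = rng.standard_normal(n_det)
assert np.allclose(invert_bin_noise(N_D, V, N_V)(x), np.linalg.solve(Nf, x))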

One further difference remains between these maps and those used in Fowler et al. (2010). Here, for every single time-stream sample cut, we fit for its (unknown) amplitude as part of the map solution, as suggested by, for example, Patanchon et al. (2008). In addition to cuts from cosmic rays and the like, we explicitly cut the first and last second of each TOD to give the mapping algorithm the freedom to match the TOD beginning and end as smoothly as possible. Since these data are already highly downweighted by the time-domain taper, the additional amount of data loss is negligible.

11.4. Map Solution

With these pre-processing steps, the form of the noise, and the form of the solution specified, we must then actually solve for the map. We use the classic PCG algorithm, using the hit-count map as a Jacobi (diagonal) preconditioner for the sky map part of the solution. To recover the scales around ℓ ≃ 200 and below, we need a few hundred PCG iterations. The mapmaking is quite CPU-intensive: each iteration takes about a minute on 1600 2.53 GHz Nehalem cores on the Scinet GPC, and a full run to 1000 iterations takes about a day of wall clock time, equivalent to roughly 5 yr of CPU time.

We note that care must be taken when estimating the noise to make sure it does not bias the map. Consider the following thought experiment. Two detectors are scanned across a source, and their weights are estimated from the internal scatters of their time streams. In general, noise in the detectors will lead to one of them observing a lower flux from the source. If the detector weights are then measured and the source is not removed from the time streams, the detector which happened to measure the lower flux will usually have a lower variance in its data, and will therefore on average receive more weight than the other detector. When the detectors are combined, the more heavily weighted low-flux measurement will lead to the source flux being systematically biased low. The magnitude of the bias is set by the S/N in the individual time streams, not the final S/N. The bias can be mitigated if an estimate of the signal is removed from the time streams before noise estimation. Since the CMB is highly subdominant to the noise in ACT time streams, this effect is small, but simulations show that it is not negligible. Therefore, first we make an initial map using the full data set and the noise measured directly from the time streams, then we subtract that map from the time streams to reduce the sky signal in them before estimating the noise for a second time, and finally we use this improved estimate of the noise to make the final maps.

ACT must take a bit more care than all-sky experiments because of the large change in sensitivity across the map. In particular, near the map edges, only a small amount of data contributes to the map, and so removing the raw map would lead to an artificial reduction in the noise of the data near the edges, which would lead to an artificial increase in their weight. To prevent this from happening, the starter maps used are first filtered and apodized. The apodization is generated by first rescaling the hit count map to the 0–1.0 range, then setting all values higher than a threshold to unity, and finally smoothing the resulting weight map with a Gaussian window. Furthermore, from the starter data map we filter out scales larger than ℓ = 300 where atmospheric noise is very large, and scales smaller than ℓ = 3000 where the map is highly noise dominated and simulations show that the bias effect is truly negligible. Finally, we generate the starter map by multiplying the filtered data map with the apodization window. To test this procedure, we carry out the two-step procedure on both the real data and the real data with a simulated map injected into it. We compare the difference of these two sets of maps to the original simulation and show the resulting transfer function in Figure 10. At the low-pass scale of ℓ = 300, the maps are unbiased to 0.5%, and rapidly become unbiased to better than a part in 10³ on smaller scales. Moreover, we find consistency of the ACT×WMAP map cross spectra with the ACT auto-spectrum and the WMAP binned power spectrum for scales in the range 300 < ℓ < 1000, as shown in Hajian et al. (2011).
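As an illustration, the apodization window described above can be sketched as follows; the threshold and smoothing scale here are placeholders, not the values used in the pipeline.

import numpy as np
from scipy.ndimage import gaussian_filter

def apodization(hits, threshold=0.1, smooth_pix=10.0):
    """Build an apodization window from a 2D hit-count map:
    rescale to [0, 1], set values above a threshold to unity,
    then smooth with a Gaussian window."""
    w = hits / hits.max()
    w = np.where(w > threshold, 1.0, w)
    return gaussian_filter(w, smooth_pix)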


Figure 10. Transfer function of the mapmaking process. A simulated map (in) was added to the time streams and mapped together with the real data (sim-inject map). Then the real data map (previously computed) was subtracted from the sim-inject map to produce the output map (out). The transfer function was computed as the power spectrum of the output map divided by the cross-spectrum of the output map and the simulated map. The result is essentially unity for all scales above our noise-estimation input map filter scale of 300 (see the text), with a slight boosting of large scales that reaches 2% at multipoles of 200.


11.5. Map Analysis and Results

The resulting maps cover an area of 845.6 deg² on the sky, ranging between 20h43m and 7h53m in right ascension and between −48.1° and −57.2° in declination. A total of five maps were made: one using the full data set, plus four partial versions using independent subsets of the data for noise estimation purposes. Each map is accompanied by a hit map giving the number of data samples that fell on every pixel of the corresponding map.

The noise in the maps is strongly scale dependent and roughly follows the hit counts. In general, noise at smaller scales is mostly uncorrelated between pixels and inversely proportional to the square root of the number of hits, while at larger scales correlations produced by the atmosphere and systematics become more evident. Correlated noise is better behaved in the Fourier domain, where its covariance is approximately diagonal, so it can be represented as a power spectrum. Examples of noise analyses applied to these maps can be found in Fowler et al. (2010), Marriage et al. (2011b), and Das et al. (2011a).

These maps are an overall improvement over those used in Fowler et al. (2010), Marriage et al. (2011b, 2011a), Dunkley et al. (2011), Das et al. (2011a), Hajian et al. (2011), and Sehgal et al. (2011), marked particularly by lower noise on angular scales in the range 500 < ℓ < 2000, where the variance is reduced by up to a factor of two or three. However, the original maps were already nearly sample-variance limited on these scales, so errors on the power spectra are only modestly reduced. These changes are due to slightly different data selection cuts, improvement in the noise treatment during the mapping process, and improved per-detector calibration.

Figure 11 shows the final 2008 southern map, overlaid with contours of iso-sensitivity (35, 50, and 65 μK arcmin), as different regions of the map have different integration times. These sensitivities were computed using the number of samples per unit area and the NET given in Table 3. We ran 2000 PCG iterations on this map to ensure that it is truly converged, although, as mentioned above, all science scales converge in far fewer iterations.


Figure 11. Final map of the southern region observed at 148 GHz in 2008, obtained after 2000 PCG iterations. The contours in white join regions with the same sensitivity, namely, 35, 50, and 65 μK arcmin, with increasing sensitivity toward the center of the map. The total area enclosed by the contours is 220 deg², 420 deg², and 530 deg², respectively. The region between 18h54m and 21h15m in R.A. is not shown because it was sparsely observed and has low sensitivity. Most of the analysis has been done in the deeper region between roughly 0h and 7h in R.A. The map has been high-pass filtered for clarity, suppressing scales at ℓ < 300. The CMB anisotropies can be seen by eye. There is some evidence of large-scale systematic noise, especially near the edges, observed as long horizontal features, which are mostly related to scan-synchronous systematics (such as ground spillover). Despite the scale of the plot, it is still possible to see some bright point sources and SZ clusters; for example, the Bullet cluster can be found at R.A. 6h58m38s, decl. −55°57'0''.


Figure 12 shows the power spectrum computed from this map, compared to the previously published versions in Fowler et al. (2010) and Das et al. (2011a). The reduced uncertainty in the spectrum reflects the improvement in the quality of the map.


Figure 12. Power spectrum of the 148 GHz map obtained after 2000 PCG iterations, compared to the power spectra in Fowler et al. (2010) and Das et al. (2011a). The thick orange curve shows the best-fit model, including the CMB, secondary anisotropies, and the point-source contribution, taken from Dunkley et al. (2011). As shown by the error bars, the power spectrum derived from this map represents an improvement over previous maps.


The parameters obtained from a fit to the spectrum released with this paper are slightly shifted with respect to the values presented in Dunkley et al. (2011). The largest shifts in the primary parameters were for $n_s$ and $\Omega_b h^2$, at Δσ = 0.3, and for $\theta_A$, with Δσ = 0.7, while the inferred dark energy density changed by a negligible 0.03σ. The other parameters changed by less than 0.07σ. The shift in $\theta_A$ is dominated by changes around the third peak of the power spectrum, caused by the new calibration, cuts, and pointing used in these maps. The secondary parameters were affected by the new mapping procedure, which decreased the Poisson point-source level from 13.7 μK² to 12.5 μK². The upper 95% confidence limits on the correlated point-source amplitude and the SZ amplitude are unchanged.

Also, a cross-correlation analysis with the BLAST maps (http://blastexperiment.info/) shows correlations detected at the 25σ level, implying a detection of emission by radio and dusty star-forming galaxies at high ℓ, as described in Hajian et al. (2012).

Finally, the fluxes of sources as estimated in this data release are approximately 5% higher than those reported in Marriage et al. (2011b), due to the new source treatment in the mapmaking and a change in the beam solid angle. The best representation of the beam shape, as well as the window function for power spectrum analysis, is released together with the maps.

12. CONCLUSIONS

We have fully characterized the data from the 2008 season of observations from ACT, including data selection, calibration, pointing, random and systematic noise, and atmospheric contamination.

Observations in 2008 yielded a total of nearly 1260 hr of CMB survey data at 148 GHz, distributed between two observation stripes centered at declinations near −53° and 0°, covering approximately 850 and 280 deg², respectively. After data selection, the observing time was reduced to nearly 816 hr. Out of a potential 1024 live detectors, the number of effective detectors was on average 593. Combining observing time and detector performance yields a survey efficiency of 38%. Including the ratio of observing time to calendar time, the overall efficiency was 15%.

The uncorrelated noise in the data, which excludes the atmospheric fluctuations, is dominated by detector noise. Between 5 Hz and 20 Hz, we found a total NET of $32\,\mu \mathrm{K}\sqrt{\mathrm{s}}$ at 148 GHz. In other words, given 1 hr of observation per square degree, or 1 s of integration per square arcminute, the noise should be about 32 μK arcmin.
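Spelling out the unit conversion behind this statement:

$\dfrac{1\ \mathrm{hr}}{1\ \mathrm{deg}^2} = \dfrac{3600\ \mathrm{s}}{3600\ \mathrm{arcmin}^2} = \dfrac{1\ \mathrm{s}}{\mathrm{arcmin}^2}, \qquad \sigma_{\rm map} = \mathrm{NET}\,\sqrt{\dfrac{\Omega_{\rm pix}}{t_{\rm pix}}} = 32\,\mu\mathrm{K}\sqrt{\mathrm{s}}\times\sqrt{\dfrac{1\ \mathrm{arcmin}^2}{1\ \mathrm{s}}} = 32\,\mu\mathrm{K\ arcmin}.$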

The correlated noise is dominated at low frequencies by the atmosphere and thermal drift, while at higher frequencies electromagnetic pickup and mechanical vibrations are the largest sources of correlated noise. The former are characterized in frequency by negative power laws, equaling the detector white noise spectra at approximately 2 Hz. The latter show up mostly as narrowband signals emerging from the detector noise. They can all be modeled as correlated modes added to each detector TOD. Once identified, they can be de-weighted in the map solution, minimizing the errors in the map, while keeping the solution unbiased.

The data processing steps for mapmaking can be summarized as follows. The median signal level of each detector is removed together with a single array-wide slope across the 15 minute file, and the inverse anti-alias filter and time constant deconvolution are applied (see Sections 5 and 9). Then the pre-calculated data selection, calibration, and pointing solution are applied (see Sections 10, 7, and 8). After this, seven pre-computed dark modes are de-projected from each detector TOD (see Section 11 and the Appendix). Before mapmaking, the expected signal from the bright point sources is subtracted from the time streams. Then the maps are made by minimizing χ² with a noise model that includes correlated modes, cuts (missing time-stream samples), and frequency dependence, as described in Section 11. This is done twice, with an estimate of the map removed from the data before the noise estimation is repeated in the second mapping pass. Finally, the flux from the previously removed point sources is re-added to the maps. The resulting maps cover 845.6 deg² on the sky and are consistent with WMAP at angular scales measured in common.

The same area of the sky covered by these maps was observed by the South Pole Telescope (SPT) team, who have recently made their data public (Schaffer et al. 2011). Figure 13 provides a side-by-side comparison between the ACT map and the SPT map for the same region of the sky, using the same filtering used in the SPT release. The correlation between CMB features in the two maps is clear to the eye, supporting the excellent quality of both measurements. It is also clear that the ACT map is noisier than the SPT map; this will improve in future map releases, when data from the subsequent observing seasons (2009 and 2010) are included.


Figure 13. Side-by-side comparison between the ACT map (season 2008) and the SPT map for the same region of the sky. The left panel shows the ACT map high-pass filtered with a cos²-like filter that rises from 0 to 1 over 100 < ℓ < 300, and the center and right panels show the ACT and SPT maps, respectively, under the same high-pass filter used in the SPT data release (Schaffer et al. 2011). Agreement between the CMB features in the two maps is clear by eye.


Sky maps for the 148 GHz ACT southern observations from 2008, described in this paper, are available through NASA's Legacy Archive for Microwave Background Data Analysis (LAMBDA), where a variety of ACT analysis software, data products, and model templates are also available. Future data releases will include ACT observations at higher frequencies and subsequent observing seasons, as well as sky coverage in ACT's equatorial stripe, which overlaps numerous other observing programs.

This work was supported by the U.S. National Science Foundation through awards AST-0408698 and AST-0965625 for the ACT project, and PHY-0855887 and PHY-1214379. Funding was also provided by Princeton University, the University of Pennsylvania, and a Canada Foundation for Innovation (CFI) award to UBC. ACT operates in the Parque Astronómico Atacama in northern Chile under the auspices of the Comisión Nacional de Investigación Científica y Tecnológica de Chile (CONICYT). Computations were performed on the GPC supercomputer at the SciNet HPC Consortium. SciNet is funded by the CFI under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund–Research Excellence; and the University of Toronto. We specially thank Astro-Norte, Masao Uehara, Felipe Rojas, Patricio Gallardo, Omelan Strysak, Bill Page, Katerina Visnjic, Ben Schmidt, David Faber, and Benjamin Walter. R.D. received additional support from a CONICYT scholarship, from MECESUP, from Fundación Andes, from FONDECYT-11100147, from Centro de Astrofísica y Tecnologías Afines CATA del Proyecto Financiamiento Basal PFB06, and from Centro de Astrofísica FONDAP 15010003. N.S. is supported by the U.S. Department of Energy contract to SLAC no. DE-AC3-76SF00515 and by the NSF under Award No. 1102762. E.R.S. acknowledges support by NSF Physics Frontier Center grant PHY-0114422 to the Kavli Institute of Cosmological Physics. A.K. has been supported by NSF-AST-0807790 for work on ACT. R.H. acknowledges funding from the Rhodes Trust and Christ Church. We are grateful for the assistance we received at various times from the ALMA, APEX, ASTE, CBI/QUIET, and NANTEN2 groups.

APPENDIX: MODE SELECTION AND REMOVAL

Contaminant signals, like the atmosphere and systematic signals, produce correlations between detector TODs. Removing this "correlated" noise is important for both mapmaking and data characterization, so we devote this appendix to explaining how this is done in more detail.

A contaminant signal can be modeled as a time stream superimposed on the TODs from a set of individual detectors (common mode). We call this time stream a "correlated mode." One way to estimate it is using an appropriate linear combination of detector TODs. For instance, the common mode, defined as a linear combination with equal weight of all detector TODs, is a good estimator of the atmosphere signal. Organizing all the detector TODs into columns of an $n_{\rm sample}$ by $n_{\rm detector}$ matrix $\mathbf {A}$, the correlated mode $\mathbf {\hat{m}}$ can be expressed as

$\mathbf{\hat{m}} = \mathbf{A}\,\mathbf{\hat{v}}/s$    (A1)

where $\mathbf {\hat{v}}$ is a vector of unit magnitude representing the $n_{\rm detector}$ coefficients of the linear combination, and s normalizes the mode so it also has unit magnitude. We henceforth define the correlated modes such that they are always normalized.

Once a correlated mode has been identified, it can be fitted and subtracted from the data:

$\mathbf{A}_{\rm clean} = \mathbf{A} - \mathbf{\hat{m}}\left(\mathbf{\hat{m}}^T\mathbf{A}\right) = \mathbf{A} - s\,\mathbf{\hat{m}}\,\mathbf{\hat{v}}^T$    (A2)

We can also construct correlated modes $\bf \hat{m}$ in other ways besides the linear combinations in Equation (A1) (for example, from a thermometry time stream), and in this case the first equality in Equation (A2) must be used directly, rather than the second equality.

One can identify important correlated modes using the singular value decomposition (SVD) of the matrix $\mathbf {A}$:

$\mathbf{A} = \mathbf{U}\,\mathbf{S}\,\mathbf{V}^T$    (A3)

where the columns of $\mathbf {U}$ can be identified as normalized correlated modes $\mathbf {\hat{m}}$, $\mathbf {S}$ is a diagonal matrix containing the singular values of $\mathbf {A}$ (identified as the normalization factor, s, in Equation (A1)), and the columns of $\mathbf {V}$ are the eigenvectors of the covariance matrix $\mathbf {A}^T\mathbf {A}$ (identified as vector $\mathbf {\hat{v}}$ in Equation (A1)). The SVD mode selection can be tailored to specific signals by first finding the common mode from subsets of detector TODs, and then applying the previous method to find the strongest correlated modes out of this reduced set of modes. For example, to find modes that are more likely to correlate in rows, we first find the common mode from all detector TODs in each row (32 row common modes in total), and stack them together as columns of a matrix $\mathbf {A}_{{\rm row}}$. The strongest modes are then readily found using SVD. This method is useful to identify row- and column-correlated modes, and sub-array-scale modes from the atmosphere. For the latter, we divide the array into 16 square blocks, finding the common mode in each one before applying the SVD. We call the set of modes found this way a "multi-common mode."
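A minimal sketch of this multi-common-mode construction is given below; the block assignment, mode count, and names are illustrative, and it assumes every block contains at least one detector.

import numpy as np

def multi_common_modes(tods, det_block, n_blocks=16, n_modes=8):
    """Form the common mode of each detector block, stack the block common
    modes as columns of a matrix, and keep the strongest SVD modes (Eq. A3).

    tods      : (n_det, n_samp) detector time streams
    det_block : (n_det,) integer block index in [0, n_blocks)
    Returns the leading normalized modes (columns) and their singular values.
    """
    n_det, n_samp = tods.shape
    A_blocks = np.zeros((n_samp, n_blocks))
    for b in range(n_blocks):
        A_blocks[:, b] = tods[det_block == b].mean(axis=0)  # block common mode

    # Columns of U are the normalized correlated modes m-hat of Eq. (A3).
    U, S, VT = np.linalg.svd(A_blocks, full_matrices=False)
    return U[:, :n_modes], S[:n_modes]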

An important consideration when using modes obtained from linear combinations of live detectors is that they are also correlated to the sky signal. For example, naively removing the common mode effectively filters the sky, depressing scales larger than the array size (24' or ℓ ≲ 450). Naturally, the effect is stronger for the multi-common mode, which filters scales larger than a fourth of the array (6' or ℓ ≲ 1800). For this reason, the multi-common mode is used to calculate the drift error, but it cannot be naively removed for making maps, nor can any other mode obtained from live detectors.

In Section 11, the appropriate way of projecting modes out of the data is discussed in the context of mapmaking.

A.1. Dark Mode Removal

The dark detectors share the same readout as the live detectors, so they can capture systematics like thermal drift, electromagnetic pickup, and magnetic pickup. Moreover, as they are not coupled to the optical signal, correlated modes obtained from dark detectors (dark modes) can be safely removed as a preprocessing step before making maps. For example, the thermal drift is well represented by the low-pass-filtered common mode of the dark detectors.

For row- and column-correlated electromagnetic signals, the ability to identify correlated modes depends on whether the desired row or column of the array contains a working dark detector. The array design uses one dark detector per column, always in the same row. This is ideal for identifying column-correlated modes, but not for row-correlated modes. For the latter, we must rely on the broken live detectors with a properly working readout circuit, as described in Section 10. Nevertheless, the row- and column-correlated modes can still be identified by using SVD analysis over all dark detectors. Note that it is useful to first remove the slow thermal drift from the dark detectors before trying to find these other higher frequency signals. Figure 7 shows an example of correlation matrices before and after having removed the dark modes. Note that both column and row correlations are significantly suppressed after the removal.

Footnotes

  • 32 
  • 33 

    For further detail see http://slimdata.sourceforge.net/.

  • 34 

    For pointing accuracy reasons, we ended up using only data obtained less than 1 hr after sunrise.

  • 35 

    The central frequency for a Rayleigh–Jeans source.

  • 36 
  • 37 

    We have implemented automatic methods to identify oscillating detectors on the fly and disconnect them.

  • 38 

    The calibration bolometer was not used after 2008.

  • 39 

    Note that the common mode of the dark detectors is driven by the thermal drift of the cryostat, which is the second-largest signal in the live detector data.
