Article

Coaxing Cosmic 21cm Fluctuations from the Polarized Sky using m-mode Analysis


Abstract

In this paper we continue to develop the m-mode formalism, a technique for efficient and optimal analysis of wide-field transit radio telescopes, targeted at 21 cm cosmology. We extend this formalism to give an accurate treatment of the polarised sky, fully accounting for the effects of polarisation leakage and cross-polarisation. We use the geometry of the measured set of visibilities to project down to pure temperature modes on the sky, serving as a significant compression, and an effective first filter of polarised contaminants. We use the m-mode formalism with the Karhunen-Loève transform to give a highly efficient method for foreground cleaning, and demonstrate its success in cleaning realistic polarised skies observed with an instrument suffering from substantial off-axis polarisation leakage. We develop an optimal quadratic estimator in the m-mode formalism, which can be efficiently calculated using a Monte Carlo technique. This is used to assess the implications of foreground removal for power spectrum constraints, where we find that our method can clean foregrounds well below the foreground wedge, rendering only scales $k_\parallel < 0.02 h \,\mathrm{Mpc}^{-1}$ inaccessible. As this approach assumes perfect knowledge of the telescope, we perform a conservative test of how essential this is by simulating and analysing datasets with deviations about our assumed telescope. Assuming no other techniques to mitigate bias are applied, we recover unbiased power spectra provided the per-feed beam width is measured to 0.1% and the amplifier gains are known to 1% within each minute. Finally, as an example application, we extend our forecasts to a wideband 400-800 MHz cosmological observation and consider the implications for probing dark energy, finding that a medium-sized cylinder telescope improves the DETF Figure of Merit by around 70% over Planck and Stage II experiments alone.
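As a concrete illustration of the Karhunen-Loève style filtering described in the abstract, the sketch below solves the generalised eigenvalue problem between a signal covariance S and a foreground-plus-noise covariance F, and keeps only modes whose signal-to-foreground ratio exceeds a threshold. This is a toy example with made-up covariances, not the authors' pipeline (which operates on m-mode visibilities with full beam transfer matrices); the function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def kl_filter(data, S, F, threshold=1.0):
    """Project data onto KL modes with signal-to-foreground ratio above `threshold`.

    S, F : (n, n) signal and foreground(+noise) covariance matrices.
    data : (n,) or (n, k) data vector(s) in the same basis as S and F.
    """
    # Solve the generalised eigenvalue problem S v = lambda F v.
    # Eigenvalues are the per-mode signal-to-foreground power ratios.
    evals, evecs = eigh(S, F)          # eigenvalues returned in ascending order
    keep = evals > threshold           # retain modes where signal dominates
    P = evecs[:, keep].T               # rows map data -> retained KL modes
    return P @ data, evals[keep]

# Toy example: smooth (low-rank) foregrounds vs. white signal in 64 channels.
rng = np.random.default_rng(0)
n = 64
modes = np.vander(np.linspace(-1, 1, n), 3, increasing=True)  # smooth spectral modes
F = 1e6 * modes @ modes.T + 1e-3 * np.eye(n)                  # bright, smooth foregrounds
S = np.eye(n)                                                  # unit-variance 21 cm signal
d = rng.multivariate_normal(np.zeros(n), S + F)
d_clean, ratios = kl_filter(d, S, F)
print(d_clean.shape, ratios.min())
```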


... Non-blind methods have also been tested in visibility space. The beam projection plus Karhunen-Loève transform proposed in Shaw et al. (2015) and the GPR method used in Soares et al. (2021) and Mertens et al. (2018) are examples we have found in the literature. ...
... A notable cleaning method applied in visibility-space is the combined beam projection and Karhunen-Loève (KL) transform proposed and tested in Shaw et al. (2015). This test assumed a simplified version of the CHIME instrument and was conducted at higher redshifts (1.84 < z < 2.55) where polarized foregrounds are more severe. ...
... Such operations are not possible in visibility space. The closest thing one could do is separate the visibilities into m-modes, where m refers to the azimuthal index of the spherical harmonics (Shaw et al. 2015). Such freedom could be useful when cleaning foregrounds from maps, due to their dependence on line-of-sight direction and angular scale. ...
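The m-mode decomposition referred to in these excerpts amounts to a Fourier transform of each visibility over a full sidereal day. A minimal sketch, assuming a uniformly sampled timestream; the names and the toy input are illustrative only.

```python
import numpy as np

def visibility_m_modes(vis_timestream):
    """Fourier transform a full-sidereal-day visibility timestream into m-modes.

    vis_timestream : complex array of shape (n_time,), sampled uniformly in
    sidereal angle phi over one full rotation. Element m of the output is
    V_m = (1/N) * sum_t V(phi_t) * exp(-i m phi_t).
    """
    n = len(vis_timestream)
    vm = np.fft.fft(vis_timestream) / n
    m = np.fft.fftfreq(n, d=1.0 / n).astype(int)   # integer m values matching vm
    return m, vm

# Toy timestream containing only m = 3 and m = -5 components.
phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
vis = 2.0 * np.exp(1j * 3 * phi) + 0.5 * np.exp(-1j * 5 * phi)
m, vm = visibility_m_modes(vis)
print(np.round(vm[m == 3], 3), np.round(vm[m == -5], 3))  # ~2.0 and ~0.5
```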
Article
Full-text available
This paper introduces a technique called NKL, which cleans both polarized and unpolarized foregrounds from HI intensity maps by applying a Karhunen-Loève transform on the needlet coefficients. In NKL, one takes advantage of correlations not only along the line of sight, but also between different angular regions, referred to as ‘chunks’. This provides a distinct advantage over many of the standard techniques applied to map-space that one finds in the literature, which do not consider such spatial correlations. Moreover, the NKL technique does not require any priors on the nature of the foregrounds, which is important when considering polarized foregrounds. We also introduce a modified version of GNILC, referred to as MGNILC, which incorporates an approximation of the foregrounds to improve performance. The NKL and MGNILC techniques are tested on simulated maps which include polarized foregrounds. Their performance is compared to the GNILC, GMCA, ICA and PCA techniques. Two separate tests were performed. One at 1.84 < z < 2.55 and the other at 0.31 < z < 0.45. NKL was found to provide the best performance in both tests, providing a factor of 10 to 50 improvement over GNILC at k < 0.1 h Mpc−1 in the higher redshift case and k < 0.03 h Mpc−1 in the lower redshift case. However, none of the methods were found to recover the power spectrum satisfactorily at all BAO scales.
... Some semiblind methods have been proposed to make assumptions about the foregrounds and/or signal that are not so strong as assuming an ansatz of foregrounds in the polynomial fitting method, and demonstrated to alleviate the problem of signal loss in foreground subtraction. These methods include the Karhunen-Loève (KL) transform method (Shaw et al. 2014, 2015) that employs the modeled foregrounds and signal covariance matrices to form a set of modes that are ordered by their signal-to-contaminant ratios, and the generalized needlet internal linear combination method (Olivari et al. 2016) that uses the H I covariance matrix to separate the foregrounds from the signal in a wavelet (or needlet) space. While it is indeed easier to model the covariance matrix of the foregrounds and signal than the foregrounds or the signal per se, the overall magnitude of the covariance matrix may be highly biased, because the foregrounds are 4-5 orders of magnitude stronger than the cosmic 21 cm signal. ...
... In this section, we test the performance of SVP in terms of recovery errors with simulation data. We use the CORA (Shaw et al. 2014, 2015) package to generate the simulated data set, which includes the H I 21 cm signal and the mock foregrounds with two dominant components: Galactic synchrotron emission and extragalactic point sources. We assume the Planck 2013 cosmological model (Planck Collaboration et al. 2014). ...
... In particular, the SVP estimators are independent of the overall magnitude of the covariance matrix, which can be highly biased. This may be an advantage against some other semiblind methods that depend on the overall magnitude of the foreground covariance matrix, e.g., the KL transform method (Shaw et al. 2014, 2015). ...
Article
Full-text available
The principal component analysis (PCA) method and the singular value decomposition (SVD) method are widely used for foreground subtraction in 21 cm intensity mapping experiments. We show their equivalence, and point out that the condition for completely clean separation of foregrounds and cosmic 21 cm signal using the PCA/SVD is unrealistic. We propose a PCA-based foreground subtraction method, dubbed the “singular vector projection (SVP)” method, which exploits a priori information of the left and/or right singular vectors of the foregrounds. We demonstrate with simulation tests that this new, semiblind method can reduce the error of the recovered 21 cm signal by orders of magnitude, even if only the left and/or right singular vectors in the largest few modes are exploited. The SVP estimators provide a new, effective approach for 21 cm observations to remove foregrounds and uncover the physics in the cosmic 21 cm signal.
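A minimal sketch in the spirit of the PCA/SVD and SVP discussion above: remove the leading left singular vectors of a foreground template from a frequency-by-pixel data cube. This is a toy illustration under simplified assumptions, not the SVP estimator of the paper; all names and numbers are placeholders.

```python
import numpy as np

def project_out_modes(data, fg_template, n_modes=3):
    """Remove the leading left singular vectors of a foreground template from data.

    data, fg_template : (n_freq, n_pix) maps. Returns the cleaned data cube.
    """
    # Left singular vectors of the template describe its spectral (frequency) modes.
    U, s, Vt = np.linalg.svd(fg_template, full_matrices=False)
    U = U[:, :n_modes]
    # Project the data onto the complement of the leading foreground modes.
    return data - U @ (U.T @ data)

# Toy cube: a rank-2 smooth foreground plus a weak Gaussian "signal".
rng = np.random.default_rng(1)
freqs = np.linspace(0.4, 0.8, 32)[:, None]                    # GHz, 32 channels
fg = 1e3 * freqs**-2.7 @ rng.random((1, 500)) + 50 * freqs**-2.1 @ rng.random((1, 500))
signal = rng.normal(0, 1.0, (32, 500))
cleaned = project_out_modes(fg + signal, fg, n_modes=2)
print(np.std(cleaned), np.std(signal))                        # residual close to signal rms
```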
... The foreground emission from the Galaxy and astrophysical sources is orders of magnitude larger than the desired cosmological 21cm signal [20,77-80]. This effect contaminates the long-wavelength radial Fourier modes, so that modes with k∥ < k∥,min are inaccessible [77,78,81-85]. ...
... The foreground emission from the Galaxy and astrophysical sources is orders of magnitude larger than the desired cosmological 21cm signal [20,77-80]. This effect contaminates the long-wavelength radial Fourier modes, so that modes with k∥ < k∥,min are inaccessible [77,78,81-85]. The separation of the signal from the foreground emission is a great challenge. ...
... However, the radio foregrounds are mainly very spectrally smooth free-free and synchrotron emission from our Galaxy and other unresolved sources. This characteristic makes possible the separation from the cosmological signal, which varies along the line-of-sight due to the underlying density field, without significant losses up to some small value of k∥,min [77,78,84,85]. ...
Article
Full-text available
The 21cm emission of neutral hydrogen is a potential probe of the matter distribution in the Universe after reionisation. Cosmological surveys of this line intensity will be conducted in the coming years by the SKAO and HIRAX experiments, complementary to upcoming galaxy surveys. We present the first forecasts of the cosmological constraints from the combination of the 21cm power spectrum and bispectrum. Fisher forecasts are computed for the constraining power of these surveys on cosmological parameters, the BAO distance functions and the growth function. We also estimate the constraining power on dynamical dark energy and modified gravity. Finally we investigate the constraints on the 21cm clustering bias, up to second order. We take into account the effects on the 21cm correlators of the telescope beam, instrumental noise and foreground avoidance, as well as the Alcock-Paczynski effect and the effects of theoretical errors in the modelling of the correlators. We find that, together with Planck priors, and marginalising over clustering bias and nuisance parameters, HIRAX achieves sub-percent precision on the ΛCDM parameters, with SKAO delivering slightly lower precision. The modified gravity parameter γ is constrained at 1% (HIRAX) and 5% (SKAO). For the dark energy parameters w_0, w_a, HIRAX delivers percent-level precision while SKAO constraints are weaker. HIRAX achieves sub-percent precision on the BAO distance functions D_A, H, while SKAO reaches 1-2% for 0.6 ≲ z ≲ 1. The growth rate f is constrained at a few-percent level for the whole redshift range of HIRAX and for 0.6 ≲ z ≲ 1 by SKAO. The different performances arise mainly since HIRAX is a packed interferometer that is optimised for BAO measurements, while SKAO is not optimised for interferometer cosmology and operates better in single-dish mode, where the telescope beam limits access to the smaller scales that are covered by an interferometer.
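Forecasts of this kind rest on the Fisher-matrix formalism. The sketch below computes a two-parameter Gaussian Fisher matrix from binned band powers; the toy power spectrum, mode counts, and parameter choices are placeholders and bear no relation to the HIRAX or SKAO survey specifications.

```python
import numpy as np

def fisher_matrix(dP_dtheta, sigma_P):
    """Gaussian Fisher matrix for band powers.

    dP_dtheta : (n_params, n_bins) derivatives of P(k) w.r.t. the parameters.
    sigma_P   : (n_bins,) 1-sigma band-power errors.
    """
    return np.einsum('ib,jb,b->ij', dP_dtheta, dP_dtheta, 1.0 / sigma_P**2)

# Toy model: P(k) = A * k**n, fiducial A = 1, n = -1.5, 20 k-bins.
k = np.logspace(-2, 0, 20)
A, n = 1.0, -1.5
P = A * k**n
n_modes = 1e4 * k**2                         # placeholder mode counts per bin
sigma_P = P * np.sqrt(2.0 / n_modes)         # cosmic-variance-limited errors
dP = np.vstack([k**n, P * np.log(k)])        # dP/dA, dP/dn
F = fisher_matrix(dP, sigma_P)
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalised 1-sigma errors on (A, n)
print(errors)
```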
... Some semi-blind methods have been proposed to make assumptions about the foregrounds and/or signal that are not so strong as assuming an ansatz of foregrounds in the polynomial fitting method, and demonstrated to alleviate the problem of signal loss in foreground subtraction. These methods include the Karhunen-Loève (KL) transform method (Shaw et al. 2014, 2015) that employs the modeled foregrounds and signal covariance matrices to form a set of modes that are ordered by their signal-to-contaminant ratios, and the Generalized Needlet Internal Linear Combination (GNILC) method (Olivari et al. 2016) that uses the H I covariance matrix to separate the foregrounds from the signal in a wavelet (or needlet) space. While it is indeed easier to model the covariance matrix of the foregrounds and signal than the foregrounds or the signal per se, the overall magnitude of the covariance matrix may be highly biased, because the foregrounds are four to five orders of magnitude stronger than the cosmic 21 cm signal. ...
... In this section, we test the performance of SVP in terms of recovery errors with simulation data. We use the CORA (Shaw et al. 2014, 2015) package to generate the simulated dataset, which includes the H I 21 cm signal and the mock foregrounds with two dominant components: Galactic synchrotron emission and extragalactic point sources. We assume the Planck 2013 cosmological model (Planck Collaboration et al. 2014). ...
... We also note that the SVP estimators are independent of the overall magnitude of the covariance matrix, which can be highly biased. This may be an advantage against some other semi-blind methods that depend on the overall magnitude of the foreground covariance matrix, e.g. the Karhunen-Loève (KL) transform method (Shaw et al. 2014, 2015). ...
Preprint
The Principal Component Analysis (PCA) method and the Singular Value Decomposition (SVD) method are widely used for foreground subtraction in 21 cm intensity mapping experiments. We show their equivalence, and point out that the condition for completely clean separation of foregrounds and cosmic 21 cm signal using the PCA/SVD is unrealistic. We propose a PCA-based foreground subtraction method, dubbed "Singular Vector Projection (SVP)" method, which exploits a priori information of the left and/or right singular vectors of the foregrounds. We demonstrate with simulation tests that this new, semi-blind method can reduce the error of the recovered 21 cm signal by orders of magnitude, even if only the left and/or right singular vectors in the largest few modes are exploited. The SVP estimators provide a new, effective approach for 21 cm observations to remove foregrounds and uncover the physics in the cosmic 21 cm signal.
... Radio foregrounds are expected to affect 21-cm observations at least on scales k_∥ ≲ k_FG ∼ 0.02 h Mpc−1 (Shaw et al. 2015), which coincides with the physical scale corresponding to a photo-z width σ_z ∼ 0.05 at redshift z ∼ 0.8. Thus, foreground contamination can present a significant challenge to the clustering-redshifts method using two-point correlators involving 21 cm. ...
... 5 per cent for the fiducial case used in this paper: σ_z,0 = 0.03 (an optimistic estimate of the photometric redshift accuracy achievable by LSST), and k_FG = 0.02 h Mpc−1 (Shaw et al. 2015). This fiducial case is shown as a black star in the figure. ...
... For the IM surveys, the foreground cutoff scale is fixed at k_FG = 0.02 h Mpc−1 (Shaw et al. 2015), and we always assume k_max = 0.3 h Mpc−1. ...
Article
The cross-correlation between 21-cm intensity mapping experiments and photometric surveys of galaxies (or any other cosmological tracer with a broad radial kernel) is severely degraded by the loss of long-wavelength radial modes due to Galactic foreground contamination. Higher-order correlators are able to restore some of these modes due to the non-linear coupling between them and the local small-scale clustering induced by gravitational collapse. We explore the possibility of recovering information from the bispectrum between a photometric galaxy sample and an intensity mapping experiment, in the context of the clustering-redshifts technique. We demonstrate that the bispectrum is able to calibrate the redshift distribution of the photometric sample to the required accuracy of future experiments such as the Rubin Observatory, using future single-dish and interferometric 21-cm observations, in situations where the two-point function is not able to do so due to foreground contamination. We also show how this calibration is affected by the photometric redshift width σ_z,0 and maximum scale k_max. We find that it is important to reach scales k ≳ 0.3 h Mpc−1, with the constraints saturating at around k ∼ 1 h Mpc−1 for next-generation experiments.
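To get a feel for the k_FG = 0.02 h Mpc−1 cutoff quoted in the excerpts above, the sketch below estimates by Monte Carlo what fraction of Fourier modes inside |k| < k_max survives a simple radial cut |k_∥| > k_FG. This is a rough geometric estimate only; it ignores the foreground wedge, noise weighting, and the survey window.

```python
import numpy as np

def surviving_fraction(k_fg=0.02, k_max=0.3, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the fraction of modes with |k| < k_max that also
    satisfy |k_parallel| > k_fg, i.e. survive a simple radial foreground cut.
    Units are h/Mpc; the cut values follow the excerpts above."""
    rng = np.random.default_rng(seed)
    k = rng.uniform(-k_max, k_max, size=(n_samples, 3))
    kmag = np.linalg.norm(k, axis=1)
    in_ball = kmag < k_max
    survive = in_ball & (np.abs(k[:, 2]) > k_fg)
    return survive.sum() / in_ball.sum()

print(f"~{100 * surviving_fraction():.1f}% of modes survive the k_fg cut")
```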
... The main challenge associated with 21 cm intensity mapping is the very bright synchrotron foreground emission from the Milky Way and from other nearby galaxies (e.g., Santos et al. 2005;Liu & Tegmark 2012). We are investigating several approaches to foreground filtering and subtraction, which rely in various ways on recognizing the difference between the smooth Galactic spectrum and the chaotic BAO spectrum along each line of sight (e.g., Shaw et al. 2015). Separately, we note here that CHIME provides a detailed and high signal-to-noise ratio data set for probing the interstellar medium. ...
... To do so, it is crucial to have precise knowledge of the instrumental beam response. Estimates by Shaw et al. (2015) indicate that this response must be characterized to roughly a part in 10⁴ in power units, and this has motivated the pursuit of a number of parallel strategies for beam measurement and modeling, as well as efforts to quantify the required precision in more detail. In this section, we first describe how CHIME's instrument design determines the general features of the beam response, and then we present the current status of our ongoing work to characterize this response. ...
... We use end-to-end simulations to determine our stability requirements. Simulation of a CHIME-sized telescope is challenging owing to computer resource limitations; therefore, we have performed simulations of a scaled-down instrument (with roughly one-quarter of CHIME's collecting area) to investigate these requirements, examining the anticipated accuracy of the 21 cm power spectrum measured after the application of the Karhunen-Loève foreground filter described in Shaw et al. (2015). This work found the requirement for fractional variations in the complex gain to be less than 1%, which translates into phase errors smaller than 0.007 rad and amplitude errors smaller than 0.7%. ...
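The ~1% gain stability requirement quoted above (amplitude errors below 0.7%, phase errors below 0.007 rad) can be explored with a toy visibility simulation. The sketch below applies independent per-feed complex gain errors at roughly that level to a Hermitian visibility matrix; it illustrates the error model only and is not the CHIME simulation pipeline.

```python
import numpy as np

def perturb_visibilities(vis, amp_rms=0.007, phase_rms=0.007, seed=0):
    """Apply independent per-feed complex gain errors to a visibility matrix.

    vis : (n_feed, n_feed) visibility matrix V_ij = g_i g_j^* T_ij.
    amp_rms, phase_rms : fractional amplitude and phase (rad) scatter per feed.
    """
    rng = np.random.default_rng(seed)
    n = vis.shape[0]
    g = (1.0 + amp_rms * rng.standard_normal(n)) * np.exp(1j * phase_rms * rng.standard_normal(n))
    return g[:, None] * vis * np.conj(g)[None, :]

# Toy Hermitian visibility matrix for 16 feeds.
rng = np.random.default_rng(1)
sky = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
vis = sky @ sky.conj().T
vis_err = perturb_visibilities(vis)
print(np.max(np.abs(vis_err - vis) / np.abs(vis)))   # typical fractional error ~1-2%
```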
Article
Full-text available
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a drift scan radio telescope operating across the 400–800 MHz band. CHIME is located at the Dominion Radio Astrophysical Observatory near Penticton, BC, Canada. The instrument is designed to map neutral hydrogen over the redshift range 0.8–2.5 to constrain the expansion history of the universe. This goal drives the design features of the instrument. CHIME consists of four parallel cylindrical reflectors, oriented north–south, each 100 m × 20 m and outfitted with a 256-element dual-polarization linear feed array. CHIME observes a two-degree-wide stripe covering the entire meridian at any given moment, observing three-quarters of the sky every day owing to Earth’s rotation. An FX correlator utilizes field-programmable gate arrays and graphics processing units to digitize and correlate the signals, with different correlation products generated for cosmological, fast radio burst, pulsar, very long baseline interferometry, and 21 cm absorber back ends. For the cosmology back end, the N_feed^2 correlation matrix is formed for 1024 frequency channels across the band every 31 ms. A data receiver system applies calibration and flagging and, for our primary cosmological data product, stacks redundant baselines and integrates for 10 s. We present an overview of the instrument, its performance metrics based on the first 3 yr of science data, and we describe the current progress in characterizing CHIME’s primary beam response. We also present maps of the sky derived from CHIME data; we are using versions of these maps for a cosmological stacking analysis, as well as for investigation of Galactic foregrounds.
... These foregrounds have a smooth spectral shape and hence can in principle be distinguished from the 21 cm emission from large scale structure [82,94-96]. However, any frequency dependence in the instrument response, for example from the instrument beam or gain fluctuations, can complicate our ability to differentiate between the smooth foreground and the essentially Gaussian cosmological signal [97,98]. Removing these foregrounds drives design choices including element uniformity, array redundancy, assessment of instrument stability and stabilization methods; provides opportunities for new calibration techniques in both beam and gain measurements; and requires analysis and simulations to fold in calibration measurements and assess their impact on cosmological parameter estimation. ...
... Work in 21 cm calibration focuses on instrument gain and beam measurement for the goal of removing astrophysical foreground power. Simulations for CHIME have provided a scale to the problem: the instrument response on the sky ('beam') must be understood to 0.1%, and the time-dependent response of the instrument ('gain') must be calibrated to 1% [97,98]. Current instruments rely primarily on sky signals for both types of calibration, however this has not yet been demonstrated to adequately remove foregrounds with these instruments. ...
... Each antenna has a characteristic response to an input sky signal, which varies with both time and frequency, known as the instrument gain. The frequency-dependent gain for each input must be known to ∼ 1% on time scales between the integration period (< 5 s scales) and a few hours (depending on the frequency of on-sky radio calibrator sources) [97]. The two primary techniques for achieving this are to design an instrument which is inherently stable enough to meet this specification or to design a calibration plan which can ensure we meet this specification, or (ideally) both. ...
Preprint
The 21cm line refers to a forbidden transition in neutral hydrogen associated with alignment of spins of the proton and electron. It is a very low energy transition that is emitted whenever there is neutral hydrogen in the Universe. Since baryons are mostly (~75%) hydrogen, one can in principle detect this emission throughout much of the history of the Universe. The dominant emission mechanism is different across cosmic ages. Before the photons decouple from matter, hydrogen is in an ionized state and does not emit in 21cm. After recombination and during the Dark Ages, at z ~ 30-1000, the 21cm emission is associated with density fluctuations in the neutral hydrogen medium. After the first stars turn on and galaxies begin to form, the 21cm emission traces bubbles of ionized hydrogen in the sea of the neutral medium. This epoch, spanning z ~ 6-30, is often referred to as cosmic dawn and the Epoch of Reionization (EoR). At redshifts below z<6, the intergalactic medium is largely ionized, but pockets of self-shielded neutral gas form in dense galactic environments and 21cm emission traces the distribution of galaxies. The vastly different emission mechanisms allow us to probe very different physics at different redshifts, corresponding to different observational frequencies. The instrumental challenges, namely building very sensitive and exquisitely calibrated radio telescopes, however, share many commonalities across frequency bands. The potential of the 21cm probe has been recognized by the Decadal Survey of Astronomy & Astrophysics, whose Panel on Cosmology identified the Dark Ages as its sole discovery area. We argue that HEP should recognize the potential of 21cm as a probe of fundamental physics across many axes and invest in the technology development that will enable full exploitation of this rich technique.
... In order to characterize the effect that the ripples in the beam have on our final stacking result, we repeat our analysis with a "control" beam that has the same large-scale properties as the default beam model, but without the small-scale structure in the frequency and declination direction. The control beam is a modified version of the analytical beam model proposed in Shaw et al. (2015) (henceforth, S15) for cylindrical telescopes. To briefly summarize the S15 model, the beam pattern of the antenna (henceforth, "base" beam) is assumed to be that of a horizontal dipole mounted a distance λ/4 over a conducting ground plane. ...
... The amplitude of these peaks has been reduced by deconvolving the model for the primary beam; however, they are still significant compared to our expectation for the noise. We are actively working on improving the accuracy of our beam model and implementing a deconvolution procedure that better addresses the off-axis response (see Shaw et al. (2015) for one example) to further reduce the amplitude of these peaks. The "U" shaped tracks in the delay power spectrum correspond to the brightest point sources moving through the far sidelobes. ...
... We make use of the m-mode formalism (Shaw et al. 2015) to translate simulated 21 cm maps into visibilities. In this formalism, the spherical harmonic coefficients $a^{P}_{\ell m}(\nu)$ of sky maps for Stokes parameter $P \in \{I, Q, U, V\}$ are related to the sidereal-time Fourier transform of the visibility timestream, $\tilde{V}^{p}_{xy;m}(\nu)$, via multiplication by a beam transfer matrix $B^{p,P}_{xy;\ell m}(\nu)$: $\tilde{V}^{p}_{xy;m}(\nu) = \sum_{\ell, P} B^{p,P}_{xy;\ell m}(\nu)\, a^{P}_{\ell m}(\nu)$. ...
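The beam-transfer relation described in the excerpt above reduces, at each frequency, to a linear map from sky harmonics to visibility m-modes. A minimal sketch with placeholder array shapes (the real transfer matrices are computed from the primary beam and baseline geometry, not drawn at random):

```python
import numpy as np

def mmode_visibilities(B, alm):
    """Apply beam transfer matrices to sky harmonics to get visibility m-modes.

    B   : (n_m, n_baseline, n_pol_sky, n_ell) beam transfer matrices B^{p,P}_{xy;lm}.
    alm : (n_pol_sky, n_ell, n_m) spherical harmonic coefficients a^P_{lm}.
    Returns V_m with shape (n_m, n_baseline).
    """
    # V^p_{xy;m} = sum over l and P of B^{p,P}_{xy;lm} a^P_{lm}
    return np.einsum('mbPl,Plm->mb', B, alm)

# Toy dimensions: 5 m-modes, 3 baselines, 4 Stokes parameters (I, Q, U, V), 10 ells.
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 3, 4, 10)) + 1j * rng.standard_normal((5, 3, 4, 10))
alm = rng.standard_normal((4, 10, 5)) + 1j * rng.standard_normal((4, 10, 5))
print(mmode_visibilities(B, alm).shape)   # (5, 3)
```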
Preprint
Full-text available
We present a detection of 21-cm emission from large-scale structure (LSS) between redshift 0.78 and 1.43 made with the Canadian Hydrogen Intensity Mapping Experiment (CHIME). Radio observations acquired over 102 nights are used to construct maps which are foreground filtered and stacked on the angular and spectral locations of luminous red galaxies (LRG), emission line galaxies (ELG), and quasars (QSO) from the eBOSS clustering catalogs. We find decisive evidence for a detection when stacking on all three tracers of LSS, with the logarithm of the Bayes Factor equal to 18.9 (LRG), 10.8 (ELG), and 56.3 (QSO). An alternative frequentist interpretation, based on the likelihood-ratio test, yields a detection significance of $7.1\sigma$ (LRG), $5.7\sigma$ (ELG), and $11.1\sigma$ (QSO). These are the first 21-cm intensity mapping measurements made with an interferometer. We constrain the effective clustering amplitude of neutral hydrogen (HI), defined as $\mathcal{A}_{\rm HI}\equiv 10^{3}\,\Omega_\mathrm{HI}\left(b_\mathrm{HI}+\langle\,f\mu^{2}\rangle\right)$, where $\Omega_\mathrm{HI}$ is the cosmic abundance of HI, $b_\mathrm{HI}$ is the linear bias of HI, and $\langle\,f\mu^{2}\rangle=0.552$ encodes the effect of redshift-space distortions at linear order. We find $\mathcal{A}_\mathrm{HI}=1.51^{+3.60}_{-0.97}$ for LRGs $(z=0.84)$, $\mathcal{A}_\mathrm{HI}=6.76^{+9.04}_{-3.79}$ for ELGs $(z=0.96)$, and $\mathcal{A}_\mathrm{HI}=1.68^{+1.10}_{-0.67}$ for QSOs $(z=1.20)$, with constraints limited by modeling uncertainties at nonlinear scales. We are also sensitive to bias in the spectroscopic redshifts of each tracer, and find a non-zero bias $\Delta\,v= -66 \pm 20 \mathrm{km/s}$ for the QSOs. We split the QSO catalog into three redshift bins and have a decisive detection in each, with the upper bin at $z=1.30$ producing the highest redshift 21-cm intensity mapping measurement thus far.
... We are investigating several approaches to foreground filtering and subtraction, that rely in various ways on recognizing the difference between the smooth Galactic spectrum and the chaotic BAO spectrum along each line of sight (e.g. Shaw et al. 2015). Separately, we note here that CHIME provides a detailed and high signal-to-noise ratio dataset for probing the interstellar medium. ...
... To do so, it is crucial to have precise knowledge of the instrumental beam response. Estimates by Shaw et al. (2015) indicate that this response must be characterized to roughly a part in 10⁴ in power units, and this has motivated the pursuit of a number of parallel strategies for beam measurement and modelling, as well as efforts to quantify the required precision in more detail. In this section, we first describe how CHIME's instrument design determines the general features of the beam response, and then present the current status of our ongoing work to characterize this response. ...
... We use end-to-end simulations to determine our stability requirements. Simulation of a CHIME-sized telescope is challenging due to computer resource limitations; therefore, we have performed simulations of a scaled-down instrument (with roughly 1/4 of CHIME's collecting area) to investigate these requirements, examining the anticipated accuracy of the 21 cm power spectrum measured after the application of the Karhunen-Loève foreground filter described in Shaw et al. (2015). This work found the requirement for fractional variations in the complex gain to be less than 1 %, which translates into phase errors smaller than 0.007 rad and amplitude errors smaller than 0.7%. ...
Preprint
Full-text available
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a drift scan radio telescope operating across the 400-800 MHz band. CHIME is located at the Dominion Radio Astrophysical Observatory near Penticton, BC Canada. The instrument is designed to map neutral hydrogen over the redshift range 0.8 to 2.5 to constrain the expansion history of the Universe. This goal drives the design features of the instrument. CHIME consists of four parallel cylindrical reflectors, oriented north-south, each 100 m $\times$ 20 m and outfitted with a 256 element dual-polarization linear feed array. CHIME observes a two degree wide stripe covering the entire meridian at any given moment, observing 3/4 of the sky every day due to Earth rotation. An FX correlator utilizes FPGAs and GPUs to digitize and correlate the signals, with different correlation products generated for cosmological, fast radio burst, pulsar, VLBI, and 21 cm absorber backends. For the cosmology backend, the $N_\mathrm{feed}^2$ correlation matrix is formed for 1024 frequency channels across the band every 31 ms. A data receiver system applies calibration and flagging and, for our primary cosmological data product, stacks redundant baselines and integrates for 10 s. We present an overview of the instrument, its performance metrics based on the first three years of science data, and we describe the current progress in characterizing CHIME's primary beam response. We also present maps of the sky derived from CHIME data; we are using versions of these maps for a cosmological stacking analysis as well as for investigation of Galactic foregrounds.
... Foregrounds can also be removed in a "visibility-based" way, where the map-making step is skipped and foregrounds are removed directly from visibilities. An important visibility-based method is the Karhunen-Loève transform approach proposed in Shaw et al. (2015) [6]. ...
... One recent paper found these techniques performing relatively poorly when presented with more realistic data [8]. On the other hand, the visibility-based method proposed in Shaw et al. works extremely well, but only when the beam is precisely calibrated [6]. ...
Article
Full-text available
This paper describes work done to develop robust methods for foreground removal in hydrogen intensity mapping (HIM) experiments. In HIM, one measures the 21cm line from neutral hydrogen (HI), which in turn provides information about the distribution of matter in the universe. However, foreground emission from the Milky Way and extragalactic sources must be removed in order to obtain useful results. Methods developed for removing these foregrounds either have not been thoroughly tested on realistic data, or perform poorly when the instrument is not precisely calibrated. The goal of the work here is to develop a method to remove foregrounds that works on realistic, imperfectly calibrated data.
... where N(z_1) denotes the mean number of galaxies per redshift bin and steradian. The HI-HI correlation is affected by shot noise and interferometer noise; however, it has been shown in [62,63] that the former is always subdominant with respect to the latter. In [33], we derived an expression for interferometer noise for HIRAX, based on an analytical expression derived in [64], adapted to the outcome of numerical simulations for HIRAX [62,63]. ...
... The HI-HI correlation is affected by shot noise and interferometer noise; however, it has been shown in [62,63] that the former is always subdominant with respect to the latter. In [33], we derived an expression for interferometer noise for HIRAX, based on an analytical expression derived in [64], adapted to the outcome of numerical simulations for HIRAX [62,63]. Finally, the cross-correlation between the galaxy and intensity mapping is not affected by shot noise or interferometer noise. ...
Preprint
We propose a novel method to measure the $E_G$ statistic from clustering alone. The $E_G$ statistic provides an elegant way of testing the consistency of General Relativity by comparing the geometry of the Universe, probed through gravitational lensing, with the motion of galaxies in that geometry. Current $E_G$ estimators combine galaxy clustering with gravitational lensing, measured either from cosmic shear or from CMB lensing. In this paper, we construct a novel estimator for $E_G$, using only clustering information obtained from two tracers of the large-scale structure: intensity mapping and galaxy clustering. In this estimator, both the velocity of galaxies and gravitational lensing are measured through their impact on clustering. We show that with this estimator, we can suppress the contaminations that affect other $E_G$ estimators and consequently test the validity of General Relativity robustly. We forecast that with the coming generation of surveys like HIRAX and $\textit{Euclid}$, we will measure $E_G$ with a precision of up to 7% (3.9% for the more futuristic SKA2).
... We will examine several different methods for numerical computation of the visibility function and provide analysis and discussion that are useful for efficient calculations with some sense for the error in the result. Of particular importance is the harmonic method which was first developed for the 21 cm field by Shaw et al. (2014) and Shaw et al. (2015a). ...
... The basic picture developed here is the same but the analysis is expanded to examine some details that are important for practical application. Additionally, Shaw et al. (2015a) incorrectly stated that the polarization components could be described in terms of scalar spherical harmonics. Here the correct decomposition in terms of spin-weighted spherical harmonics of spin-weight ±2 is shown. ...
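For reference, the spin-weight ±2 decomposition mentioned above can be written, in a standard CMB-style convention (not necessarily the exact notation used in the thesis), as

$$(Q \pm iU)(\hat{n}, \nu) = \sum_{\ell m} {}_{\pm 2}a_{\ell m}(\nu)\; {}_{\pm 2}Y_{\ell m}(\hat{n}), \qquad {}_{\pm 2}a_{\ell m} = -\left(E_{\ell m} \pm i B_{\ell m}\right),$$

while the Stokes I and V components are spin-0 fields and expand in the ordinary scalar harmonics $Y_{\ell m}$.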
Article
The redshifted 21 cm line promises to provide a wealth of information about the evolution of our universe but remains as yet undetected. The general theme of this thesis is developing increasingly realistic models of the raw data we collect from a radio telescope. This is important because at the end of the day extracting the cosmological signal from the data will be accomplished by achieving a level of understanding of all the possible alternative sources that might mimic the cosmological signal, to a degree that we can confidently reject those alternatives as causes of our detection. The work presented in this thesis has been done in the context of working on the HERA experiment which aims to make the first measurements of spatial fluctuations in the emission from neutral hydrogen. In this thesis I emphasized aspects of the visibility function that are important for efficient and realistic visibility simulations including full account of polarization effects, in particular using a harmonic parameterization of the integrand. I assessed the effect of potential ionospheric attenuation on the suppression of polarization contamination in 21 cm power spectrum measurements using visibility simulations based on historical ionospheric plasma density data. I showed how we can use closed-form calculations of the cross-frequency angular power spectrum on the sky to generate simple mock cosmological signal simulations that are useful for validating data analysis methods. I showed how the window functions associated with a 21 cm power spectrum estimate can be approximated by simple forms that are much cheaper to evaluate than the general definition. Finally, I produced a new Southern Sky Model that combines the best available diffuse radio emission surveys that cover HERA's field of view and observing bandwidth with a point source catalog without double counting flux.
... It is difficult to generalize the beam knowledge requirements to meet this objective, since any statement on beam accuracy depends on both the details of the analysis and the form of the beam errors. To give one example, the analysis presented in Shaw et al. (2015) using the m-mode formalism and a Karhunen-Loève (KL) transform for foreground removal concluded that the beamwidth had to be known to an accuracy of 10⁻³ in order to avoid foreground contamination of the 21 cm power spectrum. For comparison, we found in Section 3.1 that the east-west width of the solar beam measurements was discrepant with other measurements at the level of a few percent, or about one and a half orders of magnitude worse than the desired accuracy for this particular analysis. ...
... However, there are reasons for optimism. First, the modeled beamwidth errors in Shaw et al. (2015) were drawn from a random distribution, resulting in power leakage from low-k, foreground-dominated modes to higher k signal modes. In contrast, the errors in this measurement are highly correlated in frequency, affecting the high-k modes to a lesser degree. ...
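The distinction drawn above between random and frequency-correlated beam-width errors can be illustrated with a toy Gaussian beam. In the sketch below, independent per-channel width errors produce spurious frequency structure in the beam, whereas a single common offset does not; the beam model, widths, and error levels are placeholders, not CHIME values.

```python
import numpy as np

def gaussian_beam(theta, fwhm):
    """Circular Gaussian beam response at angle theta (both in degrees)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (theta / sigma) ** 2)

def perturbed_beams(theta, fwhm0, frac_rms=1e-3, n_freq=64, correlated=False, seed=0):
    """Per-frequency beams with fractional width errors of rms `frac_rms`.

    correlated=True applies one common offset to all channels (frequency-
    correlated error); False draws an independent offset per channel.
    """
    rng = np.random.default_rng(seed)
    if correlated:
        d = np.full(n_freq, frac_rms * rng.standard_normal())
    else:
        d = frac_rms * rng.standard_normal(n_freq)
    return np.array([gaussian_beam(theta, fwhm0 * (1.0 + di)) for di in d])

theta = np.linspace(0.0, 5.0, 200)                  # degrees from beam centre
b_true = gaussian_beam(theta, 2.0)
b_rand = perturbed_beams(theta, 2.0, correlated=False)
b_corr = perturbed_beams(theta, 2.0, correlated=True)
# Random per-channel errors introduce spurious frequency structure; a common
# (frequency-correlated) error does not.
print(np.std(b_rand - b_true, axis=0).max(), np.std(b_corr - b_true, axis=0).max())
```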
Article
Full-text available
We present a beam pattern measurement of the Canadian Hydrogen Intensity Mapping Experiment (CHIME) made using the Sun as a calibration source. As CHIME is a pure drift-scan instrument, we rely on the seasonal north–south motion of the Sun to probe the beam at different elevations. This semiannual range in elevation, combined with the radio brightness of the Sun, enables a beam measurement that spans ∼7200 square degrees on the sky without the need to move the telescope. We take advantage of observations made near solar minimum to minimize the impact of solar variability, which is observed to be <10% in intensity over the observation period. The resulting data set is highly complementary to other CHIME beam measurements—both in terms of angular coverage and systematics—and plays an important role in the ongoing program to characterize the CHIME primary beam.
... The foreground emission from the Galaxy and astrophysical sources is orders of magnitude larger than the desired cosmological 21cm signal [20,77-80]. This effect contaminates the long-wavelength radial Fourier modes, so that modes with k∥ < k∥,min are inaccessible. ...
... Reconstruction techniques have been developed, where the long radial modes, lost to the foregrounds, can be recovered from the knowledge of short modes. In the context of HI intensity mapping, this has been applied in [81-83], while in [78,84-90] it is shown that by using the forward model reconstruction framework, modes up to k ∼ 0.01 h/Mpc can be almost perfectly recovered. ...
Preprint
Full-text available
The 21cm emission of neutral hydrogen is a potential probe of the matter distribution in the Universe after reionisation. Cosmological surveys of this line intensity will be conducted in the coming years by the SKAO and HIRAX experiments, complementary to upcoming galaxy surveys. We present the first forecasts of the cosmological constraints from the combination of the 21cm power spectrum and bispectrum. Fisher forecasts are computed for the constraining power of these surveys on cosmological parameters, the BAO distance functions and the growth function. We also estimate the constraining power on dynamical dark energy and modified gravity. Finally we investigate the constraints on the 21cm clustering bias, up to second order. We consider the effects on the 21cm correlators of the telescope beam, instrumental noise, foreground avoidance, the Alcock-Paczynski effect and theoretical errors in the modelling of the correlators. Adding Planck priors, and marginalising over nuisance parameters, HIRAX achieves sub-percent precision on the $\Lambda$CDM parameters, with SKAO delivering slightly lower precision. The modified gravity parameter $\gamma$ is constrained at 1\% (HIRAX) and 5\% (SKAO). For the dark energy parameters $w_0,w_a$, HIRAX delivers percent-level precision while SKAO constraints are weaker. HIRAX achieves sub-percent precision on the BAO distance functions $D_A,H$, while SKAO reaches $1-2\%$ for $0.6\lesssim z\lesssim 1$. The growth rate $f$ is constrained at a few-percent level for the whole redshift range of HIRAX and for $0.6\lesssim z\lesssim 1$ by SKAO. The different performances arise mainly since HIRAX is a packed interferometer that is optimised for BAO measurements, while SKAO is not optimised for interferometer cosmology and operates better in single-dish mode, where the telescope beam limits access to the smaller scales that are covered by an interferometer.
... Eastwood et al. (2018) started to address these issues by generating low-frequency high resolution (around ∼15 arcmin) northern sky maps using the Owens Valley Radio Observatory Long Wavelength Array (OVRO-LWA) (Kassim et al. 2005) interferometer array using a whole different imaging method altogether. Eastwood et al. (2018) employed a method known as the Tikhonov-regularised m-mode formalism; an adaptation of the spherical harmonic transit interferometric imaging method suggested by Shaw et al. (2014, 2015). In contrast to traditional radio interferometry, the m-mode formalism no longer utilises snapshot visibilities to image the sky, but uses components of timescale variations within these visibilities instead. ...
... The topic of spherical harmonic transit interferometry has been extensively covered by Shaw et al. (2014); Shaw et al. (2015) and Eastwood et al. (2018). As such, we will only briefly review the key points concerning the m-mode formalism and solving for the sky. ...
Article
Full-text available
One of the major priorities of international radio astronomy is to study the early universe through the detection of the 21 cm HI line from the epoch of reionisation (EoR). Due to the weak nature of the 21 cm signal, an important part in the detection of the EoR is removing contaminating foregrounds from our observations as they are multiple orders of magnitude brighter. In order to achieve this, sky maps spanning a wide range of frequencies and angular scales are required for calibration and foreground subtraction. Complementing the existing low-frequency sky maps, we have constructed a Southern Sky map through spherical harmonic transit interferometry utilising the Engineering Development Array 2 (EDA2), a Square Kilometre Array (SKA) low-frequency array prototype system. We use the m-mode formalism to create an all-sky map at 159 MHz with an angular resolution of 3 degrees, with data from the EDA2 providing information over +60 degrees to –90 degrees in declination. We also introduce a new method for visualising and quantifying how the baseline distribution of an interferometer maps to the spherical harmonics and discuss how prior information can be used to constrain spherical harmonic components that the interferometer is not sensitive to.
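The Tikhonov-regularised m-mode map-making used in this kind of analysis solves, for each m, a damped least-squares problem for the sky harmonics. A minimal sketch with placeholder dimensions (the actual transfer matrices come from the instrument's beam and baseline layout, not random numbers):

```python
import numpy as np

def tikhonov_mmode_solve(B, v, eps=1e-2):
    """Tikhonov-regularised least-squares estimate of sky harmonics from m-modes.

    B : (n_baseline, n_ell) beam transfer matrix for a single m.
    v : (n_baseline,) measured visibility m-modes for that m.
    Solves  a_hat = argmin ||B a - v||^2 + eps ||a||^2.
    """
    n_ell = B.shape[1]
    lhs = B.conj().T @ B + eps * np.eye(n_ell)
    rhs = B.conj().T @ v
    return np.linalg.solve(lhs, rhs)

# Toy problem: 30 baselines, 50 ell modes (under-determined without regularisation).
rng = np.random.default_rng(3)
B = rng.standard_normal((30, 50)) + 1j * rng.standard_normal((30, 50))
a_true = rng.standard_normal(50)
v = B @ a_true + 0.01 * (rng.standard_normal(30) + 1j * rng.standard_normal(30))
a_hat = tikhonov_mmode_solve(B, v)
print(a_hat.shape)
```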
... It is difficult to generalize the beam knowledge requirements to meet this objective, since any statement on beam accuracy depends on both the details of the analysis and the form of the beam errors. To give one example, the analysis presented in Shaw et al. (2015) using the m-mode formalism and a Karhunen-Loève (KL) transform for foreground removal concluded that the beam width had to be known to an accuracy of 10⁻³ in order to avoid foreground contamination of the 21 cm power spectrum. For comparison, we found in Section 3.1 that the East-West width of the solar beam measurements was discrepant with other measurements at the level of a few percent, or about one and a half orders of magnitude worse than the desired accuracy for this particular analysis. ...
... However, there are reasons for optimism. First, the modeled beam width errors in Shaw et al. (2015) were drawn from a random distribution, resulting in power leakage from low-k, foreground-dominated modes to higher k signal modes. In contrast, the errors in this measurement are highly correlated in frequency, affecting the high-k modes to a lesser degree. ...
Preprint
Full-text available
We present a beam pattern measurement of the Canadian Hydrogen Intensity Mapping Experiment (CHIME) made using the Sun as a calibration source. As CHIME is a pure drift scan instrument, we rely on the seasonal North-South motion of the Sun to probe the beam at different elevations. This semiannual range in elevation, combined with the radio brightness of the Sun, enables a beam measurement which spans ~7,200 square degrees on the sky without the need to move the telescope. We take advantage of observations made near solar minimum to minimize the impact of solar variability, which is observed to be <10% in intensity over the observation period. The resulting data set is highly complementary to other CHIME beam measurements -- both in terms of angular coverage and systematics -- and plays an important role in the ongoing program to characterize the CHIME primary beam.
... Fortunately, the spectral structure of the foreground sources is basically smooth, so we can use some cleaning algorithms to suppress them to a level that the H i signal can be extracted in an unbiased way (e.g., refs. [48-56]). Even so, there will be some foreground contamination left. ...
... Fortunately, these foregrounds have a basically smooth spectral structure and hence we can use some cleaning algorithms to remove them (e.g., refs. [48-56]). In this work, we assume that a cleaning algorithm has been applied and the covariance of residual foreground can be modeled as [23,64] ...
Article
Full-text available
The 21 cm intensity mapping (IM) technique can efficiently perform large-scale neutral hydrogen (H I) surveys, and this method has great potential for measuring dark-energy parameters. Some 21 cm IM experiments aiming at measuring dark energy in the redshift range of 0<z<3 have been proposed and performed, in which the typical ones using single-dish mode include e.g., BINGO, FAST, and SKA1-MID, and those using interferometric mode include e.g., HIRAX, CHIME, and Tianlai. In this work, we make a forecast for these typical 21 cm IM experiments on their capability of measuring parameters of dark energy. We find that the interferometers have great advantages in constraining cosmological parameters. In particular, the Tianlai cylinder array alone can achieve the standard of precision cosmology for the ΛCDM model (i.e., the precision of parameters is better than 1%). However, for constraining dynamical dark energy, we find that SKA1-MID performs very well. We show that the simulated 21 cm IM data can break the parameter degeneracies inherent in the CMB data, and CMB+SKA1-MID offers σ(w) = 0.013 in the wCDM model, and σ(w_0) = 0.080 and σ(w_a) = 0.25 in the CPL model. Compared with CMB+BAO+SN, Tianlai can provide tighter constraints in ΛCDM and wCDM, but looser constraints (tighter than CMB+BAO) in CPL, and the combination CMB+BAO+SN+Tianlai gives σ(w) = 0.013, σ(w_0) = 0.055, and σ(w_a) = 0.13. In addition, it is found that the synergy of FAST (0<z<0.35)+SKA1-MID (0.35<z<0.77)+Tianlai (0.77<z<2.55) offers a very promising survey strategy. Finally, we find that the residual foreground contamination amplitude has a considerable impact on constraint results. We show that, in the future, 21 cm IM experiments will provide a powerful probe for exploring the nature of dark energy.
... Eastwood et al. (2018) started to address these issues by generating low-frequency high resolution (around ∼ 15 arcminutes) northern sky maps using the Owens Valley Radio Observatory Long Wavelength Array (OVRO-LWA) (Kassim et al., 2005) interferometer array using a whole different imaging method altogether. Eastwood et al. (2018) employed a method known as the Tikhonov-regularised m-mode formalism; an adaptation of the spherical harmonic transit interferometric imaging method suggested by Shaw et al. (2014, 2015). In contrast to traditional radio interferometry, the m-mode formalism no longer utilises snapshot visibilities to image the sky, but uses components of timescale variations within these visibilities instead. ...
... The topic of spherical harmonic transit interferometry has been extensively covered by Shaw et al. (2014, 2015) and Eastwood et al. (2018). As such, we will only briefly review the key points concerning the m-mode formalism and solving for the sky. ...
Preprint
Full-text available
One of the major priorities of international radio astronomy is to study the early universe through the detection of the 21 cm HI line from the epoch of reionisation (EoR). Due to the weak nature of the 21 cm signal, an important part in the detection of the EoR is removing contaminating foregrounds from our observations as they are multiple orders of magnitude brighter. In order to achieve this, sky maps spanning a wide range of frequencies and angular scales are required for calibration and foreground subtraction. Complementing the existing low-frequency sky maps, we have constructed a Southern Sky map through spherical harmonic transit interferometry utilising the Engineering Development Array 2 (EDA2), a Square Kilometre Array (SKA) low-frequency array prototype system. We use the m-mode formalism to create an all-sky map at 159 MHz with an angular resolution of 3 degrees, with data from the EDA2 providing information over +60 degrees to -90 degrees in declination. We also introduce a new method for visualising and quantifying how the baseline distribution of an interferometer maps to the spherical harmonics, and discuss how prior information can be used to constrain spherical harmonic components that the interferometer is not sensitive to.
... The major source of contamination is Galactic synchrotron emission, due to the acceleration of cosmic-ray electrons by the Galactic magnetic field, which is ∼5 orders of magnitude larger than the 21cm signal. Other sources of contamination include free-free emission (due to acceleration of electrons by ions) and extragalactic bright radio galaxies [7,8]. ...
Article
Full-text available
Neutral hydrogen intensity mapping can in principle deliver rapid and large-volume cosmological surveys with exquisitely accurate redshifts that are determined directly from imaging. However, intensity maps suffer from very strong foreground contamination. Future surveys will require efficient data pipelines to remove the foregrounds and reveal the cosmological signal. It is expected that this cleaning will not remove the signal in substantial parts of the available Fourier space and that significant loss of signal due to imperfect cleaning will be confined to specific regions of Fourier space. This suggests a strategy which is useful for simplified estimates and rapid computations — i.e., to apply foreground filters that avoid the regions where loss of signal is significant. The standard Fourier-space power spectrum and foreground filters use a flat-sky approximation and thus exclude wide-angle correlations. We provide a new geometrical formulation of foreground filters in harmonic space, which naturally includes all wide-angle effects in the power spectrum. Foreground filtering leads to a loss of isotropy in Fourier space. In harmonic space this produces off-diagonal correlations. We derive analytical expressions for the generalised HI power spectrum and its cross-power with CMB lensing, for both single-dish and interferometer mode surveys. We show numerically that the off-diagonal contributions are negligible for the auto power. In the cross power, there is a non-negligible off-diagonal contribution, but only for a small interval of the largest available scales. For auto and cross power, the signal loss due to foreground avoidance decreases with increasing multipole (i.e. smaller scales), and the loss in interferometer mode is equal to, or slightly greater than, in single-dish mode. We find that the cross power in single-dish mode vanishes below a critical multipole, ℓ < ℓ_0. For an SKA-like survey, ℓ_0 ∼ 20-40 over redshifts z = 1-3. This feature is not seen in interferometer mode as the pertinent angular scales are larger than those allowed by the minimum baseline.
... However, the polarization leakage can still be identified via the polarized beam pattern measurements. A few more studies are focusing on simulating polarization leakage for different H I experiments (de Bruyn et al. 2006; Wolleben et al. 2006; Schnitzeler, Katgert & de Bruyn 2009; Shaw et al. 2015; Nunhokee et al. 2017). Eliminating the polarization leakage from the full beam pattern is important for separating the foreground from the H I signal. ...
Article
The neutral hydrogen (HI) intensity mapping (IM) survey is regarded as a promising approach for cosmic large-scale structure studies. A major issue for the HI IM survey is to remove the bright foreground contamination. A key to successfully removing the bright foreground is to well control or eliminate the instrumental effects. In this work, we consider the instrumental effects of polarization leakage and use the U-Net approach, a deep learning-based foreground removal technique, to eliminate the polarization leakage effect. The thermal noise is assumed to be a subdominant factor compared with the polarization leakage for future HI IM surveys and ignored in this analysis. In this method, the principal component analysis (PCA) foreground subtraction is used as a preprocessing step for the U-Net foreground subtraction. Our results show that the additional U-Net processing could either remove the foreground residual after the conservative PCA subtraction or compensate for the signal loss caused by the aggressive PCA preprocessing. Finally, we test the robustness of the U-Net foreground subtraction technique and show that it is still reliable in the case of existing constraint error on HI fluctuation amplitude.
... direction. The control beam is a modified version of the analytical beam model proposed in Shaw et al. (2015, hereafter S15) for cylindrical telescopes. To briefly summarize the S15 model, the beam pattern of the antenna (henceforth "base" beam) is assumed to be that of a horizontal dipole mounted a distance λ/4 over a conducting ground plane. ...
Article
Full-text available
We present a detection of 21 cm emission from large-scale structure (LSS) between redshift 0.78 and 1.43 made with the Canadian Hydrogen Intensity Mapping Experiment. Radio observations acquired over 102 nights are used to construct maps that are foreground filtered and stacked on the angular and spectral locations of luminous red galaxies (LRGs), emission-line galaxies (ELGs), and quasars (QSOs) from the eBOSS clustering catalogs. We find decisive evidence for a detection when stacking on all three tracers of LSS, with the logarithm of the Bayes factor equal to 18.9 (LRG), 10.8 (ELG), and 56.3 (QSO). An alternative frequentist interpretation, based on the likelihood ratio test, yields a detection significance of 7.1 σ (LRG), 5.7 σ (ELG), and 11.1 σ (QSO). These are the first 21 cm intensity mapping measurements made with an interferometer. We constrain the effective clustering amplitude of neutral hydrogen (H i), defined as $\mathcal{A}_{\rm HI} \equiv 10^{3}\,\Omega_{\rm HI}\left(b_{\rm HI}+\langle f\mu^{2}\rangle\right)$, where $\Omega_{\rm HI}$ is the cosmic abundance of H i, $b_{\rm HI}$ is the linear bias of H i, and $\langle f\mu^{2}\rangle = 0.552$ encodes the effect of redshift-space distortions at linear order. We find $\mathcal{A}_{\rm HI} = 1.51^{+3.60}_{-0.97}$ for LRGs (z = 0.84), $\mathcal{A}_{\rm HI} = 6.76^{+9.04}_{-3.79}$ for ELGs (z = 0.96), and $\mathcal{A}_{\rm HI} = 1.68^{+1.10}_{-0.67}$ for QSOs (z = 1.20), with constraints limited by modeling uncertainties at nonlinear scales. We are also sensitive to bias in the spectroscopic redshifts of each tracer, and we find a nonzero bias Δv = −66 ± 20 km s⁻¹ for the QSOs. We split the QSO catalog into three redshift bins and have a decisive detection in each, with the upper bin at z = 1.30 producing the highest-redshift 21 cm intensity mapping measurement thus far.
... The main reason is the presence of not-well-characterized contaminants: Foregrounds of astrophysical origin are orders of magnitude more intense than the signal (Alonso et al. 2014). Moreover, this substantial difference in intensity among components translates any possible tiny leakage due to the instruments' imperfections and calibration uncertainties into catastrophic contamination, which is hard to model or prevent (e.g., Shaw et al. 2015). ...
Article
Full-text available
We summarize the second radio synchrotron background workshop, which took place on 2022 June 15–17 in Barolo, Italy. This meeting was convened because available measurements of the diffuse radio zero level continue to suggest that it is several times higher than can be attributed to known Galactic and extragalactic sources and processes, rendering it the least well-understood electromagnetic background at present and a major outstanding question in astrophysics. The workshop agreed on the next priorities for investigations of this phenomenon, which include searching for evidence of the radio Sunyaev–Zel’dovich effect, carrying out cross-correlation analyses of radio emission with other tracers, and supporting the completion of the 310 MHz absolutely calibrated sky map project.
... Numerous works have investigated the mitigation of simulated foreground contamination on the detection of the simulated HI signal, e.g. Wolz et al. (2014); Alonso et al. (2015); Bigot-Sazy et al. (2015); Shaw et al. (2015); Chapman et al. (2016); Zhang et al. (2016); Mertens, Ghosh & Koopmans (2018); Carucci, Irfan & Bobin (2020); Cunnington et al. (2021); Irfan & Bull (2021); Liccardo et al. (2021); Makinen et al. (2021); Yohana et al. (2021); Soares et al. (2022); and Spinelli et al. (2022). ...
Article
Full-text available
We present a model for the full-sky diffuse Galactic synchrotron spectral index with an appropriate level of spatial structure for a resolution of 56 arcmin (to match the resolution of the Haslam 408 MHz data). Observational data at 408 MHz and 23 GHz have been used to provide spectral indices at a resolution of 5 degrees. In this work we make use of convolutional neural networks to provide a realistic proxy for the higher resolution information, in place of the genuine structure. Our deep learning algorithm has been trained using 14.4 arcmin observational data from the 1.4 GHz Parkes radio continuum survey. We compare synchrotron emission maps constructed by extrapolating the Haslam data using various spectral index maps, of different angular resolution, with the Global Sky Model. We add these foreground maps to a total emission model for a 21 cm intensity mapping experiment, then attempt to remove the foregrounds. The different models all display different spectral or spatial behaviour and so each provide a useful and different tool to the community for testing component separation techniques. We find that for an experiment operating using a cosine aperture taper beam with a primary Full Width at Half Maximum between 1.1 and 1.6 degrees, and the principal component analysis technique of foreground removal, there is a discernible difference between synchrotron spectral index models with a resolution larger than 5 degrees but that no greater resolution than 5 degrees is required.
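The extrapolation step referred to above (scaling the 408 MHz map with a per-pixel spectral index) reduces to a brightness-temperature power law. The following sketch shows that step only; the function name and arguments are illustrative, not taken from the authors' pipeline.

```python
import numpy as np

def extrapolate_synchrotron(t_408, beta_map, freq_mhz):
    """Scale a 408 MHz brightness-temperature map to freq_mhz using a
    per-pixel spectral index map, assuming T(nu) proportional to nu**beta."""
    return t_408 * (freq_mhz / 408.0) ** beta_map

# e.g. a 1.4 GHz synchrotron template:
# t_1400 = extrapolate_synchrotron(haslam_map, beta_map, 1400.0)
```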
... Hence, they have to be removed. We can differentiate these dominant foregrounds from the signal, taking advantage of their spectral smoothness (Liu & Tegmark 2011; Chapman et al. 2012; Wolz et al. 2014; Alonso et al. 2015; Shaw et al. 2015; Cunnington et al. 2020). As an example, 21cm foreground removal studies using low-redshift HI IM simulations and real data employ blind foreground removal techniques like principal component analysis (PCA, Switzer et al. 2013; Alonso et al. 2015) or independent component analysis (Hyvärinen 1999; Wolz et al. 2017). ...
Article
Full-text available
We provide perturbation theory predictions for the H i intensity mapping power spectrum multipoles using the Effective Field Theory of Large Scale Structure, which should allow us to exploit mildly non-linear scales. Assuming survey specifications typical of proposed interferometric H i intensity mapping experiments like the Canadian Hydrogen Observatory and Radio transient Detector and PUMA, and realistic ranges of validity for the perturbation theory modelling, we run mock full shape Markov chain Monte Carlo (MCMC) analyses at z = 0.5, and compare with Stage-IV optical galaxy surveys. We include the impact of 21cm foreground removal using simulations-based prescriptions, and quantify the effects on the precision and accuracy of the parameter estimation. We vary 11 parameters in total: three cosmological parameters, seven bias and counter terms parameters, and the H i brightness temperature. Amongst them, the four parameters of interest are: the cold dark matter density, ωc, the Hubble parameter, h, the primordial amplitude of the power spectrum, As, and the linear H i bias, b1. For the best-case scenario, we obtain unbiased constraints on all parameters with <3 per cent errors at 68 per cent confidence level. When we include the foreground removal effects, the parameter estimation becomes strongly biased for ωc, h, and b1, while As is less biased (<2σ). We find that scale cuts $k_{\rm min} \ge 0.03 \ h\,\mathrm{Mpc}^{-1}$ are required to return accurate estimates for ωc and h, at the price of a decrease in the precision, while b1 remains strongly biased. We comment on the implications of these results for real data analyses.
... HERA, work solely in this mode, while this is one of the possible modes of observation for the other interferometers. Many different variants of the drift scan technique have been proposed in the literature: the m-mode analysis (Shaw et al. 2014, 2015), which has been applied to OVRO-LWA data in Eastwood et al. (2018); cross-correlation of the H i signal in time (Paul et al. 2014; Patwa & Sethi 2019), which has been applied to MWA drift scan data (Patwa et al. 2021); the drift and shift method (Trott 2014); and the fringe-rate method (as applied to the PAPER data). ...
Preprint
Intensity mapping with the redshifted 21-cm line is an emerging tool in cosmology. Drift scan observations, where the antennas are fixed to the ground and the telescope's pointing center (PC) changes continuously on the sky due to Earth's rotation, provide the broad sky coverage and sustained instrumental stability needed for 21-cm intensity mapping. Here we present the Tracking Tapered Gridded Estimator (TTGE) to quantify the power spectrum of the sky signal estimated directly from the visibilities measured in drift scan radio interferometric observations. The TTGE uses the data from the different PCs to estimate the power spectrum of the signal from a small angular region located around a fixed tracking center (TC). The size of this angular region is set by a suitably chosen tapering window function, which serves to reduce the foreground contamination from bright sources located at large angles from the TC. It is possible to cover the angular footprint of the drift scan observations using multiple TCs, and to combine the estimated power spectra to increase the signal-to-noise ratio. Here we have validated the TTGE using simulations of $154 \, {\rm MHz}$ MWA drift scan observations. We show that the TTGE can recover the input model angular power spectrum $C_{\ell}$ within $20 \%$ accuracy over the $\ell$ range $40 < \ell < 700$.
... Current results are limited by their ability to remove bright astrophysical foregrounds from galactic and extragalactic synchrotron emission [223]. Removing this foreground emission relies on differentiating the frequency dependence between the foregrounds and signal of interest [224][225][226][227] which requires good knowledge and control of the instrumental frequency response [228,229]. ...
Preprint
This report summarizes the envisioned research activities as gathered from the Snowmass 2021 CF5 working group concerning Dark Energy and Cosmic Acceleration: Cosmic Dawn and Before. The scientific goals are to study inflation and to search for new physics through precision measurements of relic radiation from the early universe. The envisioned research activities for this decade (2025-35) are constructing and operating major facilities and developing critical enabling capabilities. The major facilities for this decade are the CMB-S4 project, a new Stage-V spectroscopic survey facility, and existing gravitational wave observatories. Enabling capabilities include aligning and investing in theory, computation and model building, and investing in new technologies needed for early universe studies in the following decade (2035+).
... To remedy this, instrumental systematics must be modeled to fairly high precision; an early study of CHIME found that if the only source of instrumental uncertainty were the per-feed beam-widths, those widths would have to be constrained to within 0.1% to enable an unbiased measurement of the 21 cm power spectrum. 24 ...
Preprint
Full-text available
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) will measure the 21 cm emission of astrophysical neutral hydrogen to probe large scale structure at redshifts z=0.8-2.5. However, detecting the 21 cm signal beneath substantially brighter foregrounds remains a key challenge. Due to the high dynamic range between 21 cm and foreground emission, an exquisite calibration of instrument systematics, notably the telescope beam, is required to successfully filter out the foregrounds. One technique being used to achieve a high fidelity measurement of the CHIME beam is radio holography, wherein signals from each of CHIME's analog inputs are correlated with the signal from a co-located reference antenna, the 26 m John A. Galt telescope, as the 26 m Galt telescope tracks a bright point source transiting over CHIME. In this work we present an analysis of several of the Galt telescope's properties. We employ driftscan measurements of several bright sources, along with background estimates derived from the 408 MHz Haslam map, to estimate the Galt system temperature. To determine the Galt telescope's beam shape, we perform and analyze a raster scan of the bright radio source Cassiopeia A. Finally, we use early holographic measurements to measure the Galt telescope's geometry with respect to CHIME for the holographic analysis of the CHIME and Galt interferometric data set.
... This design choice has been adopted to keep the total system noise of HIRAX to less than 50 K, allowing a sensitive measurement of the 0.1mK cosmological 21 cm signal. 6 ...
Preprint
Full-text available
The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) aims to improve constraints on the dark energy equation of state through measurements of large-scale structure at high redshift ($0.8<z<2.5$), while serving as a state-of-the-art fast radio burst detector. Bright galactic foregrounds contaminate the 400-800 MHz HIRAX frequency band, so meeting the science goals will require precise instrument characterization. In this paper we describe characterization of the HIRAX antenna, focusing on measurements of the antenna beam and antenna noise temperature. Beam measurements of the current HIRAX antenna design were performed in an anechoic chamber and compared to simulations. We report measurement techniques and results, which find a broad and symmetric antenna beam for $\nu < 650$ MHz, and elevated cross-polarization levels and beam asymmetries for $\nu > 700$ MHz. Noise temperature measurements of the HIRAX feeds were performed in a custom apparatus built at Yale. In this system, identical loads, one cryogenic and the other at room temperature, are used to take a differential (Y-factor) measurement from which the noise of the system is inferred. Several measurement sets have been conducted using the system, involving CHIME feeds as well as four of the HIRAX active feeds. These measurements give the first noise temperature measurements of the HIRAX feed, revealing a $\sim$60 K noise temperature (relative to the 30 K target) with 40 K peak-to-peak frequency-dependent features, and provide the first demonstration of feed repeatability.
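The Y-factor method mentioned above follows a standard relation between the hot/cold load powers and the receiver noise temperature. The sketch below applies that textbook relation; the default load temperatures are illustrative values, not the ones used in the Yale apparatus.

```python
def receiver_noise_from_y_factor(p_hot, p_cold, t_hot=295.0, t_cold=77.0):
    """Infer a receiver noise temperature (K) from a hot/cold (Y-factor) measurement.

    Standard relation: Y = P_hot / P_cold = (T_hot + T_rx) / (T_cold + T_rx),
    hence T_rx = (T_hot - Y * T_cold) / (Y - 1). The default load temperatures
    (room temperature and liquid nitrogen) are illustrative only.
    """
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)
```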
... In this paper we consider an idealized setup, in which we neglect the impact of foreground cleaning [59][60][61][62], the foreground wedge [63][64][65][66] and assume an idealized measurement. Even though this is not realistic, our main purpose is to demonstrate that even with such optimistic assumptions, the standard one-loop perturbative model is sufficiently accurate for upcoming 21cm IM surveys. ...
Preprint
We use an analytical forward model based on perturbation theory to predict the neutral hydrogen (HI) overdensity maps at low redshifts. We investigate its performance by comparing it directly at the field level to the simulated HI from the IllustrisTNG simulation TNG300-1 ($L=205\ h^{-1}$ Mpc), in both real and redshift space. We demonstrate that HI is a biased tracer of the underlying matter field and find that the cubic bias model describes the simulated HI power spectrum to within 1% up to $k=0.4 \;(0.3) \,h\,{\rm Mpc}^{-1}$ in real (redshift) space at redshifts $z=0,1$. Looking at counts in cells, we find an excellent agreement between the theory and simulations for cells as small as 5 $h^{-1}$ Mpc. These results are in line with expectations from perturbation theory and they imply that a perturbative description of the HI field is sufficiently accurate given the characteristics of upcoming 21cm intensity mapping surveys. Additionally, we study the statistical properties of the model error - the difference between the truth and the model. We show that on large scales this error is nearly Gaussian and that it has a flat power spectrum, with amplitude significantly lower than the standard noise inferred from the HI power spectrum. We explain the origin of this discrepancy, discuss its implications for the HI power spectrum Fisher matrix forecasts and argue that it motivates the HI field-level cosmological inference. On small scales in redshift space we use the difference between the model and the truth as a proxy for the Fingers-of-God effect. This allows us to estimate the nonlinear velocity dispersion of HI and show that it is smaller than for the typical spectroscopic galaxy samples at the same redshift. Finally, we provide a simple prescription based on the perturbative forward model which can be used to efficiently generate accurate HI mock data, in real and redshift space.
... The package also provides several tools for reconstructing maps from transit visibilities. Here, we have used the m-mode visibility computation and map-making tools, which operate in the spherical harmonic space $Y_{\ell m}$, as described in Zhang et al. (2016) and Shaw et al. (2015). The simulation and analysis pipeline includes several other C++ or Python software modules, which handle the preparation of the input data, such as the generation of HI sources from optical catalogs, foreground subtraction, source detection, power spectrum computation and optical-radio cross-correlation computation. ...
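The m-mode step referred to above amounts to Fourier transforming each drift-scan visibility over a sidereal day; the subsequent map making inverts the per-m beam transfer matrices, which is not shown here. A minimal sketch of the decomposition alone, assuming uniform sampling in sidereal angle:

```python
import numpy as np

def visibilities_to_m_modes(vis_time):
    """Fourier transform drift-scan visibilities over one sidereal day.

    vis_time has shape (n_baseline, n_time) and is assumed to be sampled
    uniformly in sidereal angle over a full day, so the FFT index corresponds
    to the azimuthal quantum number m. Map making then proceeds by inverting
    the per-m beam transfer matrices (omitted here).
    """
    return np.fft.fft(vis_time, axis=-1) / vis_time.shape[-1]
```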
Preprint
We present the science case for surveys with the Tianlai dish array interferometer tuned to the $\left[ 1300, 1400 \right] \mathrm{MHz}$ frequency range. Starting from a realistic generation of mock visibility data according to the survey strategy, we reconstruct a map of the sky and perform a foreground subtraction. We show that a survey of the North Celestial Polar cap during a year of observing time and covering an area of $150 \, \mathrm{deg^2}$ would reach a sensitivity of $1.5-2 \, \mathrm{mK}$ per $1 \, \mathrm{MHz} \times 0.25^2 \, \mathrm{deg^2}$ voxel and be marginally impacted by mode-mixing. Tianlai would be able to detect a handful $(\sim 10)$ of nearby massive HI clumps as well as a very strong cross-correlation signal of 21 cm intensity maps with the North Celestial Cap Survey optical galaxies. We have also studied the performance of a mid-latitude survey, covering $\sim 1500 \, \mathrm{deg^2}$ centered on a declination of $\delta=55^\circ$, which overlaps the Sloan Digital Sky Survey footprint. Despite a higher noise level for the mid-latitude survey, as well as significant distortions due to mode mixing, Tianlai would be able to detect a highly significant cross-correlation between the 21 cm signal and the Sloan spectroscopic galaxy sample. Using the extragalactic signals from either or both of these surveys, it will be possible to assess the impact of calibration uncertainties, antenna pattern uncertainties, sources of noise, and mode mixing for future surveys requiring higher sensitivity.
... 21cm Intensity Mapping shares many technical aspects, map making and foreground subtraction in particular, with the search for the 21cm signal from the EoR [46]. Although signal-foreground separation might be effective on a per-visibility basis with filtering along the frequency axis [47,48], many authors have explored methods where signal and foreground components are projected into different subspaces or modes [44,45,49,50]. ...
Preprint
21cm Intensity Mapping (IM) was proposed about 15 years ago as a cost-effective method to carry out cosmological surveys and to map the 3D distribution of matter in the universe over a large range of post-EoR redshifts, from z=0 to z=6. Since then, a number of pathfinder instruments have been built, such as CHIME or Tianlai. Several others will be commissioned in the next few years (HIRAX, CHORD, BINGO), while even larger arrays, with several thousand antennae, are being considered for the next generation of experiments. We briefly review the 21cm cosmology of the Epoch of Reionisation (EoR) and then focus on IM for late-time cosmology. After presenting some of the promises of this technique to constrain the cosmological model, dark energy and inflation, we review some of the instrumental and scientific challenges of IM surveys. The second part of the paper presents an overview of the ongoing and future experiments, as well as recent results by GBT, CHIME and Tianlai.
... Thus, a promising future direction is to generalize our algorithm to calibrate other types of systematic error, such as baseline errors and beam errors. In addition, the simulation pipeline we use to test the hybrid algorithm does not include sky polarization, but studies have shown that linear filters, such as the KL filter, can be applied to polarized data as well [54]. Applying our hybrid algorithm to polarized data is another aspect to be explored in future studies. ...
Preprint
Full-text available
Observations of the redshifted 21-cm signal emitted by neutral hydrogen represent a promising probe of large-scale structure in the universe. However, cosmological 21-cm signal is challenging to observe due to astrophysical foregrounds which are several orders of magnitude brighter. Traditional linear foreground removal methods can optimally remove foregrounds for a known telescope response but are sensitive to telescope systematic errors such as antenna gain and delay errors, leaving foreground contamination in the recovered signal. Non-linear methods such as principal component analysis, on the other hand, have been used successfully for foreground removal, but they lead to signal loss that is difficult to characterize and requires careful analysis. In this paper, we present a systematics-robust foreground removal technique which combines both linear and non-linear methods. We first obtain signal and foreground estimates using a linear filter. Under the assumption that the signal estimate is contaminated by foreground residuals induced by parameterizable systematic effects, we infer the systematics-induced contamination by cross-correlating the initial signal and foreground estimates. Correcting for the inferred error, we are able to subtract foreground contamination from the linearly filtered signal up to the first order in the amplitude of the telescope systematics. In simulations of an interferometric 21-cm survey, our algorithm removes foreground leakage induced by complex gain errors by one to two orders of magnitude in the power spectrum. Our technique thus eases the requirements on telescope characterization for modern and next-generation 21-cm cosmology experiments.
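The cross-correlation idea above can be caricatured by a single-coefficient version: regress the initial signal estimate against the foreground estimate and subtract the correlated part. This is only an illustration of the principle; the published method infers parametrized systematics (e.g. complex gain errors) from the cross-correlation rather than a single scalar coupling.

```python
import numpy as np

def subtract_correlated_residual(signal_est, fg_est):
    """Remove the component of the signal estimate that is correlated with the
    foreground estimate (a single linear-regression coefficient).

    Illustrative only; works for real or complex 1-D arrays.
    """
    alpha = np.vdot(fg_est, signal_est) / np.vdot(fg_est, fg_est)
    return signal_est - alpha * fg_est
```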
... We also note that PCA represents arguably the most basic form of blind foreground cleaning. Many other more sophisticated methods have been experimented with (Wolz et al. 2014; Shaw et al. 2015; Carucci, Irfan & Bobin 2020; Fonseca & Liguori 2021; Irfan & Bull 2021; Makinen et al. 2021; Soares et al. 2021b). Comparisons between many of these are presented in Spinelli et al. (2021). ...
Article
Full-text available
A goal for pathfinder intensity mapping (IM) surveys will be detecting features in the neutral hydrogen (H i) power spectrum, which serve as conclusive evidence of cosmological signals. Observing such features at the expected scales in H i IM auto-correlations, where contribution from systematics is uncertain, will provide a more convincing cosmological detection. We demonstrate how the turnover, i.e. the peak of the power spectrum at ultra-large scales, can be detected with H i IM. We find that a MeerKAT 4,000 deg2 survey using the UHF-band is capable of a 3.1σ detection of the turnover, relative to a null model power spectrum with no turnover. This should exceed that capable from current galaxy surveys in optical and near-infrared. The detection significance falls to ∼1σ in MeerKAT’s L-band but can reach ∼13σ with the SKAO, which should easily surpass the constraint capable from future Stage-IV-like spectroscopic galaxy surveys. We also propose a new model-independent methodology for constraining the precise turnover scale (k0) and our tests on UHF-band simulated data achieved a precision of 10%. This improved to 2.4% when using the full SKAO. We demonstrate how the results are robust to foreground contamination by using transfer functions, even when an incorrect cosmology has been assumed in their construction. Given that the turnover is related to the horizon scale at matter-radiation equality, a sufficiently precise constraint of k0 presents the possibility for a novel probe of cosmology. We therefore present a potential methodology for constructing a standard-ruler-based distance measurement, independent of the sound horizon, using the turnover location in the H i power spectrum.
Article
This paper describes the design of a 5.5:1 bandwidth feed antenna and reflector system, intended for use in hydrogen intensity mapping experiments. The system is optimized to reduce systematic effects that can arise in these experiments from scattering within the feed/reflector and cross-coupling between antennas. The proposed feed is an ultra-wideband Vivaldi style design and was optimized to have a smooth frequency response, high gain, and minimal shadowing of the reflector dish. This feed can optionally include absorptive elements which reduce systematics but degrade sensitivity. The proposed reflector is a deep parabolic dish with [Formula: see text] along with an elliptical collar to provide additional shielding. The procedure for optimizing these design choices is described.
Article
The Tianlai cylinder pathfinder is a radio interferometer array built to test 21 cm intensity mapping techniques in the post-reionization era. It works in passive drift scan mode to survey the sky visible from the northern hemisphere. To deal with the large instantaneous field of view and the spherical sky, we decompose the drift scan data into $m$-modes, which are linearly related to the sky intensity. The sky map is reconstructed by solving the linear interferometer equations. Due to the incomplete $uv$ coverage of the interferometer baselines, this inverse problem is usually ill-posed, and a regularization method is needed for its solution. In this paper, we use simulations to investigate two frequently used regularization methods, the Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization. Choosing the regularization parameter is very important for their application. We employ the generalized cross validation (GCV) method and the L-curve method to determine the optimal value. We compare the resulting maps obtained with the different regularization methods, and for the different parameters derived using the different criteria. While both methods can yield good maps for a range of regularization parameters, the Tikhonov method suppresses noisy modes more gradually and produces smoother maps, which avoids some of the visual artefacts seen in the maps generated with the TSVD method.
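Both regularization schemes compared above can be written as filtered SVD inverses. The following dense-matrix sketch shows the difference; real m-mode analyses work per m with much larger, structured matrices, and the truncation threshold and damping parameter here are placeholders for values one would choose via GCV or the L-curve.

```python
import numpy as np

def regularized_solve(A, v, method="tikhonov", truncate=1e-3, mu=1e-3):
    """Solve v = A @ s for the sky vector s with TSVD or Tikhonov regularization.

    TSVD discards singular values below truncate * s_max, while Tikhonov damps
    each component by s_i / (s_i**2 + mu**2). Parameter values are illustrative.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = U.conj().T @ v
    if method == "tsvd":
        keep = s > truncate * s.max()
        filt = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    else:  # Tikhonov
        filt = s / (s**2 + mu**2)
    return Vt.conj().T @ (filt * coeff)
```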
Article
The visibilities measured by radio astronomical interferometers include non-astronomical correlated signals that arise from the local environment of the array. These correlated signals are especially important in compact arrays such as those under development for 21 cm intensity mapping. The amplitudes of the contaminated visibilities can exceed the expected 21 cm signal and represent a significant systematic effect. We study the receiver noise radiated by antennas in compact arrays and develop a model for how it couples to other antennas. We apply the model to the Tianlai Dish Pathfinder Array (TDPA), a compact array of sixteen 6-m dish antennas. The coupling model includes electromagnetic simulations, measurements with a network analyzer, and measurements of the noise of the receivers. We compare the model to drift-scan observations with the array and set requirements on the level of antenna cross-coupling for 21 cm intensity mapping instruments. We find that for the TDPA, cross-coupling would have to be reduced by three orders of magnitude in order to contribute negligibly to the visibilities.
Article
The Tianlai cylinder array is a pathfinder for developing and testing 21cm intensity mapping techniques. In this paper, we use numerical simulations to assess how its measurements are affected by thermal noise, by errors in the calibration and map-making process, and by the error in the sky map reconstructed from a drift scan survey. Here we consider only the single-frequency, unpolarized case. The beam is modelled by fitting to the electromagnetic simulation of the antenna, and the variations of the complex gains of the array elements are modelled by Gaussian processes. Mock visibility data are generated and run through our data processing pipeline. We find that the accuracy of the current calibration is limited primarily by the absolute calibration, where the error comes mainly from the approximation of a single dominating point source. We then study $m$-mode map-making with the help of the Moore-Penrose inverse. We find that discarding modes with singular values smaller than a threshold can generate visible artifacts in the map. The impacts of the residual variation of the complex gain and of thermal noise are also investigated. The thermal noise in the map varies with latitude, being minimum at the latitude passing through the zenith of the telescope. The angular power spectrum of the reconstructed map shows that the current Tianlai cylinder pathfinder, which has a shorter maximum baseline length in the North-South direction, can measure modes up to $l \lesssim 2\pi b_{\rm NS}/\lambda \sim 200$ very well, but would lose a significant fraction of higher angular modes when noise is present. These results help us to identify the main limiting factors in our current array configuration and data analysis procedure, and suggest that the performance can be improved by reconfiguring the array feed positions.
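The quoted angular-mode limit follows directly from the maximum north-south baseline. The numbers below are purely illustrative (they are not the actual Tianlai feed layout) but reproduce the l ~ 200 scale:

```python
import numpy as np

# Illustrative values only, not the real array geometry:
b_ns = 12.7                 # hypothetical maximum north-south baseline in metres
lam = 3.0e8 / 750e6         # wavelength at 750 MHz, ~0.4 m
l_max = 2 * np.pi * b_ns / lam
print(f"l_max ~ {l_max:.0f}")   # ~200
```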
Article
The kinetic Sunyaev-Zeldovich (kSZ) effect will be an important source of cosmological and astrophysical information in upcoming surveys of the cosmic microwave background (CMB). However, the kSZ effect will also act as the dominant source of noise for several other measurements that use small angular scales in CMB temperature maps, since its blackbody nature implies that standard component separation techniques cannot be used to remove it from observed maps. In this paper, we explore the idea of “de-kSZing”: constructing a template for the late-time kSZ effect using external surveys of large-scale structure, and then subtracting this template from CMB temperature maps in order to remove some portion of the kSZ signal. After building intuition for general aspects of the de-kSZing procedure, we perform forecasts for the de-kSZing efficiency of several large-scale structure surveys, including BOSS, DESI, Roman, MegaMapper, and PUMA. We also highlight potential applications of de-kSZing to cosmological constraints from the CMB temperature power spectrum, CMB lensing reconstruction, and the moving-lens effect. While our forecasts predict achievable de-kSZing efficiencies of 10%–20% at best, these results are specific to the de-kSZing formalism adopted in this work, and we expect that higher efficiencies are possible using improved versions of this formalism.
Article
We propose a novel method to measure the EG statistic from clustering alone. The EG statistic provides an elegant way of testing the consistency of General Relativity by comparing the geometry of the Universe, probed through gravitational lensing, with the motion of galaxies in that geometry. Current EG estimators combine galaxy clustering with gravitational lensing, measured either from cosmic shear or from CMB lensing. In this paper, we construct a novel estimator for EG, using only clustering information obtained from two tracers of the large-scale structure: intensity mapping and galaxy clustering. In this estimator, both the velocity of galaxies and gravitational lensing are measured through their impact on clustering. We show that with this estimator, we can suppress the contaminations that affect other EG estimators and consequently test the validity of General Relativity robustly. We forecast that with the coming generation of surveys like HIRAX and Euclid, we will measure EG with a precision of up to 7% (3.9% for the more futuristic SKA2).
Article
Full-text available
The detection of the accelerated expansion of the Universe has been one of the major breakthroughs in modern cosmology. Several cosmological probes (Cosmic Microwave Background, Supernovae Type Ia, Baryon Acoustic Oscillations) have been studied in depth to better understand the nature of the mechanism driving this acceleration, and they are being currently pushed to their limits, obtaining remarkable constraints that allowed us to shape the standard cosmological model. In parallel to that, however, the percent precision achieved has recently revealed apparent tensions between measurements obtained from different methods. These are either indicating some unaccounted systematic effects, or are pointing toward new physics. Following the development of CMB, SNe, and BAO cosmology, it is critical to extend our selection of cosmological probes. Novel probes can be exploited to validate results, control or mitigate systematic effects, and, most importantly, to increase the accuracy and robustness of our results. This review is meant to provide a state-of-art benchmark of the latest advances in emerging “beyond-standard” cosmological probes. We present how several different methods can become a key resource for observational cosmology. In particular, we review cosmic chronometers, quasars, gamma-ray bursts, standard sirens, lensing time-delay with galaxies and clusters, cosmic voids, neutral hydrogen intensity mapping, surface brightness fluctuations, stellar ages of the oldest objects, secular redshift drift, and clustering of standard candles. The review describes the method, systematics, and results of each probe in a homogeneous way, giving the reader a clear picture of the available innovative methods that have been introduced in recent years and how to apply them. The review also discusses the potential synergies and complementarities between the various probes, exploring how they will contribute to the future of modern cosmology.
Article
We present the science case for surveys with the Tianlai dish array interferometer tuned to the [1300, 1400]MHz frequency range. Starting from a realistic generation of mock visibility data according to the survey strategy, we reconstruct maps of the sky and perform foreground subtraction. We estimate the level of residuals from imperfect subtraction, mostly due to mode mixing, i.e. distortions in the reconstructed 3D maps due to frequency dependent instrument response. We show that a survey of the North Celestial Polar cap during a year of observations, covering an area of 150 deg2, would reach a sensitivity of 1.5 − 2 mK per 1 MHz × 0.252 deg2 voxel and be marginally impacted by mode mixing. Tianlai would be able to detect ∼10 nearby massive HI clumps as well as a very strong cross-correlation signal of 21 cm intensity maps with the North Celestial Cap Survey optical galaxies. We also studied the performance of a midlatitude survey, covering ∼1500 deg2 overlapping the SDSS footprint. Despite a higher noise level for the midlatitude survey, as well as significant distortions due to mode mixing, Tianlai would be able to detect a highly significant cross-correlation between the 21 cm signal and the Sloan spectroscopic galaxy sample. Using the extragalactic signals measured from either or both of these surveys, and comparing them with simulations such as those presented here will make it possible to assess the impact of various instrumental imperfections on the Tianlai dish array performance. This would pave the way for future intensity mapping surveys with higher sensitivity.
Article
Foreground mitigation is critical to all next-generation radio interferometers that target cosmology using the redshifted neutral hydrogen 21 cm emission line. Attempts to remove this foreground emission have led to new analysis techniques as well as new developments in hardware specifically dedicated to instrument beam and gain calibration, including stabilized signal injection into the interferometric array and drone-based platforms for beam mapping. The radio calibration sources currently used in the literature are broad-band incoherent sources that can only be detected as excess power, with no direct sensitivity to phase information. In this paper, we describe a digital radio source which uses Global Positioning System (GPS)-derived time stamps to form a deterministic signal that can be broadcast from an aerial platform. A copy of this source can be deployed locally at the instrument correlator such that the received signal from the aerial platform can be correlated with the local copy, and the resulting correlation can be measured in both amplitude and phase for each interferometric element. We define the requirements for such a source, describe an initial implementation and verification of this source using commercial Software Defined Radio boards, and present beam map slices from antenna range measurements using the commercial boards. We found that the commercial board did not meet all requirements, so we also suggest future directions using a more sophisticated chipset.
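Correlating the received stream with the locally generated copy reduces, per element, to a complex correlation coefficient whose magnitude and angle give the amplitude and phase response. A minimal sketch, assuming the two streams are already time-aligned (e.g. via the GPS time stamps) and ignoring delay fitting:

```python
import numpy as np

def correlate_with_template(received, template):
    """Correlate a received complex baseband stream with the local copy of the
    deterministic source and return (amplitude, phase) for that element."""
    z = np.vdot(template, received) / np.vdot(template, template)
    return np.abs(z), np.angle(z)
```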
Article
Full-text available
We seek to remove foreground contaminants from 21cm intensity mapping observations. We demonstrate that a deep convolutional neural network (CNN) with a UNet architecture and three-dimensional convolutions, trained on simulated observations, can effectively separate frequency and spatial patterns of the cosmic neutral hydrogen (HI) signal from foregrounds in the presence of noise. Cleaned maps recover cosmological clustering statistics within 10% at all relevant angular scales and frequencies. This amounts to a reduction in prediction variance of over an order of magnitude on small angular scales (ℓ > 300), and improved accuracy for small radial scales (k∥ > 0.17 h Mpc⁻¹) compared to standard Principal Component Analysis (PCA) methods. We estimate posterior confidence intervals for the network's prediction by training an ensemble of UNets. Our approach demonstrates the feasibility of analyzing 21cm intensity maps, as opposed to derived summary statistics, for upcoming radio experiments, as long as the simulated foreground model is sufficiently realistic. We provide the code used for this analysis on GitHub, as well as a browser-based tutorial for the experiment and UNet model via an accompanying Colab notebook.
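To make the architecture concrete, here is a deliberately tiny 3D-convolutional encoder/decoder with one skip connection in PyTorch. It is not the published network; the channel counts, depth, and the assumption of even cube dimensions are all illustrative.

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """A minimal UNet-style 3D CNN: one downsampling stage, one skip connection.
    Input and output have shape (batch, 1, n_freq, n_x, n_y) with even dimensions."""

    def __init__(self, channels=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU())
        self.down = nn.Conv3d(channels, 2 * channels, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(2 * channels, channels, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv3d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)                  # features kept for the skip connection
        b = torch.relu(self.down(e))     # downsampled bottleneck
        u = self.up(b)                   # upsample back to the input grid
        return self.dec(torch.cat([u, e], dim=1))

# usage: cleaned = TinyUNet3D()(contaminated_cube_batch)
```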
Article
A cosmic string wake produces a distinct non-Gaussian signal in 21-cm intensity maps at redshifts above that of reionization. While the string signal is (locally) larger in amplitude than the signal of the Gaussian fluctuations of the ΛCDM model, it is overwhelmed (even locally in position space) by astrophysical and instrumental foregrounds. Here, we study to what extent the signal can be extracted from noisy interferometric data. The narrowness of the string-induced feature in the redshift direction allows for a subtraction of astrophysical and instrumental foregrounds. Based on the specific geometry of the string signal, we identify a particular three-point statistic that is promising for extracting the signal, and we find that, for a telescope with specifications similar to those of the MWA instrument, the string signal can be successfully extracted for a string tension of Gμ = 3×10⁻⁷.
Article
Full-text available
A number of experiments are currently working towards a measurement of the 21 cm signal from the Epoch of Reionization. Whether or not these experiments deliver a detection of cosmological emission, their limited sensitivity will prevent them from providing detailed information about the astrophysics of reionization. In this work, we consider what types of measurements will be enabled by a next-generation of larger 21 cm EoR telescopes. To calculate the type of constraints that will be possible with such arrays, we use simple models for the instrument, foreground emission, and the reionization history. We consider an instrument modeled after the $\sim 0.1 \rm{km}^2$ collecting area Hydrogen Epoch of Reionization Array (HERA) concept design, and parameterize the uncertainties with regard to foreground emission by considering different limits to the recently described "wedge" footprint in $k$-space. Uncertainties in the reionization history are accounted for using a series of simulations which vary the ionizing efficiency and minimum virial temperature of the galaxies responsible for reionization, as well as the mean free path of ionizing photons through the IGM. Given various combinations of models, we consider the significance of the possible power spectrum detections, the ability to trace the power spectrum evolution versus redshift, the detectability of salient power spectrum features, and the achievable level of quantitative constraints on astrophysical parameters. Ultimately, we find that $0.1 \rm{km}^2$ of collecting area is enough to ensure a very high significance ($\gtrsim30\sigma$) detection of the reionization power spectrum in even the most pessimistic scenarios. This sensitivity should allow for meaningful constraints on the reionization history and astrophysical parameters, especially if foreground subtraction techniques can be improved and successfully implemented.
Article
Full-text available
Mapping our universe in 3D by imaging the redshifted 21 cm line from neutral hydrogen has the potential to overtake the cosmic microwave background as our most powerful cosmological probe, because it can map a much larger volume of our Universe, shedding new light on the epoch of reionization, inflation, dark matter, dark energy, and neutrino masses. We report on MITEoR, a pathfinder low-frequency radio interferometer whose goal is to test technologies that greatly reduce the cost of such 3D mapping for a given sensitivity. MITEoR accomplishes this by using massive baseline redundancy both to enable automated precision calibration and to cut the correlator cost scaling from N^2 to N log N, where N is the number of antennas. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious HERA project, which would incorporate many identical or similar technologies using an order of magnitude more antennas, each with dramatically larger collecting area.
Article
Full-text available
Lunar occultation observations for more than 150 radio sources have been made with the steerable radio telescope at Ootacamund, India. The radio telescope is also being used for the observation of pulsars and interplanetary scintillations. It has an effective collecting area of about 8,700 m2.
Article
Full-text available
We study the prospects for extracting detailed statistical properties of the neutral hydrogen distribution during the era of reionization using the brightness temperature fluctuations from redshifted 21 cm line emission. Detection of this signal is complicated by contamination from foreground sources such as diffuse Galactic synchrotron and free-free emission at low radio frequencies, extragalactic free-free emission from ionized regions, and radio point sources. We model these foregrounds to determine the extent to which 21 cm fluctuations can be detected with upcoming experiments. We find that not only the level of correlation from one frequency to another but also the functional form of the foreground correlations has a substantial impact on foreground removal. We calculate how well the angular power spectra of the 21 cm fluctuations can be determined. We also show that the large-scale bias of the neutral hydrogen gas distribution with respect to the density field can be determined with high precision and used to distinguish between different reionization histories.
Article
Full-text available
Context: Linear polarization of Galactic synchrotron emission provides valuable information on the Galactic magnetic field and on the properties of the Galactic magneto-ionic medium. Polarized high-latitude Galactic emission is the major foreground for polarization studies of the cosmic microwave background. Aims: We present a new southern-sky λ21 cm linear polarization survey, which complements the recent λ21 cm DRAO northern sky polarization data. Methods: We used a 30-m telescope located at Villa Elisa/Argentina to map the southern sky simultaneously in continuum and linear polarization. Results: We present a fully sampled map of linearly polarized emission at λ21 cm of the southern sky for declinations between -10° and -90°. The angular resolution of the survey is 36' and its sensitivity is 15 mK (rms-noise) in Stokes U and Q. The survey's zero-level has been adjusted to that of the recent DRAO 1.4 GHz linear polarization survey by comparing data in the region of overlap between -10° and -27°. Conclusions: The polarized southern sky at 1.4 GHz shows large areas with smooth low-level emission almost uncorrelated to total intensities indicating that Faraday rotation originating in the Galactic interstellar medium along the line of sight is significant at 1.4 GHz. The southern sky is much less contaminated by local foreground features than is the northern sky. Thus high-frequency observations of polarized cosmic microwave emission are expected to be less affected. The percentage polarization of the high-latitude emission is low, which seems to be an intrinsic property of Galactic emission.
Article
Full-text available
The large-scale distribution of neutral hydrogen in the Universe will be luminous through its 21 cm emission. Here, for the first time, we use the auto-power spectrum of 21 cm intensity fluctuations to constrain neutral hydrogen fluctuations at z ∼ 0.8. Our data were acquired with the Green Bank Telescope and span the redshift range 0.6 < z < 1 over two fields totalling ≈41 deg² and 190 h of radio integration time. The dominant synchrotron foregrounds exceed the signal by ∼10³, but have fewer degrees of freedom and can be removed efficiently. Even in the presence of residual foregrounds, the auto-power can still be interpreted as an upper bound on the 21 cm signal. Our previous measurements of the cross-correlation of 21 cm intensity and the WiggleZ galaxy survey provide a lower bound. Through a Bayesian treatment of signal and foregrounds, we can combine both fields in auto- and cross-power into a measurement of $\Omega_{\rm HI} b_{\rm HI} = \left[0.62^{+0.23}_{-0.15}\right] \times 10^{-3}$ at 68 per cent confidence with 9 per cent systematic calibration uncertainty, where $\Omega_{\rm HI}$ is the neutral hydrogen (H i) fraction and $b_{\rm HI}$ is the H i bias parameter. We describe observational challenges with the present data set and plans to overcome them.
Article
Full-text available
LOFAR, the LOw-Frequency ARray, is a new-generation radio interferometer constructed in the north of the Netherlands and across Europe. Utilizing a novel phased-array design, LOFAR covers the largely unexplored low-frequency range from 10 to 240 MHz and provides a number of unique observing capabilities. Spreading out from a core located near the village of Exloo in the northeast of the Netherlands, a total of 40 LOFAR stations are nearing completion. A further five stations have been deployed throughout Germany, and one station has been built in each of France, Sweden, and the UK. Digital beam-forming techniques make the LOFAR system agile and allow for rapid repointing of the telescope as well as the potential for multiple simultaneous observations. With its dense core array and long interferometric baselines, LOFAR achieves unparalleled sensitivity and angular resolution in the low-frequency radio regime. The LOFAR facilities are jointly operated by the International LOFAR Telescope (ILT) foundation, as an observatory open to the global astronomical community. LOFAR is one of the first radio observatories to feature automated processing pipelines to deliver fully calibrated science products to its user community. LOFAR's new capabilities, techniques and modus operandi make it an important pathfinder for the Square Kilometre Array (SKA). We give an overview of the LOFAR instrument, its major hardware and software components, and the core science objectives that have driven its design. In addition, we present a selection of new results from the commissioning phase of this new radio observatory.
Article
Full-text available
The main goal of this study is to use the information from both WMAP intensity and polarization data to do a separation of the Galactic components, with a focus on the synchrotron and anomalous emissions. Our analysis is made at 23 GHz where the signal-to-noise ratio is the highest and the estimate of the CMB map is less critical. Our estimate of the synchrotron intensity is based on an extrapolation of the Haslam 408 MHz data with a spatially varying spectral index constrained by the WMAP 23 GHz polarization data and a bi-symmetrical spiral model of the Galactic magnetic field with a turbulent part following a -5/3 power-law spectrum. The 23 GHz polarization data are found to be compatible with a magnetic field with a pitch angle p = -8.5° and an amplitude of the turbulent part of the magnetic field of 0.57 times the local value of the field, in agreement with what is found using rotation measures of pulsars and polarized extinction by dust. The synchrotron spectral index between 408 MHz and 23 GHz obtained from polarization data and our model of the magnetic field has a mean value of β = -3.00 with limited spatial variation (standard deviation 0.06). When thermal dust, free-free and synchrotron are removed from the WMAP intensity data, the residual anomalous emission is highly correlated with thermal dust emission, with a spectrum in agreement with spinning dust models. Considering a classical model of the large-scale Galactic magnetic field, we show that the polarization data of WMAP are in favor of a soft synchrotron intensity highly correlated with the 408 MHz data. Furthermore, the combination of the WMAP polarization and intensity data brings strong evidence for the presence of unpolarized spinning dust emission in the 20-60 GHz range.
Article
Full-text available
The technique of 21 cm tomography promises to be a powerful tool for estimating cosmological parameters, constraining the epoch of reionization, and probing the so-called dark ages. However, realizing this promise will require the extraction of a cosmological power spectrum from beneath overwhelmingly large sources of foreground contamination. In this paper, we develop a unified matrix-based framework for foreground subtraction and power spectrum estimation, which allows us to quantify the errors and biases that arise in the power spectrum as a result of foreground subtraction. We find that existing line-of-sight foreground subtraction proposals can lead to substantial mode mixing as well as residual noise and foreground biases, whereas our proposed inverse-variance foreground subtraction eliminates noise and foreground biases, gives smaller error bars, and produces less correlated measurements of the power spectrum. We also numerically confirm the intuitive belief in the literature that 21 cm foreground subtraction is best done using frequency rather than angular information.
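For reference, the inverse-variance quadratic estimator at the heart of this framework can be written schematically as below, with $\mathbf{x}$ the data vector, $\mathbf{C}$ its total covariance, $\mathbf{C}_{,\alpha}$ its derivative with respect to band power $p_\alpha$, and $b_\alpha$ the noise-plus-foreground bias. The notation is generic rather than transcribed from the paper.

```latex
\hat{q}_\alpha = \tfrac{1}{2}\,\mathbf{x}^\dagger \mathbf{C}^{-1}\mathbf{C}_{,\alpha}\mathbf{C}^{-1}\mathbf{x},
\qquad
F_{\alpha\beta} = \tfrac{1}{2}\,\mathrm{tr}\!\left[\mathbf{C}^{-1}\mathbf{C}_{,\alpha}\mathbf{C}^{-1}\mathbf{C}_{,\beta}\right],
\qquad
\hat{p}_\alpha = \sum_\beta \left(F^{-1}\right)_{\alpha\beta}\left(\hat{q}_\beta - b_\beta\right).
```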
Article
Full-text available
Experiments aimed at detecting highly redshifted 21 centimeter emission from the Epoch of Reionization (EoR) are plagued by the contamination of foreground emission. A potentially important source of contaminating foregrounds may be Faraday-rotated, polarized emission, which leaks into the estimate of the intrinsically unpolarized EoR signal. While these foregrounds' intrinsic polarization may not be problematic, the spectral structure introduced by the Faraday rotation could be. To better understand and characterize these effects, we present a simulation of the polarized sky between 120 and 180 MHz. We compute a single visibility, and estimate the three-dimensional power spectrum from that visibility using the delay spectrum approach presented in Parsons et al. (2012b). Using the Donald C. Backer Precision Array to Probe the Epoch of Reionization (PAPER) as an example instrument, we show the expected leakage into the unpolarized power spectrum to be several orders of magnitude above the expected 21cm EoR signal.
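The delay-spectrum step used above is, per visibility, an apodized Fourier transform along frequency followed by squaring. A minimal sketch, with a Blackman window standing in for whichever apodization a given analysis adopts:

```python
import numpy as np

def delay_spectrum(vis_freq, df_hz):
    """Delay-transform a single complex visibility spectrum and return its power.

    vis_freq is sampled on a regular frequency grid with spacing df_hz; the
    window choice here (Blackman) is illustrative.
    """
    window = np.blackman(vis_freq.size)
    vtilde = np.fft.fftshift(np.fft.fft(vis_freq * window)) * df_hz
    delays = np.fft.fftshift(np.fft.fftfreq(vis_freq.size, d=df_hz))
    return delays, np.abs(vtilde) ** 2
```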
Article
Full-text available
The NRAO VLA Sky Survey (NVSS) covers the sky north of J2000.0 delta = -40 deg (82% of the celestial sphere) at 1.4 GHz. The principal data products are (1) a set of 2326 4 deg x 4 deg continuum "cubes" with three planes containing Stokes I, Q, and U images plus (2) a catalog of almost 2 x 10^6 discrete sources stronger than S ~ 2.5 mJy. The images all have theta = 45" FWHM resolution and nearly uniform sensitivity. Their rms brightness fluctuations are sigma ~ 0.45 mJy beam^-1 ~ 0.14 K (Stokes I) and sigma ~ 0.29 mJy beam^-1 ~ 0.09 K (Stokes Q and U). The rms uncertainties in right ascension and declination vary from <~1" for the N ~ 4 x 10^5 sources stronger than 15 mJy to 7" at the survey limit. The NVSS was made as a service to the astronomical community. All data products, user software, and updates are being released via the World Wide Web as soon as they are produced and verified.
Article
Full-text available
We develop and demonstrate an acceleration of the Liu & Tegmark quadratic estimator formalism for inverse variance foreground subtraction and power spectrum estimation in 21 cm tomography from O(N^3) to O(N log N), where N is the number of voxels of data. This technique makes feasible the megavoxel scale analysis necessary for current and upcoming radio interferometers by making only moderately restrictive assumptions about foreground models and survey geometry. We exploit iterative and Monte Carlo techniques and the symmetries of the foreground covariance matrices to quickly estimate the 21 cm brightness temperature power spectrum, P(k_parallel, k_perpendicular), the Fisher information matrix, the error bars, the window functions, and the bias. We also extend the Liu & Tegmark foreground model to include bright point sources with known positions in a way that scales as O[(N log N)(N point sources)] < O(N^5/3). As a first application of our method, we forecast error bars and window functions for the upcoming 128-tile deployment of the Murchison Widefield Array, showing that 1000 hours of observation should prove sufficiently sensitive to detect the power spectrum signal from the Epoch of Reionization.
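One ingredient of that scaling, a translation-invariant covariance applied with FFTs so that C^-1 times a data vector can be obtained iteratively without ever forming an N x N matrix, can be sketched as follows (the power spectrum and data are arbitrary toys, and this is only a sketch of the idea summarised above, not the published algorithm):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# A circulant (translation-invariant, periodic) covariance is diagonal in Fourier
# space, so C v costs O(N log N) via FFTs; conjugate gradients then gives C^-1 d.
n = 4096
k = np.fft.rfftfreq(n)
power = 1.0 / (1e-2 + k) ** 2 + 1.0          # toy positive covariance eigenvalues

def apply_C(v):
    return np.fft.irfft(power * np.fft.rfft(v), n)

C_op = LinearOperator((n, n), matvec=apply_C, dtype=float)

rng = np.random.default_rng(2)
d = rng.standard_normal(n)                    # toy data vector

x, info = cg(C_op, d)                         # x ~= C^-1 d, no dense matrix built
print("CG converged:", info == 0,
      " relative residual:", np.linalg.norm(apply_C(x) - d) / np.linalg.norm(d))
```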
Article
Full-text available
This work describes a new instrument optimized for a detection of the neutral hydrogen 21cm power spectrum between redshifts of 0.5-1.5: the Baryon Acoustic Oscillation Broadband and Broad-beam (BAOBAB) Array. BAOBAB will build on the efforts of a first generation of 21cm experiments which are targeting a detection of the signal from the Epoch of Reionization at z ~ 10. At z ~ 1, the emission from neutral hydrogen in self-shielded overdense halos also presents an accessible signal, since the dominant, synchrotron foreground emission is considerably fainter than at redshift 10. The principal science driver for these observations is baryon acoustic oscillations (BAO) in the matter power spectrum, which have the potential to act as a standard ruler and constrain the nature of dark energy. BAOBAB will fully correlate dual-polarization antenna tiles over the 600-900 MHz band with a frequency resolution of 300 kHz and a system temperature of 50 K. The number of antennas will grow in staged deployments, and reconfigurations of the array will allow for both traditional imaging and high power spectrum sensitivity operations. We present calculations of the power spectrum sensitivity for various array sizes, with a 35-element array measuring the cosmic neutral hydrogen fraction as a function of redshift, and a 132-element system detecting the BAO features in the power spectrum, yielding a 1.8% error on the z ~ 1 distance scale, and, in turn, significant improvements to constraints on the dark energy equation of state over an unprecedented range of redshifts from ~0.5-1.5.
Article
Full-text available
We present a full-sky model of polarized Galactic microwave emission based on 3 years of observations by the Wilkinson Microwave Anisotropy Probe (WMAP) at frequencies from 23 to 94 GHz. The model compares maps of the Stokes Q and U components from each of the five WMAP frequency bands in order to separate synchrotron from dust emission, taking into account the spatial and frequency dependence of the synchrotron and dust components. This simple two-component model of the interstellar medium accounts for at least 97% of the polarized emission in the WMAP maps of the microwave sky. Synchrotron emission dominates the polarized foregrounds at frequencies below 50 GHz, and is comparable to the dust contribution at 65 GHz. The spectral index of the synchrotron component, derived solely from polarization data, is -3.2 averaged over the full sky, with a modestly flatter index on the Galactic plane. The synchrotron emission has a mean polarization fraction of 2%-4% in the Galactic plane, rising to over 20% at high latitude, with prominent features such as the north Galactic spur more polarized than the diffuse component. Thermal dust emission has a polarization fraction of 1% near the Galactic center, rising to 6% at the anticenter. Diffuse emission from high-latitude dust is also polarized with mean fractional polarization 0.036 ± 0.011.
Article
Full-text available
Baryon acoustic oscillations (BAO) provide a robust standard ruler with which to measure the acceleration of the universe. The BAO feature has so far been detected in optical galaxy surveys. Intensity mapping of neutral hydrogen emission with a ground-based radio telescope provides another promising window for measuring BAO at redshifts of order unity for relatively low cost. While the cylindrical radio telescope (CRT) proposed for these measurements will have excellent redshift resolution, it will suffer from poor angular resolution (arcminutes at best). We investigate the effect of angular resolution on the standard ruler test with BAO, using the Dark Energy Task Force Figure of Merit (FoM) as a benchmark. We then extend the analysis to include variations in the parameters characterizing the telescope and the underlying physics. Finally, we optimize the survey parameters (holding total cost fixed) and present an example of a CRT BAO survey that is competitive with Stage III dark energy experiments. The tools developed here form the backbone of a publicly available code that can be used to obtain estimates of cost and FoM for any set of survey parameters.
Article
Full-text available
We are developing the Precision Array for Probing the Epoch of Re-ionization (PAPER) to detect 21 cm emission from the early universe, when the first stars and galaxies were forming. We describe the overall experiment strategy and architecture and summarize two PAPER deployments: a four-antenna array in the low radio frequency interference (RFI) environment of Western Australia and an eight-antenna array at a prototyping site at the NRAO facilities near Green Bank, WV. From these activities we report on system performance, including primary beam model verification, dependence of system gain on ambient temperature, measurements of receiver and overall system temperatures, and characterization of the RFI environment at each deployment site. We present an all-sky map synthesized between 139 MHz and 174 MHz using data from both arrays that reaches down to 80 mJy (4.9 K, for a beam size of 2.15e-5 sr at 156 MHz), with a 10 mJy (620 mK) thermal noise level that indicates what would be achievable with better foreground subtraction. We calculate angular power spectra (C_ℓ) in a cold patch and determine them to be dominated by point sources, but with contributions from galactic synchrotron emission at lower radio frequencies and angular wavemodes. Although the sample variance of foregrounds dominates errors in these power spectra, we measure a thermal noise level of 310 mK at ℓ = 100 for a 1.46 MHz band centered at 164.5 MHz. This sensitivity level is approximately 3 orders of magnitude in temperature above the level of the fluctuations in 21 cm emission associated with re-ionization.
Article
Full-text available
3D mapping of matter distribution in the universe through the 21 cm radio emission of atomic hydrogen HI is a complementary approach to optical surveys for the study of the Large Scale Structures, in particular for measuring the BAO (Baryon Acoustic Oscillation) scale up to redshifts z < 3, and therefore constraining dark energy parameters. We propose a novel method to map the HI mass distribution in three dimensions in radio, without detecting or identifying individual compact sources. This method would require an instrument with a large instantaneous bandwidth (> 100 MHz) and high sensitivity, while a rather modest angular resolution (~ 10 arcmin) should be sufficient. These requirements can be met by a dense interferometric array or a phased array (FPA) in the focal plane of a large primary reflector, representing a total collecting area of a few thousand square meters with a few hundred simultaneous beams covering a 20 to 100 square degree field of view. We describe the development and qualification of an electronic and data processing system for digital radio interferometry and beam forming suitable for such instruments with several hundred receiver elements.
Article
Full-text available
We discuss the detection of large scale HI intensity fluctuations using a single dish approach with the ultimate objective of measuring the Baryonic Acoustic Oscillations and constraining the properties of dark energy. We present 3D power spectra, 2D angular power spectra for individual redshift slices, and also individual line-of-sight spectra, computed using the S^3 simulated HI catalogue which is based on the Millennium Simulation. We consider optimal instrument design and survey strategies for a single dish observation at low and high redshift for a fixed sensitivity. For a survey corresponding to an instrument with T_sys = 50 K, 50 feed horns and 1 year of observations, we find that at low redshift (z ≈ 0.3), a resolution of 40 arcmin and a survey of 5000 deg^2 is close to optimal, whereas at higher redshift (z ≈ 0.9) a resolution of 10 arcmin and 500 deg^2 would be necessary. Continuum foreground emission from the Galaxy and extragalactic radio sources are potentially a problem. We suggest that the dominant extragalactic foreground may come from the clustering of very weak sources. We assess its amplitude and discuss ways by which it might be mitigated. We then introduce our concept for a single dish telescope designed to detect BAO at low redshifts. It involves a static, under-illuminated 40 m dish and a 60 element receiver array held 90 m above the dish. Correlation receivers will be used with each main science beam referenced against an antenna pointing at one of the Celestial Poles for stability and control of systematics. We make sensitivity estimates for our proposed system and projections for the uncertainties on the power spectrum after 1 year of observations. We find that it is possible to measure the acoustic scale at z ≈ 0.3 with an accuracy of 2.4% and that w can be measured to an accuracy of 16%.
Article
Full-text available
In this letter, 21 cm intensity maps acquired at the Green Bank Telescope are cross-correlated with large-scale structure traced by galaxies in the WiggleZ Dark Energy Survey. The data span the redshift range 0.6 < z < 1 over two fields totaling ~41 deg. sq. and 190 hours of radio integration time. The cross-correlation constrains Omega_HI b_HI r = [0.43 \pm 0.07 (stat.) \pm 0.04(sys.)] x 10^-3, where Omega_HI is the neutral hydrogen HI fraction, r is the galaxy-hydrogen correlation coefficient, and b_HI is the HI bias parameter. This is the most precise constraint on neutral hydrogen density fluctuations in a challenging redshift range. Our measurement improves the previous 21 cm cross-correlation at z ~ 0.8 both in its precision and in the range of scales probed.
Article
Full-text available
A critical challenge in measuring the power spectrum of 21cm emission from cosmic reionization is compensating for the frequency dependence of an interferometer's sampling pattern, which can cause smooth-spectrum foregrounds to appear unsmooth and degrade the separation between foregrounds and the target signal. In this paper, we present an approach to foreground removal that explicitly accounts for this frequency dependence. We apply the delay transformation introduced in Parsons & Backer (2009) to each baseline of an interferometer to concentrate smooth-spectrum foregrounds within the bounds of the maximum geometric delays physically realizable on that baseline. By focusing on delay-modes that correspond to image-domain regions beyond the horizon, we show that it is possible to avoid the bulk of smooth-spectrum foregrounds. We map the point-spread function of delay-modes to k-space, showing that delay-modes that are uncorrupted by foregrounds also represent samples of the three-dimensional power spectrum, and can be used to constrain cosmic reionization. Because it uses only spectral smoothness to differentiate foregrounds from the targeted 21cm signature, this per-baseline analysis approach relies on spectrally- and spatially-smooth instrumental responses for foreground removal. For sufficient levels of instrumental smoothness relative to interfering foregrounds, this technique substantially reduces the level of calibration previously thought necessary to detect 21cm reionization. As a result, this approach places fewer constraints on antenna configuration within an array, facilitating the adoption of configurations optimized for power-spectrum sensitivity. Under these assumptions, we demonstrate the potential for PAPER to detect 21cm reionization at an amplitude of 10 mK^2 near k~0.2h Mpc^-1 with 132 dipoles in 7 months of observing.
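A minimal numerical sketch of the per-baseline delay transform (with invented numbers: a single smooth-spectrum source with a 30 ns geometric delay, a 30 m baseline and a Blackman taper; this is not the PAPER pipeline) looks like this:

```python
import numpy as np

c = 299792458.0
freqs = np.linspace(120e6, 180e6, 203)            # Hz, toy band and channelisation
b_len = 30.0                                      # m, toy baseline length
tau_src = 30e-9                                   # geometric delay of a toy source (s)

rng = np.random.default_rng(3)
smooth = 1e3 * (freqs / 150e6) ** -2.5            # smooth-spectrum foreground
vis = smooth * np.exp(2j * np.pi * freqs * tau_src) \
      + 0.1 * (rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size))

window = np.blackman(freqs.size)                  # spectral taper to control sidelobes
delay_spec = np.fft.fftshift(np.fft.fft(window * vis))
delays = np.fft.fftshift(np.fft.fftfreq(freqs.size, d=freqs[1] - freqs[0]))

horizon = b_len / c                               # maximum geometric delay (s)
outside = np.abs(delays) > horizon
frac = np.sum(np.abs(delay_spec[outside]) ** 2) / np.sum(np.abs(delay_spec) ** 2)
print("horizon delay [ns]:", 1e9 * horizon)
print("fraction of |V(tau)|^2 beyond the horizon:", frac)
```

Because the toy source's delay sits inside the horizon, essentially all of its power stays below b/c after the transform, which is the sense in which delay modes beyond the horizon can sample the cosmological signal.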
Article
Full-text available
We present the design and development of the electronic multi-beam radio astronomy concept (EMBRACE), a demonstrator that is part of the European contribution towards the Square Kilometre Array, which is currently being designed by the global radio astronomical community. One of the design goals of EMBRACE is to demonstrate the applicability of phased array technology for use in future radio telescopes. The EMBRACE system will ultimately consist of two stations, the largest of which comprises over 20,000 elements and has a physical area of about 160 m^2. The antenna system, covering the 500-1500 MHz frequency range, is designed as a dual-polarized system; however, only the signals for one polarization are processed. To obtain a cost-effective design, RF analog beam-forming is performed at tile level close to the radiators. The demonstrator is designed to provide two independent beams such that different parts of the sky can be observed simultaneously. First results from part of the array are presented and discussed. The results show that the complete data path is functional. Since the design resembles a large regular contiguous array, all coupling can be taken into account in the embedded element patterns. The array factor therefore suffices to describe the scanning of the array, significantly reducing calibration complexity compared to, e.g., sparse, random or more irregular arrays. This is confirmed by the first array factor measurements, which were done using a novel technique that does not require calibration of the array. The first measurements on an astronomical source, the Sun, indicate that the system noise temperature lies between 104 and 118 K, which is reassuringly close to the design target of 100 K.
Article
Full-text available
The Allen Telescope Array (ATA) is a cm-wave interferometer in California, comprising 42 antenna elements with 6-m diameter dishes. We characterize the antenna optical accuracy using two-antenna interferometry and radio holography. The distortion of each telescope relative to the average is small, with RMS differences of 1% of beam peak value. Holography provides images of dish illumination, characterizing as-built mirror surfaces. Maximal distortions across ~ 2 meter lengths appear to result from mounting stresses or solar radiation. Experimental RMS errors are 0.7 mm at night and 3 mm under worst-case solar illumination. For frequencies 4, 10, and 15 GHz, the nighttime values indicate sensitivity losses of 1, 10 and 20%, respectively. ATA's wide-bandwidth receiver permits observations over a continuous range 0.5-11.2 GHz. We probe the antenna optical gain and beam pattern stability as a function of focus position and observation frequency, concluding that ATA can produce high fidelity images over a decade of simultaneous observation frequencies. We quantify solar heating effects on antenna sensitivity and pointing accuracy. We find that during the day, observations >5 GHz will suffer some sensitivity loss and it may be necessary to make antenna pointing corrections on a 1-2 hourly basis.
Article
Full-text available
We aim to summarize the current state of knowledge regarding Galactic Faraday rotation in an all-sky map of the Galactic Faraday depth. For this we have assembled the most extensive catalog of Faraday rotation data of compact extragalactic polarized radio sources to date. In the map making procedure we use a recently developed algorithm that reconstructs the map and the power spectrum of a statistically isotropic and homogeneous field while taking into account uncertainties in the noise statistics. This procedure is able to identify some rotation angles that are offset by an integer multiple of pi. The resulting map can be seen as an improved version of earlier such maps and is made publicly available, along with a map of its uncertainty. For the angular power spectrum we find a power law behavior with a power law index of -2.14 for a Faraday sky where an overall variance profile as a function of Galactic latitude has been removed, in agreement with earlier work. We show that this is in accordance with a 3D Fourier power spectrum P(k) proportional to k^-2.14 of the underlying field n_e times B_r under simplifying geometrical and statistical assumptions.
Article
Full-text available
We present ACS, NICMOS, and Keck AO-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the HST Cluster Supernova Survey. The SNe Ia were discovered over the redshift interval 0.623 < z < 1.415. Fourteen of these SNe Ia pass our strict selection cuts and are used in combination with the world's sample of SNe Ia to derive the best current constraints on dark energy. Ten of our new SNe Ia are beyond redshift z = 1, thereby nearly doubling the statistical weight of HST-discovered SNe Ia beyond this redshift. Our detailed analysis corrects for the recently identified correlation between SN Ia luminosity and host galaxy mass and corrects the NICMOS zeropoint at the count rates appropriate for very distant SNe Ia. Adding these supernovae improves the best combined constraint on the dark energy density ρ_DE(z) at redshifts 1.0 < z < 1.6 by 18% (including systematic errors). For a ΛCDM universe, we find Ω_Λ = 0.724 +0.015/-0.016 (68% CL including systematic errors). For a flat wCDM model, we measure a constant dark energy equation-of-state parameter w = -0.985 +0.071/-0.077 (68% CL). Curvature is constrained to ~0.7% in the owCDM model and to ~2% in a model in which dark energy is allowed to vary with parameters w_0 and w_a. Tightening further the constraints on the time evolution of dark energy will require several improvements, including high-quality multi-passband photometry of a sample of several dozen z > 1 SNe Ia. We describe how such a sample could be efficiently obtained by targeting cluster fields with WFC3 on HST.
Article
Full-text available
The HI 21 cm transition line is expected to be an important probe into the cosmic dark ages and epoch of reionization. Foreground source removal is one of the principal challenges for the detection of this signal. This paper investigates the extragalactic point source contamination and how accurately bright sources (≳ 1 Jy) must be removed in order to detect 21 cm emission with upcoming radio telescopes such as the Murchison Widefield Array (MWA). We consider the residual contamination in 21 cm maps and power spectra due to position errors in the sky-model for bright sources, as well as frequency-independent calibration errors. We find that a source position accuracy of 0.1 arcsec will suffice for detection of the HI power spectrum. For calibration errors, 0.05% accuracy in antenna gain amplitude is required in order to detect the cosmic signal. Both sources of subtraction error produce residuals that are localized to small angular scales, k_⊥ ≳ 0.05 Mpc^-1, in the two-dimensional power spectrum.
Article
Full-text available
Our position inside the Galaxy requires all-sky surveys to reveal its large-scale properties. The zero-level calibration of all-sky surveys differs from standard 'relative' measurements, where a source is measured with respect to its surroundings. All-sky surveys aim to include emission structures of all angular scales exceeding their angular resolution, including isotropic emission components. Synchrotron radiation is the dominant emission process in the Galaxy up to frequencies of a few GHz, and numerous ground-based surveys of the total intensity up to 1.4 GHz exist. Its polarization properties were only recently mapped for the entire sky at 1.4 GHz. All-sky total intensity and linear polarization maps from WMAP for frequencies of 23 GHz and higher became available and complement existing sky maps. Galactic plane surveys have higher angular resolution using large single-dish or synthesis telescopes. Polarized diffuse emission shows structures with no relation to total intensity emission, resulting from Faraday rotation effects in the interstellar medium. The interpretation of these polarization structures critically depends on a correct setting of the absolute zero-level in Stokes U and Q.
Article
A new polarization survey of the northern sky at 1.41 GHz is presented. The observations were carried out using the 25.6 m telescope at the Dominion Radio Astrophysical Observatory in Canada, with an angular resolution of 36 arcmin. The data are corrected for ground radiation to obtain Stokes U and Q maps on a well-established intensity scale tied to absolute determinations of zero levels, containing emission structures of large angular extent, with an rms noise of 12 mK. Survey observations were carried out by drift scanning the sky between -29° and +90° declination. The fully sampled drift scans, observed in steps of 0.25° to ~2.5° in declination, result in a northern sky coverage of 41.7% of full Nyquist sampling. The survey surpasses by a factor of 200 the coverage, and by a factor of 5 the sensitivity, of the Leiden/Dwingeloo polarization survey that was until now the most complete large-scale survey. The temperature scale is tied to the Effelsberg scale. Absolute zero-temperature levels are taken from the Leiden/Dwingeloo survey after rescaling those data by the factor of 0.94. The paper describes the observations, data processing, and calibration steps. The data are publicly available at http://www.mpifr-bonn.mpg.de/div/konti/26msurvey or http://www.drao.nrc.ca/26msurvey.
Article
Context. Large-scale structures (LSS) in the universe can be traced using the neutral atomic hydrogen HI through its 21 cm emission. Such a 3D matter distribution map can be used to test the cosmological model and to constrain the dark energy properties or its equation of state. A novel approach, called intensity mapping, can be used to map the HI distribution, using radio interferometers with a large instantaneous field of view and waveband. Aims: We study the sensitivity of different radio interferometer configurations, or multi-beam instruments, for observing LSS and baryon acoustic oscillations (BAO) in 21 cm, and we discuss the problem of foreground removal. Methods: For each configuration, we determined the instrument response by computing the (u, v) or Fourier angular frequency plane coverage using visibilities. The (u, v) plane response determines the noise power spectrum, hence the instrument sensitivity for LSS P(k) measurement. We also describe a simple foreground subtraction method of separating the LSS 21 cm signal from the foreground due to the galactic synchrotron and radio source emission. Results: We have computed the noise power spectrum for different instrument configurations, as well as the extracted LSS power spectrum, after separating the 21 cm-LSS signal from the foregrounds. We have also obtained the uncertainties on the dark energy parameters for an optimized 21 cm BAO survey. Conclusions: We show that a radio instrument with a few hundred simultaneous beams and a collecting area of ~10 000 m² will be able to detect the BAO signal at redshift z ~ 1 and will be competitive with optical surveys.
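The first step described in the Methods, computing the (u, v)-plane coverage of a given configuration, is straightforward to sketch; the 8 x 8 antenna grid, spacing and observing wavelength below are arbitrary toy choices, not any proposed configuration:

```python
import numpy as np

wavelength = 0.21 * (1 + 1.0)                 # 21 cm redshifted to z = 1, in metres
spacing = 5.0                                 # toy antenna spacing in metres
xx, yy = np.meshgrid(np.arange(8), np.arange(8))
positions = spacing * np.stack([xx.ravel(), yy.ravel()], axis=1)   # (64, 2)

i, j = np.triu_indices(len(positions), k=1)   # all antenna pairs i < j
baselines = positions[j] - positions[i]
uv = np.concatenate([baselines, -baselines]) / wavelength          # +b and -b points

# Grid the snapshot coverage; the sample density per cell fixes the noise power
# spectrum of the configuration, which is how sensitivities are compared above.
grid, _, _ = np.histogram2d(uv[:, 0], uv[:, 1], bins=65,
                            range=[[-125, 125], [-125, 125]])
print("filled (u, v) cells:", int((grid > 0).sum()))
```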
Book
The discipline of antenna theory has experienced vast technological changes. In response, Constantine Balanis has updated his classic text, Antenna Theory, offering the most recent look at all the necessary topics. New material includes smart antennas and fractal antennas, along with the latest applications in wireless communications. Multimedia material on an accompanying CD presents PowerPoint viewgraphs of lecture notes, interactive review questions, Java animations and applets, and MATLAB features. Like the previous editions, Antenna Theory, Third Edition meets the needs of electrical engineering and physics students at the senior undergraduate and beginning graduate levels, and those of practicing engineers as well. It is a benchmark text for mastering the latest theory in the subject, and for better understanding the technological applications. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
Article
Observations of the redshifted 21-cm HI fluctuations promise to be an important probe of the post-reionization era (z ≤ 6). In this paper we calculate the expected signal and foregrounds for the upgraded Ooty Radio Telescope (ORT) which operates at frequency ν_o = 326.5 MHz which corresponds to redshift z = 3.35. Assuming that the visibilities contain only the HI signal and system noise, we show that a 3σ detection of the HI signal (∼1 mK) is possible at angular scales 11′ to 3° with ≈1000 h of observation. Foreground removal is one of the major challenges for a statistical detection of the redshifted 21 cm HI signal. We assess the contribution of different foregrounds and find that the 326.5 MHz sky is dominated by the extragalactic point sources at the angular scales of our interest. The expected total foregrounds are 10^4-10^5 times higher than the HI signal.
Article
At redshifts z ≳ 30, neutral hydrogen gas absorbs cosmic microwave background radiation at the 21 cm spin-flip frequency. In principle this is observable and a high-precision probe of cosmology. We calculate the linear-theory angular power spectrum of this signal and cross correlation between redshifts on scales much larger than the linewidth. In addition to the well-known redshift distortion and density perturbation sources, a full linear analysis gives additional contributions to the power spectrum. On small scales there is a percent-level linear effect due to perturbations in the 21 cm optical depth, and perturbed recombination modifies the gas temperature perturbation evolution (and hence spin temperature and 21 cm power spectrum). On large scales there are several post-Newtonian and velocity effects; although negligible on small scales, these additional terms can be significant at l ≲ 100 and can be nonzero even when there is no background signal. We also discuss the linear effect of reionization rescattering, which damps the entire spectrum and gives a very small polarization signal on large scales. On small scales we also model the significant nonlinear effects of evolution and gravitational lensing. We include full results for numerical calculation and also various approximate analytic results for the power spectrum and evolution of small-scale perturbations.
Article
21-cm tomography is emerging as a promising probe of the cosmological dark ages and the epoch of reionization, as well as a tool for observational cosmology in general. However, serious sources of foreground contamination must be subtracted for experimental efforts to be viable. In this paper, we focus on the removal of unresolved extragalactic point sources with smooth spectra, and evaluate how the residual foreground contamination after cleaning depends on instrumental and algorithmic parameters. A crucial but often ignored complication is that the synthesized beam of an interferometer array shrinks towards higher frequency, causing complicated frequency structure in each sky pixel as ‘frizz’ far from the beam centre contracts across unresolved radio sources. We find that current-generation experiments should none the less be able to clean out this point source contamination adequately, and quantify the instrumental and algorithmic design specifications required to meet this foreground challenge.
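A bare-bones version of this kind of line-of-sight cleaning, a low-order polynomial in log-frequency fitted and subtracted per sky pixel, is sketched below; the cube, the foreground amplitudes and the fit order are all toy choices, and none of the frequency-dependent beam ("frizz") effects discussed above are included:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_freq = 500, 64
freqs = np.linspace(140e6, 170e6, n_freq)
logf = np.log(freqs / 150e6)

# Toy cube: power-law point-source foregrounds (random amplitude and spectral
# index per pixel) plus a weak, spectrally rough stand-in for the 21 cm signal.
amps = 10.0 ** rng.uniform(1, 3, n_pix)
alphas = rng.normal(-0.8, 0.2, n_pix)
foregrounds = amps[:, None] * np.exp(alphas[:, None] * logf[None, :])
signal = 1e-3 * rng.standard_normal((n_pix, n_freq))
cube = foregrounds + signal

order = 2
design = np.vander(logf, order + 1)                      # (n_freq, order + 1)
coeffs, *_ = np.linalg.lstsq(design, np.log(cube).T, rcond=None)
cleaned = cube - np.exp(design @ coeffs).T               # residual after subtraction

print("rms before cleaning:", cube.std(), " after:", cleaned.std())
```

The residual also loses any signal component that happens to share the smooth spectral forms, which is the usual price of blind line-of-sight cleaning.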
Article
In this paper we describe the spherical harmonic transit telescope, a novel formalism for the analysis of transit radio telescopes. This all-sky approach bypasses the curved sky complications of traditional interferometry and so is particularly well suited to the analysis of wide-field radio interferometers. It enables compact and computationally efficient representations of the data and its statistics that allow new ways of approaching important problems like map-making and foreground removal. In particular, we show how it enables the use of the Karhunen-Loeve transform as a highly effective foreground filter, suppressing realistic foreground residuals for our fiducial example by at least a factor of twenty below the 21cm signal even in highly contaminated regions of the sky. This is despite the presence of the angle-frequency mode mixing inherent in real-world instruments with frequency-dependent beams. We show, using Fisher forecasting, that foreground cleaning has little effect on power spectrum constraints compared to hypothetical foreground-free measurements. Beyond providing a natural real-world data analysis framework for 21cm telescopes now under construction and future experiments, this formalism allows accurate power spectrum forecasts to be made that include the interplay of design constraints and realistic experimental systematics with twenty-first century 21cm science.
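The Karhunen-Loeve filtering step can be illustrated with a toy generalized eigenvalue problem between a signal covariance S and a foreground-plus-noise covariance F + N; the covariances below are invented frequency-frequency kernels, not the telescope-specific ones the formalism actually constructs:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n = 60
freq = np.linspace(0.0, 1.0, n)

# Toy covariances: signal decorrelates quickly in frequency, foregrounds are
# very smooth and very bright, noise is white.
S = np.exp(-((freq[:, None] - freq[None, :]) / 0.02) ** 2)
F = 1e6 * np.exp(-((freq[:, None] - freq[None, :]) / 0.5) ** 2)
N = 1e-2 * np.eye(n)

lam, V = eigh(S, F + N)       # KL modes: S v = lambda (F + N) v, ascending lambda
keep = lam > 1.0              # keep modes where signal exceeds foregrounds + noise
print("modes kept:", int(keep.sum()), "of", n)

# eigh normalises V so that V.T @ (F + N) @ V = I, so projecting data onto the
# retained columns simultaneously whitens the contaminants and filters them out.
d = rng.multivariate_normal(np.zeros(n), S + F + N)
d_kl = V[:, keep].T @ d
```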
Article
Modifications of the Cross radiotelescope of Molonglo Observatory are largely complete, and observations are being done again. A steerable telescope, necessary to obtain extended observing times and greater sensitivity, has been included, and other engineering design modifications include two east-west collinear reflectors each 778 m by 11.6 m and each divided into 44 sections of length 17.7 m. Practical differences between the fan beam form of rotational synthesis and the interferometric form are given, as well as several estimated performance specifications such as a center frequency of 843 MHz, an effective bandwidth of 3 MHz, and an effective antenna area of 9000 sq m, but more time is necessary in order to evaluate fully the radiotelescope performance. A scientific program attempting to measure the effective temperature of Uranus is outlined, and other early programs are mentioned, including the astrometry of currently unidentified radio sources, the photometry of a selection of bright galaxies, and observations of pulse shapes of some of the stronger pulsars.
Article
The primary challenge for experiments measuring the neutral hydrogen power spectrum from the Epoch of Reionization (EoR) are mode-mixing effects where foregrounds from very bright astrophysical sources interact with the instrument to contaminate the EoR signal. In this paper we identify a new type of mode-mixing that occurs when measurements from non-identical baselines are combined for increased power spectrum sensitivity. This multi-baseline effect dominates the mode-mixing power in our simulations and can contaminate the EoR window, an area in Fourier space previously identified to be relatively free of foreground power.
Article
In my talk at the 2nd Galileo-Xu Meeting, I presented several different topics in 21 cm cosmology on which I have done research. These include the 21 cm signature of the first stars[1,2], the 21 cm signal from the IGM and minihalos[3], the effect of dark matter annihilations on the 21 cm signal[4], the 21 cm forest by ionized/neutral regions[5], and the 21 cm forest by minihalos and the earliest galaxies[6,7]. In this conference proceeding I shall not repeat these discussions, but instead focus on the last part of my talk, i.e. the Tianlai project, an experimental effort on low-redshift 21 cm intensity mapping observations for dark energy measurements.
Article
With this Letter, we complete our model of the Galactic magnetic field (GMF), by using the WMAP7 22 GHz total synchrotron intensity map and our earlier results to obtain a 13-parameter model of the Galactic random field, and to determine the strength of the striated random field. In combination with our 22-parameter description of the regular GMF, we obtain a very good fit to more than forty thousand extragalactic Faraday Rotation Measures (RMs) and the WMAP7 22 GHz polarized and total intensity synchrotron emission maps. The data calls for a striated component to the random field whose orientation is aligned with the regular field, having zero mean and rms strength ~20% larger than the regular field. A noteworthy feature of the new model is that the regular field has a significant out-of-plane component, which had not been considered earlier. The new GMF model gives a much better description of the totality of data than previous models in the literature.
Article
Since cosmology is no longer "the data-starved science," the problem of how to analyze large data sets best has recently received considerable attention, and Karhunen-Loève eigenvalue methods have been applied to both galaxy redshift surveys and cosmic microwave background (CMB) maps. We present a comprehensive discussion of methods for estimating cosmological parameters from large data sets, which includes the previously published techniques as special cases. We show that both the problem of estimating several parameters jointly and the problem of not knowing the parameters a priori can be readily solved by adding an extra singular value decomposition step. It has recently been argued that the information content in a sky map from a next-generation CMB satellite is sufficient to measure key cosmological parameters (h, Ω, Λ, etc.) to an accuracy of a few percent or better—in principle. In practice, the data set is so large that both a brute force likelihood analysis and a direct expansion in signal-to-noise eigenmodes will be computationally unfeasible. We argue that it is likely that a Karhunen-Loève approach can nonetheless measure the parameters with close to maximal accuracy, if preceded by an appropriate form of quadratic "precompression." We also discuss practical issues regarding parameter estimation from present and future galaxy redshift surveys and illustrate this with a generalized eigenmode analysis of the IRAS 1.2 Jy survey optimized for measuring β ≡ Ω0.6/b using redshift space distortions.
Article
Interferometric observation of the cosmic microwave background (CMB) polarization can be expressed as a linear sum of the spherical harmonic coefficients a_{±2,lm} of the CMB polarization. The linear weight for a_{±2,lm} depends on the observational configuration, such as antenna pointing, baseline orientation and spherical harmonic numbers l, m. Since an interferometer is sensitive over a finite range of multipoles, the a_{±2,lm} in that range can be determined by fitting a_{±2,lm} to visibilities of various observational configurations. The formalism presented in this paper enables the determination of a_{±2,lm} directly in spherical harmonic space without spherical harmonic transformation of pixellized maps. The result of its application to a simulated observation is presented with the formalism.
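Stripped of the CMB-specific details, the fitting step is ordinary complex linear least squares: visibilities equal a transfer matrix times harmonic coefficients plus noise. The sketch below uses a random complex matrix as a stand-in for the real interferometer response:

```python
import numpy as np

rng = np.random.default_rng(6)
n_vis, n_coeff = 400, 50
A = rng.standard_normal((n_vis, n_coeff)) + 1j * rng.standard_normal((n_vis, n_coeff))
a_true = rng.standard_normal(n_coeff) + 1j * rng.standard_normal(n_coeff)
sigma = 0.5                                         # noise per real/imag component
vis = A @ a_true + sigma * (rng.standard_normal(n_vis) + 1j * rng.standard_normal(n_vis))

a_hat, *_ = np.linalg.lstsq(A, vis, rcond=None)     # least-squares coefficients
cov = 2 * sigma**2 * np.linalg.inv(A.conj().T @ A)  # their covariance (uniform noise)
print("rms recovery error:", np.abs(a_hat - a_true).std())
```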
Article
Observations of redshifted 21-cm radiation from neutral hydrogen (HI) at high redshifts are an important future probe of reionization. We consider the multifrequency angular power spectrum (MAPS) to quantify the statistics of the HI signal as a joint function of the angular multipole l and frequency separation Δν. The signal at two different frequencies is expected to decorrelate as Δν is increased, and quantifying this is particularly important in deciding the frequency resolution for future HI observations. This is also expected to play a very crucial role in extracting the signal from foregrounds, as the signal is expected to decorrelate much faster than the foregrounds (which are largely continuum sources) with increasing Δν. In this paper, we develop formulae relating MAPS to different components of the 3D HI power spectrum, taking into account HI peculiar velocities. We show that the flat-sky approximation provides a very good representation over the angular scales of interest, and a final expression which is very simple to calculate and interpret. We present results for z = 10 assuming a neutral hydrogen fraction of 0.6, considering two models for the HI distribution, namely (i) DM, where HI traces the dark matter, and (ii) PR, where the effects of patchy reionization are incorporated through two parameters: the bubble size and the clustering of the bubble centres relative to the dark matter (bias). We find that while the DM signal is largely featureless, the PR signal peaks at the angular scales of the individual bubbles, where it is Poisson-fluctuation dominated, and the signal is considerably enhanced for large bubble sizes. For most cases of interest at l ∼ 100 the signal is uncorrelated beyond Δν ∼ 1 MHz or even less, whereas this occurs around ∼0.1 MHz at l ∼ 10^3. The Δν dependence also carries an imprint of the bubble size and the bias, and is expected to be an important probe of the reionization scenario. Finally, we find that the l range 10^3-10^4 is optimum for separating out the cosmological HI signal from the foregrounds, while this will be extremely demanding at l < 100 where it is necessary to characterize the Δν dependence of the foreground MAPS to an accuracy better than 1 per cent.
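A flat-sky toy version of the MAPS decorrelation measurement, using two maps at nearby frequencies with an imposed correlation coefficient r = 0.6 standing in for the HI signal's frequency decorrelation, can be written as follows (the maps are plain Gaussian random fields, not a physical HI simulation):

```python
import numpy as np

rng = np.random.default_rng(7)
n, r = 256, 0.6
shared = rng.standard_normal((n, n))
map1 = shared
map2 = r * shared + np.sqrt(1 - r**2) * rng.standard_normal((n, n))

def binned_cross_power(a, b, nbins=20):
    """Azimuthally binned flat-sky cross power spectrum of two maps."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    ell = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n)))
    power = (fa * fb.conj()).real / n**2
    edges = np.linspace(0, ell.max(), nbins + 1)
    which = np.digitize(ell.ravel(), edges) - 1
    return np.array([power.ravel()[which == i].mean() for i in range(nbins)])

c11 = binned_cross_power(map1, map1)
c22 = binned_cross_power(map2, map2)
c12 = binned_cross_power(map1, map2)
print("mean decorrelation ratio:", np.nanmean(c12 / np.sqrt(c11 * c22)))   # ~ 0.6
```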
Article
The Murchison Widefield Array is a dipole-based aperture array synthesis telescope designed to operate in the 80-300 MHz frequency range. It is capable of a wide range of science investigations but is initially focused on three key science projects: detection and characterization of three-dimensional brightness temperature fluctuations in the 21 cm line of neutral hydrogen during the epoch of reionization (EoR) at redshifts from six to ten; solar imaging and remote sensing of the inner heliosphere via propagation effects on signals from distant background sources; and high-sensitivity exploration of the variable radio sky. The array design features 8192 dual-polarization broadband active dipoles, arranged into 512 "tiles" comprising 16 dipoles each. The tiles are quasi-randomly distributed over an aperture 1.5 km in diameter, with a small number of outliers extending to 3 km. All tile-tile baselines are correlated in custom field-programmable gate array based hardware, yielding a Nyquist-sampled instantaneous monochromatic uv coverage and unprecedented point spread function quality. The correlated data are calibrated in real time using novel position-dependent self-calibration algorithms. The array is located in the Murchison region of outback Western Australia. This region is characterized by extremely low population density and a superbly radio-quiet environment, allowing full exploitation of the instrumental capabilities.
Conference Paper
The behavior of cylindrical reflectors illuminated by complex linear arrays is simulated with the help of an infinite-array approach. Very large reflectors can be considered with a Method-of-Moments approach. The mutual coupling between elements can be obtained with the help of the array scanning method with non-regular sampling and FFT. A non-negligible variation versus frequency of the embedded element patterns and active impedances is observed. It results from array-reflector interaction and its period can be accurately predicted.
Article
We present a survey of the cosmological applications of the next generation of weak lensing surveys, paying special attention to the computational challenges presented by the number of galaxies, N_gal ∼ 10^5. We focus on optimal methods with no pixelisation and derive a multigrid P³M algorithm that performs the relevant computations in O(N_gal log N_gal) time. We test the algorithm by studying three applications of weak lensing surveys: convergence map reconstruction, cluster detection, and E and B power spectrum estimation, using realistic 1°×1° simulations derived from N-body simulations. The map reconstruction is able to reconstruct large-scale features without artifacts. Detecting clusters using only weak lensing is difficult because of line-of-sight contamination and noise, with low completeness if one desires low contamination of the sample. A power spectrum analysis of the convergence field is more promising, and we are able to reconstruct the convergence spectrum with no loss of information down to the smallest scales. The numerical methods used here can be applied to other data sets with the same O(N log N) scaling and can be generalised to a sphere.
Article
We present measurements of galaxy clustering from the Baryon Oscillation Spectroscopic Survey (BOSS), which is part of the Sloan Digital Sky Survey III (SDSS-III). These use the Data Release 9 (DR9) CMASS sample, which contains 264 283 massive galaxies covering 3275 square degrees with an effective redshift z = 0.57 and redshift range 0.43 < z < 0.7. Assuming a concordance ΛCDM cosmological model, this sample covers an effective volume of 2.2 Gpc³, and represents the largest sample of the Universe ever surveyed at this density. We measure the angle-averaged galaxy correlation function and power spectrum, including density-field reconstruction of the baryon acoustic oscillation (BAO) feature. The acoustic features are detected at a significance of 5σ in both the correlation function and power spectrum. Combining with the SDSS-II luminous red galaxy sample, the detection significance increases to 6.7σ. Fitting for the position of the acoustic features measures the distance to z = 0.57 relative to the sound horizon, D_V/r_s = 13.67 ± 0.22 at z = 0.57. Assuming a fiducial sound horizon of 153.19 Mpc, which matches cosmic microwave background constraints, this corresponds to a distance D_V(z = 0.57) = 2094 ± 34 Mpc. At 1.7 per cent, this is the most precise distance constraint ever obtained from a galaxy survey. We place this result alongside previous BAO measurements in a cosmological distance ladder and find excellent agreement with the current supernova measurements. We use these distance measurements to constrain various cosmological models, finding continuing support for a flat Universe with a cosmological constant.
Article
Contamination from instrumental effects interacting with bright astrophysical sources is the primary impediment to measuring Epoch of Reionization and BAO 21 cm power spectra---an effect called mode-mixing. In this paper we identify four fundamental power spectrum shapes produced by mode-mixing that will affect all upcoming observations. We are able, for the first time, to explain the wedge-like structure seen in advanced simulations and to forecast the shape of an 'EoR window' that is mostly free of contamination. Understanding the origins of these contaminations also enables us to identify calibration and foreground subtraction errors below the imaging limit, providing a powerful new tool for precision observations.
Article
Before it becomes a sensitive probe of the Epoch of Reionization, the Dark Ages, and fundamental physics, 21 cm tomography must successfully contend with the issue of foreground contamination. Broadband foreground sources are expected to be roughly four orders of magnitude larger than any cosmological signals, so precise foreground models will be necessary. Such foreground models often contain a large number of parameters, reflecting the complicated physics that governs foreground sources. In this paper, we concentrate on spectral modeling (neglecting, for instance, bright point source removal from spatial maps) and show that 21 cm tomography experiments will likely not be able to measure these parameters without large degeneracies, simply because the foreground spectra are so featureless and generic. However, we show that this is also an advantage, because it means that the foregrounds can be characterized to a high degree of accuracy once a small number of parameters (likely three or four, depending on one's instrumental specifications) are measured. This provides a simple understanding for why 21 cm foreground subtraction schemes are able to remove most of the contaminants by suppressing just a small handful of simple spectral forms. In addition, this suggests that the foreground modeling process should be relatively simple and will likely not be an impediment to the foreground subtraction schemes that are necessary for a successful 21 cm tomography experiment.
Article
We present measurements of the baryon acoustic peak at redshifts z= 0.44, 0.6 and 0.73 in the galaxy correlation function of the final data set of the WiggleZ Dark Energy Survey. We combine our correlation function with lower redshift measurements from the 6-degree Field Galaxy Survey and Sloan Digital Sky Survey, producing a stacked survey correlation function in which the statistical significance of the detection of the baryon acoustic peak is 4.9σ relative to a zero-baryon model with no peak. We fit cosmological models to this combined baryon acoustic oscillation (BAO) data set comprising six distance–redshift data points, and compare the results with similar cosmological fits to the latest compilation of supernovae (SNe) and cosmic microwave background (CMB) data. The BAO and SNe data sets produce consistent measurements of the equation-of-state w of dark energy, when separately combined with the CMB, providing a powerful check for systematic errors in either of these distance probes. Combining all data sets we determine w=−1.03 ± 0.08 for a flat universe, consistent with a cosmological constant model. Assuming dark energy is a cosmological constant and varying the spatial curvature, we find Ωk=−0.004 ± 0.006.
Article
Measurements of the 21 cm line emission by residual cosmic hydrogen after reionization can be used to trace the power spectrum of density perturbations through a significant fraction of the observable volume of the Universe. We show that a dedicated 21 cm observatory could probe a number of independent modes that is 2 orders of magnitude larger than currently available, and enable a cosmic-variance-limited detection of the signature of a neutrino mass of approximately 0.05 eV. The evolution of the linear growth factor with redshift could also constrain exotic theories of gravity or dark energy to an unprecedented precision.
Article
Growing interest in 21-cm tomography has led to the design and construction of broad-band radio interferometers with low noise, moderate angular resolution, high spectral resolution and wide fields of view. With characteristics somewhat different from traditional radio instruments, these interferometers may require new calibration techniques in order to reach their design sensitivities. Self-calibration or redundant calibration techniques that allow an instrument to be calibrated off complicated sky emission structures are ideal. In particular, the large number of redundant baselines possessed by these new instruments makes redundant calibration an especially attractive option. In this paper, we explore the errors and biases in existing redundant calibration schemes through simulations, and show how statistical biases can be eliminated. We also develop a general calibration formalism that includes both redundant baseline methods and basic point source calibration methods as special cases, and show how slight deviations from perfect redundancy and coplanarity can be taken into account.
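The amplitude part of such a redundant ("logcal") scheme reduces to a linear least-squares problem, sketched below for a toy 10-element equally spaced line array; only per-antenna gain amplitudes and per-baseline-type sky amplitudes are solved for, and the single overall-amplitude degeneracy is pinned by hand (phases, which carry extra degeneracies, are omitted):

```python
import numpy as np

rng = np.random.default_rng(8)
n_ant = 10                                        # toy redundant 1-D array, unit spacing

log_g = rng.normal(0, 0.1, n_ant)
log_g -= log_g.mean()                             # put true gains in the constrained gauge
gains_true = np.exp(log_g)

pairs = [(i, j) for i in range(n_ant) for j in range(i + 1, n_ant)]
seps = sorted({j - i for i, j in pairs})          # unique (redundant) baseline types
sky_true = np.exp(rng.normal(0, 1, len(seps)))    # |visibility| per baseline type

# ln|v_ij| = ln|g_i| + ln|g_j| + ln|y_type(ij)| + noise  ->  linear in the unknowns.
logv = np.array([np.log(gains_true[i] * gains_true[j] * sky_true[seps.index(j - i)])
                 + 0.01 * rng.standard_normal() for i, j in pairs])

A = np.zeros((len(pairs) + 1, n_ant + len(seps)))
for row, (i, j) in enumerate(pairs):
    A[row, i] = A[row, j] = 1.0
    A[row, n_ant + seps.index(j - i)] = 1.0
A[-1, :n_ant] = 1.0                               # constraint: sum of log-gains = 0
b = np.append(logv, 0.0)

sol, *_ = np.linalg.lstsq(A, b, rcond=None)
gains_hat = np.exp(sol[:n_ant])
print("gain amplitude error rms:", np.abs(gains_hat - gains_true).std())
```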
Article
We present a filtering technique that can be applied to individual baselines of wide-bandwidth, wide-field interferometric data to geometrically select regions on the celestial sphere that contain primary calibration sources. The technique relies on the Fourier transformation of wide-band frequency spectra from a given baseline to obtain one-dimensional "delay images", and then the transformation of a time-series of delay images to obtain two-dimensional "delay/delay-rate images." Source selection is possible in these images given appropriate combinations of baseline, bandwidth, integration time and source location. Strong and persistent radio frequency interference (RFI) limits the effectiveness of this source selection owing to the removal of data by RFI excision algorithms. A one-dimensional, complex CLEAN algorithm has been developed to compensate for RFI-excision effects. This approach allows CLEANed, source-isolated data to be used to isolate bandpass and primary beam gain functions. These techniques are applied to data from the Precision Array for Probing the Epoch of Reionization (PAPER) as a demonstration of their value in calibrating a new generation of low-frequency radio interferometers with wide relative bandwidths and large fields-of-view.
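In the same spirit, a bare-bones one-dimensional complex CLEAN that deconvolves a flagged (RFI-excised) visibility spectrum against the delay-domain response of its sampling mask might look like this; the gain, stopping threshold, flag pattern and toy source spectrum are all illustrative choices rather than the published algorithm:

```python
import numpy as np

n = 256
freqs = np.linspace(120e6, 180e6, n)
spectrum = (freqs / 150e6) ** -2.5 * np.exp(2j * np.pi * freqs * 50e-9)  # 50 ns source

rng = np.random.default_rng(9)
mask = np.ones(n)
mask[rng.choice(n, size=40, replace=False)] = 0.0     # toy RFI flags

dirty = np.fft.fft(mask * spectrum)                   # delay spectrum of flagged data
kernel = np.fft.fft(mask)                             # delay response of the mask
kernel /= kernel[0]                                   # unit peak at zero delay

model = np.zeros(n, dtype=complex)
residual = dirty.copy()
gain, threshold = 0.1, 1e-3 * np.abs(dirty).max()
for _ in range(2000):                                 # Hogbom-style CLEAN loop
    peak = np.argmax(np.abs(residual))
    if np.abs(residual[peak]) < threshold:
        break
    comp = gain * residual[peak]
    model[peak] += comp
    residual -= comp * np.roll(kernel, peak)          # subtract shifted mask response

print("brightest CLEAN component at delay bin:", int(np.argmax(np.abs(model))))
```

The CLEAN components recover the source's delay structure despite the excised channels, up to a flagged-fraction normalisation, which is the sense in which the deconvolution compensates for the gaps before any delay/delay-rate filtering.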