Upcoming weak lensing surveys will probe large fractions of the sky with unprecedented accuracy. To infer cosmological constraints, a large ensemble of survey simulations is required to accurately model cosmological observables and their covariances. We develop a parallelized multi-lens-plane pipeline called UFalcon, designed to generate full-sky weak lensing maps from lightcones within a minimal runtime. It makes use of L-PICOLA, an approximate numerical code that provides a fast and accurate alternative to cosmological N-body simulations. The UFalcon maps are constructed by nesting two simulations covering a redshift range from z = 0.1 to 1.5 without replicating the simulation volume. We compute the convergence and projected overdensity maps for L-PICOLA in either lightcone or snapshot mode. The generation of such a map, including the L-PICOLA simulation, takes about 3 hours of walltime on 220 cores. We use the maps to calculate the spherical harmonic power spectra, which we compare to t...
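As a minimal sketch of the last step, a full-sky convergence map in HEALPix format can be turned into a spherical harmonic power spectrum with healpy; the file name and band limit below are illustrative assumptions, not details from the paper.

```python
import healpy as hp
import numpy as np

# Load a full-sky convergence (kappa) map in HEALPix format.
# "kappa_map.fits" is a placeholder file name, not from the paper.
kappa = hp.read_map("kappa_map.fits")

nside = hp.get_nside(kappa)
lmax = 2 * nside  # conservative band limit for anafast

# Spherical harmonic power spectrum C_ell of the map.
cl = hp.anafast(kappa, lmax=lmax)
ell = np.arange(len(cl))

# A common convention is to plot ell*(ell+1)*C_ell / (2*pi).
dl = ell * (ell + 1) * cl / (2 * np.pi)
```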
Weak lensing by large-scale structure, or 'cosmic shear', is a potentially powerful cosmological probe that can shed new light on Dark Matter, Dark Energy and Modified Gravity. It is based on the weak distortions induced by large-scale structures on the observed shapes of distant galaxies through gravitational lensing. While the potential of this purely gravitational effect is great, results from this technique have been hampered because the measurement of this weak effect is difficult and limited by systematic effects. In particular, a demanding step is the measurement of the weak lensing shear from wide-field CCD images of galaxies. We describe the origin of the problem and propose a way forward for cosmic shear. Our proposed approach is based on Monte-Carlo Control Loops and draws upon methods widely used in particle physics and engineering. We describe the control loop scheme and show how it provides a calibration method based on fast image simulations tuned to reproduce the ...
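The control-loop idea can be illustrated with a deliberately simple toy: iterate fast simulations, compare a diagnostic against the data, and adjust a simulation parameter until they agree. Everything below (the bias model, numbers, and update rule) is a made-up stand-in, not the authors' MCCL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
g, m_true = 0.02, 0.05  # toy shear signal and hidden multiplicative bias

# "Data": measured ellipticities with an unknown multiplicative bias.
data = (1 + m_true) * g + 0.1 * rng.standard_normal(1_000_000)

def simulate(m, n=1_000_000):
    """Fast image-simulation stand-in: shear measured with bias m."""
    return (1 + m) * g + 0.1 * rng.standard_normal(n)

# Control loop: tune the simulation's bias parameter until the simulated
# diagnostic (here, the mean ellipticity) matches the one measured on data.
m_est = 0.0
for _ in range(10):
    residual = data.mean() - simulate(m_est).mean()
    m_est += residual / g  # simple proportional correction
print(f"calibrated bias m = {m_est:.3f} (true value {m_true})")
```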
We present extended modeling of the strong lens system RXJ1131-1231 with archival data in two HST bands, in combination with existing line-of-sight contribution and velocity dispersion estimates. Our focus is on the source size and its influence on time-delay cosmography. We therefore examine the impact of the mass-sheet degeneracy, and especially the degeneracy pointed out by Schneider & Sluse (2013), using the source reconstruction scale. We also extend previous work by further exploring the effects of priors on the kinematics of the lens and the external convergence in the environment of the lensing system. Our results from RXJ1131-1231 are given in a simple analytic form so that they can be easily combined with constraints coming from other cosmological probes. We find that the choice of priors on lens model parameters and source size is subdominant to the statistical errors of H_0 measurements for this system. The choice of prior for the source is sub-dominant at present (2 u...
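For context, the mass-sheet transformation and its leading-order effect on time-delay cosmography can be written in the standard form used across the literature (the conventions here are generic, not necessarily this paper's):

$$\kappa_\lambda(\vec{\theta}) = \lambda\,\kappa(\vec{\theta}) + (1-\lambda), \qquad \Delta t \to \lambda\,\Delta t, \qquad H_0 \to \lambda\,H_0,$$

and, when an external convergence $\kappa_{\rm ext}$ is ignored in the lens model, the standard correction to the inferred Hubble constant is $H_0 = (1-\kappa_{\rm ext})\,H_0^{\rm model}$.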
Quantifying the concordance between different cosmological experiments is important for testing the validity of theoretical models and for identifying systematics in the observations. In earlier work, we thus proposed the Surprise, a concordance measure derived from the relative entropy between posterior distributions. We revisit the properties of the Surprise and describe how it provides a general, versatile, and robust measure of the agreement between datasets. We also compare it to other measures of concordance that have been proposed for cosmology. As an application, we extend our earlier analysis and use the Surprise to quantify the agreement between WMAP 9, Planck 13 and Planck 15 constraints on the ΛCDM model. Using a principal component analysis in parameter space, we find that the large Surprise between WMAP 9 and Planck 13 (S = 17.6 bits, implying a deviation from consistency at the 99.8% level) is due to a shift along a direction that is dominated by the amplitude of the power spectrum. The Planck 15 ...
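For reference, with generic conventions (not necessarily the normalization used in the paper), the relative entropy between an updated posterior $p_2$ and an earlier one $p_1$, and the Surprise, can be sketched as

$$D(p_2\,\|\,p_1) = \int p_2(\theta)\,\log_2\frac{p_2(\theta)}{p_1(\theta)}\,\mathrm{d}\theta, \qquad S = D_{\rm obs} - \langle D \rangle,$$

where $\langle D \rangle$ is the relative entropy expected from statistical scatter alone; with the base-2 logarithm, $D$ is measured in bits, and $S > 0$ indicates a shift between datasets larger than expected from noise.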
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated by information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the 'surprise', i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior, which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing, as well as the 2015 Planck release. We consider the parameters of the flat ΛCDM concordance model and some of its extensions, which include curvature and the Dark Energy equation-of-state parameter w. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) ...
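For Gaussian posteriors the relative entropy has a closed form, so the 'bits of information gain' can be computed directly; the snippet below is a self-contained sketch with toy numbers, not the pipeline used in the paper.

```python
import numpy as np

def gaussian_relative_entropy_bits(mu1, cov1, mu2, cov2):
    """KL divergence D(p2 || p1) in bits for Gaussian posteriors.

    p1 = N(mu1, cov1) is the earlier constraint, p2 = N(mu2, cov2)
    the updated one; the closed form below is the standard Gaussian KL.
    """
    mu1, mu2 = np.atleast_1d(mu1), np.atleast_1d(mu2)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    d = mu1.size
    inv1 = np.linalg.inv(cov1)
    diff = mu2 - mu1
    nats = 0.5 * (np.trace(inv1 @ cov2) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov2)))
    return nats / np.log(2.0)  # convert nats to bits

# Toy usage: an update that halves the variance and shifts the mean.
print(gaussian_relative_entropy_bits(0.0, 1.0, 0.3, 0.25))
```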
Dark matter in the universe evolves through gravity to form a complex network of halos, filaments, sheets and voids, known as the cosmic web. Computational models of the underlying physical processes, such as classical N-body simulations, are extremely resource intensive, as they track the action of gravity in an expanding universe using billions of particles as tracers of the cosmic matter distribution. Therefore, upcoming cosmology experiments will face a computational bottleneck that may limit the exploitation of their full scientific potential. To address this challenge, we demonstrate the application of a machine learning technique called Generative Adversarial Networks (GAN) to learn models that can efficiently generate new, physically realistic realizations of the cosmic web. Our training set is a small, representative sample of 2D image snapshots from N-body simulations with box sizes of 500 and 100 Mpc. We show that the GAN-generated samples are qualitatively and quantitative...
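A minimal GAN training step, sketched here in PyTorch with fully connected toy networks; the layer sizes, image size, and optimizer settings are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

latent_dim, img_pixels = 64, 32 * 32  # toy sizes, not the paper's

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_pixels), nn.Tanh())
D = nn.Sequential(nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: real images vs. generated ("fake") images.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    fake = G(torch.randn(b, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage with random stand-in "images" flattened to vectors.
train_step(torch.randn(16, img_pixels))
```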
We extend the results of previous analyses towards constraining the abundance and clustering of post-reionization (z ∼ 0-5) neutral hydrogen (HI) systems using a halo model framework. We work with a comprehensive HI dataset including the small-scale clustering, column density and mass function of HI galaxies at low redshifts, intensity mapping measurements at intermediate redshifts, and UV/optical observations of Damped Lyman Alpha (DLA) systems at higher redshifts. We use a Markov Chain Monte Carlo (MCMC) approach to constrain the parameters of the best-fitting models, both for the HI-halo mass relation and for the HI radial density profile. We find that a radial exponential profile results in a good fit to the low-redshift HI observations, including the clustering and the column density distribution. The form of the profile is also found to match the high-redshift DLA observations when used in combination with a three-parameter HI-halo mass relation and a redshift evolution in the...
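A hedged sketch of such an inference setup: fitting a toy two-parameter HI-halo mass relation to synthetic data with emcee. The functional form, priors, and data below are placeholders, not the paper's actual model.

```python
import numpy as np
import emcee

# Toy relation in log space: log10 M_HI = log_alpha + beta*(log10 M - 11).
rng = np.random.default_rng(1)
logM = np.linspace(10, 13, 20)
truth = (0.3, 0.6)  # (log_alpha, beta), arbitrary toy values
y_obs = truth[0] + truth[1] * (logM - 11) + 0.1 * rng.standard_normal(logM.size)

def log_prob(theta):
    log_alpha, beta = theta
    if not (-2 < log_alpha < 2 and 0 < beta < 2):  # flat priors
        return -np.inf
    model = log_alpha + beta * (logM - 11)
    return -0.5 * np.sum(((y_obs - model) / 0.1) ** 2)  # Gaussian likelihood

ndim, nwalkers = 2, 16
p0 = np.array(truth) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.mean(axis=0))  # posterior means for (log_alpha, beta)
```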
We present the scientific performance results of PynPoint, our Python-based software package that uses principal component analysis to detect and estimate the flux of exoplanets in two-dimensional imaging data. Recent advances in adaptive optics and imaging technology at visible and infrared wavelengths have opened the door to direct detections of planetary companions to nearby stars, but image processing techniques have yet to be optimized. We show that the performance of our approach gives a marked improvement over what is presently possible using existing methods such as LOCI. To test our approach, we use real angular differential imaging (ADI) data taken with the adaptive-optics-assisted high-resolution near-infrared camera NACO at the VLT. These data were taken during the commissioning of the apodising phase plate (APP) coronagraph. By inserting simulated planets into these data, we test the performance of our method as a function of planet brightness for different positions on...
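The core algorithm can be sketched generically: build principal components of the stack of ADI frames, project each frame onto the leading modes to model the stellar PSF, and subtract. The outline below uses plain numpy rather than PynPoint's own API.

```python
import numpy as np

def pca_psf_subtract(cube, n_components=5):
    """PCA-based PSF subtraction for an ADI cube of shape (n, ny, nx)."""
    n, ny, nx = cube.shape
    X = cube.reshape(n, ny * nx)
    X = X - X.mean(axis=0)                 # remove the mean frame
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:n_components]              # leading PSF modes
    model = (X @ basis.T) @ basis          # projection onto PSF modes
    return (X - model).reshape(n, ny, nx)  # residuals with PSF removed

# In ADI, the residual frames would then be derotated to a common sky
# orientation and median-combined, so the quasi-static PSF averages out
# while a planet signal adds coherently.
residuals = pca_psf_subtract(np.random.rand(50, 64, 64))
```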
Modeling the Point Spread Function (PSF) of wide-field surveys is vital for many astrophysical applications and cosmological probes, including weak gravitational lensing. The PSF smears the image of any recorded object and therefore needs to be taken into account when inferring properties of galaxies from astronomical images. In the case of cosmic shear, the PSF is one of the dominant sources of systematic errors and must be treated carefully to avoid biases in cosmological parameters. Recently, forward modeling approaches to calibrate shear measurements within the Monte-Carlo Control Loops (MCCL) framework have been developed. These methods typically require simulating a large number of wide-field images; the simulations therefore need to be very fast yet have realistic properties in key features such as the PSF pattern. Hence, such forward modeling approaches require a very flexible PSF model, which is quick to evaluate and whose parameters can be estimated reliably from survey data. W...
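As an example of a PSF model that is quick to evaluate, here is a simple parametric Moffat profile; this is a common generic choice used purely for illustration, not necessarily the model developed in the paper.

```python
import numpy as np

def moffat_psf(ny, nx, x0, y0, fwhm, beta=3.5):
    """Moffat PSF stamp normalized to unit flux.

    Uses the standard relation FWHM = 2*alpha*sqrt(2**(1/beta) - 1).
    """
    alpha = fwhm / (2.0 * np.sqrt(2.0 ** (1.0 / beta) - 1.0))
    y, x = np.mgrid[0:ny, 0:nx]
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    psf = (1.0 + r2 / alpha ** 2) ** (-beta)
    return psf / psf.sum()

# Hypothetical usage: a 31x31 stamp centered on the pixel grid.
stamp = moffat_psf(31, 31, x0=15.0, y0=15.0, fwhm=4.0)
```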
We explore a new technique to measure cosmic shear using Einstein rings. In Birrer et al. (2017), we showed that detailed modelling of Einstein rings can be used to measure external shear to high precision. In this letter, we explore how a collection of Einstein rings can be used as a statistical probe of cosmic shear. We present a forecast of the cosmic shear information available in Einstein rings for different strong lensing survey configurations. We find that, assuming the number density of Einstein rings in the COSMOS survey is representative, future strong lensing surveys should reach a cosmological precision comparable to current ground-based weak lensing surveys. We discuss how this technique is complementary to standard cosmic shear analyses, since it is sensitive to different systematics and can be used for cross-calibration.
Recent observations have led to the establishment of the concordance LCDM model for cosmology. A number of experiments are being planned to shed light on dark energy, dark matter, inflation and gravity, which are the key components of the model. To optimize and compare the reach of these surveys, several figures of merit have been proposed. They are based either on the forecasted precision on the LCDM model and its extensions, or on the expected ability to distinguish two models. We propose here another figure of merit that quantifies the capacity of future surveys to rule out the LCDM model. It is based on a measure of the difference in volume of observable space that the future surveys will constrain with and without imposing the model. This model-breaking figure of merit is easy to compute and can lead to different survey optimizations than other metrics. We illustrate its impact using a simple combination of supernovae and BAO mock observations and compare the respective merit of...
Weak lensing surveys provide a powerful probe of dark energy through the measurement of the mass distribution of the local Universe. A number of ground-based and space-based surveys are being planned for this purpose. Here, we study the optimal strategy for these future surveys, using the joint constraints on the equation-of-state parameter w0 and its evolution wa as a figure of merit, by considering power spectrum tomography. For this purpose, we first consider an 'ideal' survey which is both wide and deep and exempt from systematics. We find that such a survey has great potential for dark energy studies, reaching one-sigma precisions of 1% and 10% on the two parameters, respectively. We then study the relative impact of various limitations by degrading this ideal survey. In particular, we consider the effects of sky coverage, survey depth, shape measurement systematics, photometric redshift systematics, and uncertainties in the non-linear power spectrum predictions. We find that,...
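A figure of merit of this kind is commonly computed from a Fisher matrix; the sketch below uses an invented 2x2 Fisher matrix for (w0, wa) and the usual inverse-ellipse-area definition, purely as an illustration of the bookkeeping.

```python
import numpy as np

# Toy Fisher matrix for (w0, wa); the numbers are illustrative only.
# A real forecast would marginalize over the full cosmological
# parameter set obtained from power spectrum tomography.
fisher = np.array([[1.0e4, -2.0e3],
                   [-2.0e3,  1.0e3]])

cov = np.linalg.inv(fisher)            # marginalized parameter covariance
sigma_w0, sigma_wa = np.sqrt(np.diag(cov))

# A common dark energy figure of merit is the inverse area of the
# (w0, wa) error ellipse, often written as 1/sqrt(det Cov).
fom = 1.0 / np.sqrt(np.linalg.det(cov))
print(sigma_w0, sigma_wa, fom)
```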
Journal of Cosmology and Astroparticle Physics, 2021
Narrow-band imaging surveys allow the study of the spectral characteristics of galaxies without the need for spectroscopic follow-up. In this work, we forward-model the Physics of the Accelerating Universe Survey (PAUS) narrow-band data. The aim is to improve the constraints on the spectral coefficients used to create the galaxy spectral energy distributions (SEDs) of the galaxy population model in Tortorelli et al. 2020. In that work, the model parameters were inferred from Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) data using Approximate Bayesian Computation (ABC). This led to stringent constraints on the B-band galaxy luminosity function parameters, but left the spectral coefficients only broadly constrained. To address that, we perform an ABC inference using CFHTLS and PAUS data. This is the first time our approach, combining forward-modelling and ABC, is applied simultaneously to multiple datasets. We test the results of the ABC inference by comparin...
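A minimal ABC rejection loop, with a toy forward model and summary statistics standing in for the galaxy population model and survey data (all names and numbers below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(theta, n=1000):
    """Toy forward model: simulate an observable catalog given theta."""
    return rng.normal(theta, 1.0, size=n)

def summary(catalog):
    """Summary statistics compared between simulation and data."""
    return np.array([catalog.mean(), catalog.std()])

obs = forward_model(theta=0.5)        # pretend this is the survey data
s_obs = summary(obs)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-2, 2)        # draw a candidate from the prior
    s_sim = summary(forward_model(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.1:   # distance threshold
        accepted.append(theta)

# Accepted draws approximate the posterior over theta.
print(np.mean(accepted), np.std(accepted))
```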
We demonstrate the potential of Deep Learning methods for measurements of cosmological parameters from density fields, focusing on the extraction of non-Gaussian information. We consider weak lensing mass maps as our dataset. We aim for our method to be able to distinguish between five models, which were chosen to lie along the $\sigma_8$ - $\Omega_m$ degeneracy and have nearly the same two-point statistics. We design and implement a Deep Convolutional Neural Network (DCNN) which learns the relation between the five cosmological models and the mass maps they generate. We develop a new training strategy which ensures good performance of the network at high noise levels. We compare the performance of this approach to commonly used non-Gaussian statistics, namely the skewness and kurtosis of the convergence maps. We find that our implementation of the DCNN outperforms the skewness and kurtosis statistics, especially at high noise levels. The network maintains the mean discrimination ...
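The baseline non-Gaussian statistics are straightforward to compute; a minimal sketch, with a random map standing in for a real convergence map:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Skewness and kurtosis of the convergence map pixel distribution, the
# baseline statistics the network is compared against. The map below is
# pure noise, used only as a placeholder for a real kappa map.
kappa_map = np.random.default_rng(3).standard_normal((128, 128))

pixels = kappa_map.ravel()
print("skewness:", skew(pixels))
print("kurtosis:", kurtosis(pixels))  # scipy returns excess kurtosis
```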