
Université Sorbonne Paris Cité

Thèse préparée
à l’Université Paris Diderot
École doctorale STEP’UP — ED N° 560
Laboratoire AstroParticule et Cosmologie

Development of the B-mode measurements pipeline for QUBIC experiment

par
Mikhail Stolpovskiy

présentée et soutenue publiquement le


25 novembre 2016

Thèse de doctorat de Sciences de l’Univers


dirigée par Jean-Christophe Hamilton

Devant un jury composé de:


Président du jury: Prof. Stavros Katsanevas – Université Paris Diderot
Directeur de thèse: Dr. Jean-Christophe Hamilton – APC Paris
Rapporteurs: Prof. Emory F. Bunn – University of Richmond
Dr. Juan Macias-Perez – LPSC Grenoble
Membres: Dr. Sophie Henrot-Versillé – LAL Orsay
Dr. Éric Nuss – LUPM Montpellier
“It don’t mean a thing if it ain’t got that swing”

Duke Ellington

“Swing - to move (something) in a curved or circular path on or as if on an axis.”

English explanatory dictionary

“Gravitational waves from inflation generate a faint but distinctive twisting pattern in
the polarization of the CMB, known as a "curl" or B-mode pattern.”

BICEP2 2014 Release Image Gallery, bicepkeck.org


UNIVERSITÉ PARIS DIDEROT, PARIS-7

Abstract
École Doctorale STEP’UP
Laboratoire APC

Doctor of Philosophy

Development of the B-mode measurements pipeline for QUBIC experiment

by Mikhail Stolpovskiy

QUBIC is a ground-based experiment, currently under construction, that aims to measure
the primordial B-modes using the novel technique of bolometric interferometry. Thanks
to the combination of interferometry and imaging in QUBIC, it has very good sensitivity
and excellent control of systematics. Moreover, the fact that the synthesized beam depends
on the wavelength allows us to treat QUBIC as a spectro-polarimeter. These factors
together give a sensitivity on the tensor-to-scalar ratio of σ(r) ≈ 0.012. The goal of this
thesis is to describe the data analysis pipeline for QUBIC, from the map-making of the
CMB from raw time-ordered data, through component separation and power spectrum
estimation, to cosmological parameter estimation. The main focuses of this work are the
map-making, which is very unusual in comparison with other experiments in the field,
and the development of the scanning strategy for QUBIC.

Keywords: cosmology, cosmic microwave background, primordial B-modes, inflation, experiment, bolometric interferometry

UNIVERSITÉ PARIS DIDEROT, PARIS-7

Résumé
École Doctorale STEP’UP
Laboratoire APC

Docteur ès Sciences

Développement du pipeline de mesure des modes B pour l’expérience QUBIC

par Mikhail Stolpovskiy

QUBIC est une expérience au sol en cours de construction dont le but est de mesurer
les modes-B primordiaux du fond diffus cosmologique en utilisant la technique innovante
de l’interférométrie bolométrique. Grâce à la fusion entre interférométrie et imagerie,
QUBIC a une très bonne sensibilité et un excellent contrôle des effets systématiques
instrumentaux. De plus, du fait de la dépendance en fréquence du lobe synthétique,
QUBIC peut être utilisé comme un spectro-imageur. Ces points pris en compte, la
sensibilité globale de QUBIC au rapport tenseur/scalaire est de 0.012. L’objectif de cette
thèse est de décrire le code d’analyse de données de QUBIC, depuis la fabrication de cartes
à partir des données temporelles jusqu’à la séparation de composantes astrophysiques,
l’estimation du spectre de puissance angulaire et celle des paramètres cosmologiques.
Les aspects essentiels de ce travail sont les suivants : la fabrication de cartes, qui est très
inhabituelle vis-à-vis des autres projets du domaine, et le développement de la stratégie
de couverture du ciel pour QUBIC.

Mots clés: cosmologie, fond diffus cosmologique, modes-B primordiaux, inflation, expérience,
interférométrie bolométrique
Acknowledgements
Firstly I’d like to thank my supervisor Jean-Christophe Hamilton for his support and
immense patience. He guided me throughout the research and writing of this thesis, and
I can’t imagine having a better advisor for my Ph.D. studies.

Besides, I would like to express my sincere gratitude to the rest of my thesis committee:
Prof. Emory F. Bunn, Dr. Juan Macias-Perez, Dr. Sophie Henrot-Versillé, Dr. Éric
Nuss and Prof. Stavros Katsanevas, for their perceptive comments and the hard questions
which urged me to widen my research.

I also sincerely thank Dr. Pierre Chanial, the leader of the QUBIC data analysis team.
He taught me various techniques of data analysis and software development and patiently
helped me with the awful bugs that I made every now and then.

I thank Dr. Jean Kaplan for his immense help, especially during the writing of this
thesis, but also throughout all the years of my Ph.D. research. He always taught me not
to forget the physical meaning of things whenever I went too deep into the software
abstractions.

I also thank all my friends at the Ph.D. school. It was very nice to have their friendly
support. I’ll never forget our discussions about the latest discoveries in physics during
lunch, and how we debated at conferences about the subjects we had just learned. I’d
especially like to thank Pierros Ntelis and Cyrille Doux, with whom I conducted the
Journal Club discussions on Thursdays.

Last but not least, my gratitude goes to my family: my beloved wife Olga, my
wonderful children Arina and Marc, and my dear parents Valentin and Elena. You were
always there to listen and give wise advice (Marc, this is about you too, although you
don’t speak yet). You are my main motivation. Thank you!

Thank you very much, everyone!

Mikhail Stolpovskiy.

Contents

Abstract ii

Résumé iii

Acknowledgements iv

Contents v

List of Figures ix

List of Tables xvii

1 Cosmology: brief introduction 1


1.1 Cosmology: historical view . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Archeoastronomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Ancient Greece . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 More modern views . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 Birth of relativistic cosmology . . . . . . . . . . . . . . . . . . . . . 5
1.1.5 Expanding Universe . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.6 Big Bang model problems . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.6.1 Flatness of the Universe . . . . . . . . . . . . . . . . . . . 10
1.1.6.2 Horizon problem . . . . . . . . . . . . . . . . . . . . . . . 11
1.1.6.3 Matter-antimatter asymmetry . . . . . . . . . . . . . . . . 12
1.1.6.4 Magnetic monopoles . . . . . . . . . . . . . . . . . . . . . 15
1.1.7 Inflation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.1.7.1 Observational hints in favor of inflation . . . . . . . . . . 20
1.2 Observations in cosmology . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.2.1 Large-scale structures . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.2.2 Light elements abundances . . . . . . . . . . . . . . . . . . . . . . 23
1.2.3 Dark matter studies . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2.4 Dark energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.2.5 Results of accelerator experiments in application for cosmology . . 25
1.2.6 Other cosmological observations . . . . . . . . . . . . . . . . . . . . 26


2 Cosmic microwave background and its fluctuations 27


2.1 Cosmic microwave relic background radiation . . . . . . . . . . . . . . . . 27
2.1.1 History of CMB discovery . . . . . . . . . . . . . . . . . . . . . . . 28
2.1.2 CMB temperature fluctuations . . . . . . . . . . . . . . . . . . . . 32
2.1.2.1 Power spectrum . . . . . . . . . . . . . . . . . . . . . . . 32
2.1.3 History of CMB fluctuations studies . . . . . . . . . . . . . . . . . 35
2.1.4 CMB polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.1.4.1 E-modes . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.1.4.2 B-modes . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.1.5 Foregrounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.1.6 Secondary anisotropies . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.1.6.1 Sunyaev-Zel’dovich effect . . . . . . . . . . . . . . . . . . 49
2.1.6.2 Cosmic Infrared Background . . . . . . . . . . . . . . . . 50
2.1.6.3 Integrated Sachs-Wolfe effect . . . . . . . . . . . . . . . . 51
2.1.6.4 Lensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3 Bolometric interferometry and QUBIC experiment 53


3.1 The concept of the bolometric interferometer . . . . . . . . . . . . . . . . 53
3.1.1 Imagers and interferometers . . . . . . . . . . . . . . . . . . . . . . 53
3.1.2 Bolometric interferometry . . . . . . . . . . . . . . . . . . . . . . . 54
3.1.3 Self-calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2 QUBIC instrument . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2.1 QUBIC instrument subsystems . . . . . . . . . . . . . . . . . . . . 59
3.2.1.1 Mount system and baffling . . . . . . . . . . . . . . . . . 59
3.2.1.2 Cryostat . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.1.3 Window, half-wave plate and polarising grid . . . . . . . 61
3.2.1.4 Horn array . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.1.5 Mirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.2.1.6 Dichroic and filters . . . . . . . . . . . . . . . . . . . . . . 64
3.2.1.7 Focal planes . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.2.2 The QUBIC site in the Puna desert . . . . . . . . . . . . . . . . . 65
3.2.3 Time-line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4 Map-making in monochromatic case 71


4.1 QUBIC pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.2 Imager map-making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.3 QUBIC map-making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.3.1 Initial assumptions for QUBIC simulation pipeline . . . . . . . . . 75
4.3.2 Synthesized beam . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.3.2.1 Synthesized beam approximate model . . . . . . . . . . . 80
4.3.3 Acquisition model . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.3.3.1 Acquisition model for a bolometric interferometer . . . . 81
4.3.3.2 QUBIC acquisition model . . . . . . . . . . . . . . . . . . 82
4.3.4 Map-making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.4.1 Monte-Carlo simulations . . . . . . . . . . . . . . . . . . . 86
4.3.5 QUBIC-Planck fusion acquisition . . . . . . . . . . . . . . . . . . . 88
4.3.6 Second-order features of the synthesized beam . . . . . . . . . . . . 91

4.3.7 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

5 Map-making in polychromatic case 97


5.0.1 Polychromatic synthesised beam . . . . . . . . . . . . . . . . . . . 97
5.0.1.1 How to model the wide frequency band? . . . . . . . . . . 98
5.0.2 Map-making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.0.2.1 Preconditioned conjugate gradient method . . . . . . . . 105
5.0.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

6 QUBIC as a spectro-polarimeter 114


6.1 Multifrequency map-making . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.2 Component separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.2.1 Dust emission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.3 How many sub-bands? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.3.1 Noise increase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.3.2 Component separation . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.4 Monte-Carlo simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.5 QUBIC multi band plus Planck acquisition model . . . . . . . . . . . . . . 123
6.6 Possible CMB space-borne instrument . . . . . . . . . . . . . . . . . . . . . 126
6.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

7 Spectra reconstruction 131


7.1 Spectra reconstruction problems . . . . . . . . . . . . . . . . . . . . . . . . 131
7.1.1 Noisy sky with realistic resolution . . . . . . . . . . . . . . . . . . . 132
7.1.1.1 Pixel and beam window functions . . . . . . . . . . . . . 133
7.1.2 Pseudo-spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.1.3 Leakage problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
7.1.4 Errorbars on the reconstructed spectra . . . . . . . . . . . . . . . . 139
7.2 Xpol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.3 Xpure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.4 Spice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.5 Choosing coverage threshold . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7.6 Choosing method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

8 Scanning strategy 148


8.1 Sensitivity of an imager . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.2 1/f noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.3 General approach to the scanning strategy and instrumental constraints . 151
8.3.1 Azimuthal and elevation rotations . . . . . . . . . . . . . . . . . . 151
8.3.2 Rotations around the optical axis of the instrument . . . . . . . . . 153
8.3.3 HWP rotation and dead time . . . . . . . . . . . . . . . . . . . . . 154
8.4 Sensitivity of a bolometric interferometer . . . . . . . . . . . . . . . . . . . 155
8.5 Scan of scanning strategy parameters . . . . . . . . . . . . . . . . . . . . . 158
8.6 Pointing accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

9 Sensitivity of QUBIC 169


9.1 Cosmological parameter estimation . . . . . . . . . . . . . . . . . . . . . . 169
9.1.1 Likelihood approach to the parameter estimation problem . . . . . 169
9.1.1.1 From CMB map to the cosmological parameters . . . . . 169
9.1.1.2 From C` to the cosmological parameters . . . . . . . . . . 170
9.2 Realistic Monte-Carlo of QUBIC . . . . . . . . . . . . . . . . . . . . . . . 171
9.2.1 Map-making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
9.2.2 Component separation . . . . . . . . . . . . . . . . . . . . . . . . . 172
9.2.3 Spectra reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . 172
9.2.4 Parameter estimation . . . . . . . . . . . . . . . . . . . . . . . . . 172

General conclusions 176


9.3 Physical problematic and QUBIC experiment . . . . . . . . . . . . . . . . 179
9.4 Overview of QUBIC data analysis pipeline . . . . . . . . . . . . . . . . . . 180
9.5 For future studies and development . . . . . . . . . . . . . . . . . . . . . . 181

Résumé général 182


9.6 Problématique physique et l’expérience QUBIC . . . . . . . . . . . . . . . 183
9.7 Vue d’ensemble du pipeline d’analyse de données QUBIC . . . . . . . . . . 184

A QUBIC data analysis package documentation 186

Bibliography 190
List of Figures

1.1 Anaximander’s model of the Universe with the flat cylindrical Earth in
the centre and several rings around it that contain the primordial fire inside. 3
1.2 That was a hot discussion! . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Velocity-Distance Relation among Extra-Galactic Nebulae. Radial veloc-
ities, corrected for solar motion, are plotted against distances estimated
from involved stars and mean luminosities of nebulae in a cluster. The
black discs and full line represent the solution for solar motion using the
nebulae individually; the circles and broken line represent the solution
combining the nebulae into groups; the cross represents the mean velocity
corresponding to the mean distance of 22 nebulae whose distances could
not be estimated individually. (Picture caption is cited from original paper
by Hubble [1]) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Illustration of horizon problem: today we observe the CMB, that contains
many causally disconnected regions. However, they all have the same
statistical properties, which is hardly possible to have by chance. . . . . . 12
1.5 Winery illustration for Sakharov conditions of matter-antimatter unbal-
anced Universe. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 The slow roll potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.7 Illustration for the evolution of density perturbations during inflation and
hot Big Bang. Blue line show the comoving scales, which remain constant.
Red line represents the comoving horizon, which shrinks during inflation
and increases at late time. . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8 Power spectrum of large-scale structures . . . . . . . . . . . . . . . . . . . 22
1.9 Constraints on the baryonic density [2]. The boxes are for the observa-
tions, for 3 He there is only upper limit. The vertical stripe fixes the
baryon density due to the measurement of deuterium. . . . . . . . . . . . 23

2.1 Spectrum of the metagalaxy in assumption of great quantity of neutrinos:


ρ̄ = ρcrit = 1.86 × 10−29 g/cm3 . The equation of state is P = ρ̄c2 /3. Two
different assumptions for the galactic spectrum are plotted: a) galaxies become
luminous at time when the mean distance between them l is 10 times
less than the present l0 , b) galaxies become luminous at the same time,
but more precise assumptions are made for the galaxy evolution. Plot c)
presents the equilibrium Planck radiation with T = 1K. Crosses denote
experimental points [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2 Uniform spectrum and fit to Planck black body. Uncertainties are a small
fraction of the line thickness [4]. . . . . . . . . . . . . . . . . . . . . . . . . 31


2.3 The illustration of how C` encodes the properties of CMB fluctuations.


On the right the C` spectra are shown. Each one is equal to the delta
function at ` = 1, 10 and 50. The corresponding maps are shown on the
right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4 The radio map of Relikt-1 experiment in ecliptic coordinates. The white
parts have zero statistical weight and correspond to the Galactic plane
and regions illuminated by the Earth and Moon. The rectangle shows the
observed anomalous dip in brightness temperature. . . . . . . . . . . . . . 35
2.5 The DMR maps on three frequencies: 31.5, 53 and 90 GHz after the dipole
anisotropy removing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.6 The angular power spectrum of CMB temperature anisotropies, measured
by the BOOMERanG experiment at 150 GHz. Color curves are for dif-
ferent cosmological models, see original article for explanations [5] . . . . . 38
2.7 CMB temperature map, measured by the Planck experiment. . . . . . . . 39
2.8 Measured angular power spectra of Planck, WMAP (9 years of operation.),
ACT, and SPT. The model plotted is Planck’s best-fit model including
Planck temperature, WMAP polarization, ACT, and SPT (the model is
labelled [Planck+WP+HighL] in [6]). Error bars include cosmic variance.
The horizontal axis is logarithmic up to ` = 50, and linear beyond [7]. . . 40
2.9 Approximate experimental sensitivities for CMB anisotropy observations
of stages I–IV [8]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.10 Upper third – polarisation for Q and U Stokes parameters. Below – typical
E- and B-mode polarisation patterns. . . . . . . . . . . . . . . . . . . . . 43
2.11 Polarization from Thomson scattering. . . . . . . . . . . . . . . . . . . . . 44
2.12 Polarization direction depends on the velocity gradient on the last scat-
tering surface and hence correlates with temperature fluctuations. Polar-
ization direction (shown in green thick line) is defined by the fact that
the fluid velocities (thin black arrows) are not isotropic in respect to the
scattering point (little black circle). The fluid motion from a hot spot to
a cold one on the left plot (or from a cold spot to a hot one on the right
plot) is shown with dashed arrows. . . . . . . . . . . . . . . . . . . . . . . 45
2.13 Left: BICEP2 apodized E-mode and B-mode maps filtered to 50 < ` <
120. Right: The equivalent maps for the first of the lensed-ΛCDM +
noise simulations. The color scale displays the E-mode scalar and B-mode
pseudoscalar patterns while the lines display the equivalent magnitude and
orientation of linear polarization. Note that excess B mode is detected
over lensing+noise with high signal-to-noise ratio in the map (s/n > 2
per map mode at ` ≈ 70). (Also note that the E-mode and B-mode maps
use different color and length scales.) Figure caption is cited from [9]. . . . 47
2.14 Current status of the BB power spectrum measurements from SPTpol,
ACTpol, BICEP2/Keck and POLARBEAR experiments. The solid gray
line shows the expected lensed BB spectrum from the Planck+lensing+WP+highL
best-fit model. The dotted line shows the nominal 150 GHz BB power
spectrum of Galactic dust emission. This model is derived from an anal-
ysis of polarized dust emission in the BICEP2/Keck field using Planck
data. The dash-dotted line shows the sum of the lensed BB power and
dust BB power [10]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

2.15 Planck 353 GHz channel D`BB power spectra (in µK²) computed on three


of the selected CMB regions that have the sky fraction fsky = 0.3 (circles,
lightest), fsky = 0.5 (diamonds, medium) and fsky = 0.7 (squares, dark-
est). The uncertainties shown are ±1σ. The best-fit power laws in ` are
displayed for each spectrum as a dashed line of the corresponding colour.
The corresponding r = 0.2 D`BB CMB models are displayed as solid black
lines. In the lower parts of each panel, the global estimates of the power
spectra of the systematic effects responsible for intensity-to-polarization
leakage are displayed in different shades of grey, with the same symbols to
identify the three regions. Finally, absolute values of the null-test spectra
are represented as dashed-dotted, dashed, and dotted grey lines for the
three regions. (Figure caption is cited from [11]) . . . . . . . . . . . . . . 49
2.16 Atmospheric transmission from the Atacama plateau at the zenith for
different amounts of precipitable water vapour. . . . . . . . . . . . . . . . 50
2.17 Frequency dependence of thermal and kinetic SZ effects. The thick line
shows the frequency dependence of ∆T /Tcmb from the thermal SZ effect,
the thin solid line shows the same for the change in spectral intensity
∆I(x). The thin dashed lines show the change in spectral intensity for
kinetic SZ effect, the upper one for an approaching source and the lower
one for a receding source. The vertical dotted line shows the scaled fre-
quency at which TSZ is zero and KSZ effect is maximum. Here frequency
x, I0 and Tcmb are all scaled to unity [12]. . . . . . . . . . . . . . . . . . . 51
2.18 The illustration of the integrated Sachs-Wolfe effect . . . . . . . . . . . . . 52

3.1 The QUBIC instrument sketch. See text for explanations. . . . . . . . . . 55


3.2 Formation of the QUBIC synthesised beam: (A) and (B) – map of horn
array with 1 horn open and the beam on the focal plane. Then, similarly,
(C) and (D) are map and interferometric pattern for 2 open horns; (E)
and (F) – 3 horns;(G) and (H) – 20 horns; (I) and (J) – 50 horns; (K) and
(L) – 100 horns; (M) and (N) – 200 horns; (O) and (P) – full horn array
(400 horns) is open. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.3 The 3-D model of the QUBIC instrument. . . . . . . . . . . . . . . . . . . 60
3.4 Mount system design of QUBIC with forebaffle. . . . . . . . . . . . . . . . 61
3.5 Shielding for QUBIC, consisting of the forebaffle and the ground shield. . . 61
3.6 Picture of the horn array, produced for the technological demonstrator of
QUBIC (left). Close picture of a horn cut (center). Mirror, produced for
the technological demonstrator (right). . . . . . . . . . . . . . . . . . . . . 64
3.7 Design of the 1024 bolometer array (left), one pixel of it (top right) and
the TES detector with its electrodes (bottom right). See text for the
explanations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.8 Transition edge for four detectors distributed far from each other on one
quarter of the focal plane. . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.9 Two SQUID boards stacked (left) to finally obtain a SQUID box com-
posed of 4 PCBs, and thus 128 SQUIDs (center). TES thermo-mechanical
structure showing the 2 SQUIDs boxes near the TES array. . . . . . . . . 66
3.10 Noise Equivalent Power at both sites Argentina and Antarctica, for fre-
quencies 150 and 220 GHz, as a function of month of the year. . . . . . . . 67

3.11 Maps of galactic dust emission, measured by Planck at 150 GHz. The D`BB
amplitudes at ` = 80 are plotted on the top and the associated uncertainty
σ(rdust ) on the bottom. BICEP2 deep-field region shown with the black
contour. Center of the QUBIC patch is shown with the black star. The
picture is from [11]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.12 QUBIC patch and PolarBear patches, overlaid on a full-sky 143 GHz in-
tensity map of Planck [7]. . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.13 Elevation of different fields above the horizon (dashed horizontal line) for
Puna site. Shaded regions show allowed ranges for elevation. The elevation
of the QUBIC patch as it is seen from Concordia station is shown for the
comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

4.1 Sketch of the QUBIC pipeline . . . . . . . . . . . . . . . . . . . . . . . . . 72


4.2 Diffraction of the beam of the green laser on the 2D diffractive grating.
Multi-peaked interferometry pattern resembles the synthesized beam of
QUBIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.3 Radial cut of the synthesized beam for two detectors on the focal plane:
one in the centre of the focal plane (blue) and one 50mm apart (green).
Modulating primary beam is shown with red dashed line. . . . . . . . . . 78
4.4 Reconstruction of the bolometric interferometer TOD with map-making
algorithm of an imager. From left to the right: input map of temperature
CMB anisotropies (simulation), convolved with the gaussian beam of 23.5′
width; reconstructed map; difference of the input and output maps. Units
of the color axis are µK. For the simulations we used the QUBIC simula-
tion pipeline with random pointing within a circle of radius 10◦ around the
north galactic pole, 1000 samples, temperature-only noiseless observations. 79
4.5 Radial cut of SB, the reference green line is due to interferometry and the
red line is approximation due to equation (4.20). Logarithmic scale on the
vertical axis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.6 Reconstruction of the bolometric interferometer TOD. From left to the
right: input map of temperature CMB anisotropies (simulation), con-
volved with the gaussian beam of 23.5′ width; reconstructed map; differ-
ence of the input and output maps. Units of the color axis are µK. We
use the same TOD, as was used for simulations shown on the figure 4.4.
Note that the observed field is larger than on the figure 4.4 because of the
side peaks of the synthesized beam. . . . . . . . . . . . . . . . . . . . . . . 86
4.7 Comparison of the power-spectra, reconstructed from realistic and pseudo
Monte-Carlo (here "full" means realistic and "fast" means pseudo Monte-
Carlo). The bias is shown with the solid line and the level of the errorbars
is shown with the dashed lines. The errors are built as a standard devia-
tion for 10 realizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.8 QUBIC-only simulation results: three columns are for I, Q and U Stokes
parameters from left to right respectively. From top to bottom there are:
input convolved maps, reconstructed maps and residual maps. Units on
the color axes are µK. Note high noise on the peripheral pixels. . . . . . . 90

4.9 QUBIC-Planck fusion simulation results: three columns are for I, Q and
U Stokes parameters from left to right respectively. From top to bottom
there are: input convolved maps, reconstructed maps and residual maps.
Units on the color axes are µK. Note much lower noise on the peripheral
pixels in comparison with simulations shown on the picture 4.8. Outside
the field of view of QUBIC there is just the Planck map. . . . . . . . . . . 91
4.10 QUBIC-Planck fusion simulation results: profiles of the residual maps
for three Stokes parameters I, Q and U . Blue is for the QUBIC-only
simulations, constant red line shows the noise level on the Planck maps,
green profile is for QUBIC-Planck fusion acquisition. . . . . . . . . . . . . 92
4.11 Zoom view of the synthesized beam. The rippled features around the peak
are evident. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.12 Beam window function for rippled peak and gaussian peak for QUBIC
synthesized beam at 150 GHz. . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.13 Correction of the rippled beam window function: left plot – deviation of
the modeled window function from the real one before correction, right
plot – after correction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.14 Realistic simulations of 1 month of QUBIC data reconstruction, monochro-
matic case. Three columns are for I, Q and U Stokes parameters from
left to right respectively. From top to bottom there are: input convolved
maps, reconstructed maps and residual maps. Units on the color axes are
µK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

5.1 Radial cut of the synthesized beam for two different frequencies. Pri-
mary beams are shown with dashed lines. Peaks widths and the distance
between peaks are highlighted. . . . . . . . . . . . . . . . . . . . . . . . . 99
5.2 Sum of two gaussians with FWHM = 1. One has mean 0 (blue line),
while the mean of another one is varied (shown in grey). For the second
one the mean is equal to some fraction of FWHM. The sum of the two
gaussian peaks is shown in red, the line styles repeat those of the grey lines. 100
5.3 Fit of the synthesized beam peak with a (sin(θ)/θ)² function. The synthesized
beam is taken from the pixelized map of the beam, its step-like structure is
from the map pixels. The parameter nside is 1024. Units of the horizontal
axis are degrees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.4 Sum of two rippled peaks with FWHM = 1. One has mean 0 (blue line),
while the mean of another one is varied (shown in grey). For the second
one the mean is equal to some fraction of FWHM. The sum of the two
peaks is shown in red, the line styles repeat those of the grey lines. . . . . 102
5.5 Left: interferometry synthesised beam for 15 frequencies at 150 GHz band
for one of the central detectors of QUBIC. Right: its approximation with
rippled peaks. The minor features on the right plot are numerical effects
of map-to-alm conversion and back. . . . . . . . . . . . . . . . . . . . . . . 102
5.6 Comparison of the realistic synthesised beam to the approximate one for
frequency band 150 GHz. . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.7 Comparison of the realistic synthesised beam to the approximate one for
frequency band 220 GHz. . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

5.8 Simulations of QUBIC-only map-making at 150 GHz band. Three columns


are for I, Q and U Stokes parameters from left to right respectively. From
top to bottom there are: input convolved maps, reconstructed maps and
residual maps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.9 Simulations of QUBIC-only map-making at 220 GHz band. Three columns
are for I, Q and U Stokes parameters from left to right respectively. From
top to bottom there are: input convolved maps, reconstructed maps and
residual maps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.10 Simulations of QUBIC-Planck fusion map-making at 150 GHz band. Three
columns are for I, Q and U Stokes parameters from left to right respec-
tively. From top to bottom there are: input convolved maps, reconstructed
maps and residual maps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.11 Simulations of QUBIC-Planck fusion map-making at 220 GHz band. Three
columns are for I, Q and U Stokes parameters from left to right respec-
tively. From top to bottom there are: input convolved maps, reconstructed
maps and residual maps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

6.1 Simulation of QUBIC observation of two monochromatic sources with fre-


quencies 140 and 159 GHz and its reconstruction as two separate maps. . 116
6.2 Microwave radiation in Mollweide projection for the Q component of po-
larization, simulation according the theoretical power spectra. Upper left
– clear CMB emission. Upper right – clear dust emission (note the differ-
ent color range). Lower left (right) – total Q signal in the bandwidth of
QUBIC 150 (220) GHz band. . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.3 Amount of noise on the reconstructed map as a function of number of
sub-bands Nb , normalized to the noise in case of only 1 band. Errorbars
present the standard deviation of the noise level between different sub-bands.
Smooth line represents the theoretical dependence √Nb . . . . . . . . . . . 121
6.4 At 150 GHz band the synthesized beam barely fits to the focal plane. This
explains poor frequency resolution for this band. . . . . . . . . . . . . . . 122
6.5 Standard deviation σres of the values on the residual maps of ILC as a
function of number of sub-bands Nb per band (that is the total number of
sub-bands is twice of it). We calculate σres for each of 20 realizations of
CMB and noise and then plot the average between all the σres with dots.
Errorbars show the standard deviation of σres values. . . . . . . . . . . . . 123
6.6 Reconstruction of multiple sub-bands within each of QUBIC wide bands.
Sub-band central frequencies are: [140.0, 158.8, 200.9, 218.5, 237.6] GHz,
they are plotted respectively on the sub-plots A, B, C, D and E. Input
convolved maps, output maps and their difference are plotted for each
frequency for I, Q and U Stokes parameters. . . . . . . . . . . . . . . . . 124
6.7 Reconstruction of CMB emission from 5 frequency bands of QUBIC, using
ILC component separation method. Input convolved maps, output maps
and their difference are plotted for I, Q and U Stokes parameters. . . . . 125
6.8 Reconstruction of multiple sub-bands within each of QUBIC wide bands,
using the fusion map-making. Sub-band central frequencies are: [140.0, 158.8, 200.9, 218.5, 237.6]
GHz, they are plotted respectively on the sub-plots A, B, C, D and E. In-
put convolved maps, output maps and their difference are plotted for each
frequency for I, Q and U Stokes parameters. . . . . . . . . . . . . . . . . 127

6.9 Reconstruction of CMB emission from 5 frequency bands of QUBIC, using


ILC component separation method. We use the fusion maps for the input
to the ILC, as shown on the figure 6.8. Input convolved maps, output
maps and their difference are plotted for I, Q and U Stokes parameters. . 128
6.10 Frequency resolution ∆ν/ν of a space-borne QUBIC-like instrument as a
function of frequency. At high frequencies the resolution is limited by the
finite detector size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

7.1 Pixel window functions for nside equal 64, 128 and 256. . . . . . . . . . . 134
7.2 Illustration of the noise bias for the pseudo-spectrum. Blue line is for
the theoretical spectrum. Green line with errorbars is obtained with the
pseudo-spectra of 100 simulations of full sky, according to the theoreti-
cal spectrum, plus gaussian white noise with standard deviation σnoise = σmap /2,
where σmap is the standard deviation of the sky temperature fluc-
tuations. Strong bias on the high multipoles is evident. . . . . . . . . . . . 137
7.3 Xpure errorbars of BB power spectrum for different apodization lengths
from 0 to 3◦ . Sample variance for r = 0.02 included. . . . . . . . . . . . . 143
7.4 C` calculated with Spice in 1298 BOOMERanG-like simulations and then
rebinned into flat C` bands with a width of 50. The small points show the
individual measurements, with the error bars representing the standard
deviations in each band. The theoretical error bars of equation 7.33 are
displayed and shifted to the right for clarity. The arrows point to the
effective beam and pixel scales [13]. . . . . . . . . . . . . . . . . . . . . . . 144
7.5 Reconstructed QUBIC power spectra. Green line – spectrum used as an
input for simulations, blue – Xpure reconstruction, red – Xpol, cyan –
Spice. (The BB spectrum is on the middle left plot). . . . . . . . . . . . . 146
7.6 Errorbars of reconstructed QUBIC power spectra. Line colors are the
same as on the figure 7.5. The BB spectrum errorbars are on the middle
left plot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

8.1 Elevation (top panel) and azimuth (bottom panel) for the QUBIC scan-
ning strategy with time period on constant elevation 2 hours, constant
angular speed 1◦ / s and dead time 5 sec. Dead time is shown as sections
of constant azimuth at both edges of each sweep. . . . . . . . . . . . . . . 152
8.2 QUBIC overlap, calculated for 100 randomly picked detectors of a focal
plane, considering observations from the Dome-C site. . . . . . . . . . . . 154
8.3 Sky fraction (top), apodization factor (middle) and σnoise (bottom) in
dependence on the scanning strategy, namely on the delta azimuth. . . . . 157
8.4 Study of dependence of ∆C`bi on the scanning strategy, precisely on the az-
imuth range, having all other parameters of scanning strategy unchanged.
Xpure and Spice lines are the standard deviations of reconstructed spectra
in the wide ` band from 50 to 150 for the corresponding spectra recon-
struction methods. The ∆C`bi line is calculated with the (8.7) formula. . . 158
8.5 Normalized power spectrum D`BB ≡ `(` + 1)C`BB /2π and its errorbars in the
region of the BB spectrum peak for different sample times (inverse of
sampling frequency). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.6 Apodization factor η for studied scanning strategies. . . . . . . . . . . . . 160
8.7 Fraction of sky for the coverage field fsky for studied scanning strategies. . 160
8.8 Noise variance σ²noise for studied scanning strategies. . . . . . . . . . . . . 161

8.9 BB power spectrum errorbars for studied scanning strategies due to the
formula 8.7 with C`BB = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8.10 BB power spectrum errorbars for studied scanning strategies due to the
formula 8.7 with C`BB corresponding to r = 0.02. . . . . . . . . . . . . . . 162
8.11 BB power spectrum errorbars for studied scanning strategies due to the
formula 8.7 with C`BB = 0. 1/f noise with νknee = 1Hz included. . . . . . 163
8.12 BB power spectrum errorbars for studied scanning strategies due to the
formula 8.7 with C`BB corresponding to r = 0.02. 1/f noise with νknee =
1Hz included. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.13 Quality of scanning strategy. Smaller values (bluer bins) are for better
scanning strategies. The plotted value is defined by (8.9), where ∆C`
values are taken from the figure 8.12. . . . . . . . . . . . . . . . . . . . . . 165
8.14 The standard deviation of residual map as a function of the error on the
pointing angles az(t) and el(t). Errorbars are from the different real-
izations of CMB and pointing errors (we use 10 realizations per point).
The vertical dashed lines show the planned level of pointing accuracy for
QUBIC: the right one at 2 arcminutes shows the mount system pointing
accuracy and the left one at 20 arcseconds shows the accuracy of the stel-
lar sensor. The horizontal dashed lines highlight the relative increase of
residuals at 20 arcseconds and 2 arcminutes. . . . . . . . . . . . . . . . . . 167
8.15 The BB spectrum mean value (line) and standard deviation for 10 real-
izations (errorbars) in the ` bin from 50 to 150 as a function of the error
on the pointing angles az(t) and el(t). Relative values with respect to the
first one are plotted. The vertical dashed lines show the planned level
of pointing accuracy for QUBIC: the right one at 2 arcminutes shows the
mount system pointing accuracy and the left one at 20 arcseconds shows
the accuracy of the stellar sensor. The horizontal dashed lines highlight
the relative increase of residuals at 20 arcseconds and 2 arcminutes. . . . . 168

9.1 Reconstruction of multiple sub-bands within each of QUBIC wide bands


in the realistic Monte-Carlo, using the optimal scanning strategy. Sub-
band central frequencies are: [140.0, 158.8, 200.9, 218.5, 237.6] GHz, they
are plotted respectively on the sub-plots A, B, C, D and E. Input convolved
maps, output maps and their difference are plotted for each frequency for
I, Q and U Stokes parameters. Note that the difference maps are built
on a smaller scale on color axis. . . . . . . . . . . . . . . . . . . . . . . . . 173
9.2 Repetition of Q maps from the figure 9.1 . . . . . . . . . . . . . . . . . . . 174
9.3 Reconstruction of CMB signal from the maps presented on the figure 9.1
by ILC method for component separation. Input convolved maps, output
maps and their difference are plotted for each frequency for I, Q and U
Stokes parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
9.4 Reconstruction of the power spectra. Errorbars are the standard deviation
defined from different realizations. Solid green curves are for the theoret-
ical spectra. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
9.5 Likelihood function for single parameter r for the reconstructed spectrum
9.4. The maximum of the likelihood is at r = 0.035, the sigma of the peak
is 0.012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

A.1 Sketch of the qubic package structure . . . . . . . . . . . . . . . . . . . . 187


List of Tables

2.1 Results of full Planck mission on the main cosmological parameters. . . . 39

3.1 Results of self-calibration simulations for the QUBIC instrument with 400
horns, 992 bolometers array, 1000 pointings and all baselines measure-
ments. The column "No Self. Cal." shows the values for standard devi-
ations between the ideal and corrupted parameters. Columns "1 day /
year" and "100 days / year" give the values of standard deviation on the
parameters after, respectively, 1 day per year spent for self-calibration and
100 days. The ratio subcolumns show the factor of reduction of the systematics
due to the self-calibration. . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 Signal passing through the half-wave plate and polarizing grid as a func-
tion of the half-wave plate rotation angle φ. . . . . . . . . . . . . . . . . . 63

8.1 Optimal scanning strategy parameters with allowed ranges to change with-
out spoiling too much the sensitivity. . . . . . . . . . . . . . . . . . . . . . 165

9.1 Summary of the main ground and balloon projects aiming at measuring
B modes [14]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

To my beloved wife Olga Stolpovskaya.

Chapter 1

Cosmology: brief introduction

In this chapter we briefly discuss modern cosmology from the historical, theoretical and
observational points of view. We introduce the Big Bang model and its problems, and we
discuss their possible solution: the theory of inflation.

1.1 Cosmology: historical view

From the philosophical point of view, science helps us to understand our role in the
World: who we are, and what our past and future are. All sciences try to approach these
questions from different sides and on various scales. While most other sciences, like
biology, geology and sociology, concentrate on studying the Universe at the scales of the
Earth and of life on Earth, physics takes an interest in all scales, from elementary particles
and their possible composition to the biggest structures of the Universe and their possible
extension beyond the cosmic horizon. Cosmology, as the field of physics that studies the
Universe in its wholeness, is probably the most philosophical science of all. Let us return
to the beginnings of astronomy and track the development of the modern cosmological
model of the Universe.

1.1.1 Archeoastronomy

Archeoastronomy is a domain of archeology that focuses on studying how people in the
past understood celestial phenomena. The most intriguing studies are dedicated
to prehistoric astronomy.


According to archeological studies, mankind started to perform simple astro-
nomical observations around 8,000 – 7,000 years B.C. [15] [16]. In most cases
archeologists find observational astronomical sites, astronomical images and
lunar calendars. However, there is also evidence that people were interested in astron-
omy as early as 20,000 years B.C. One example of such evidence is the wand made of
mammoth tusk found near the Siberian city of Achinsk [17]. It was found in 1972 during
archeological investigations of one of the oldest Siberian neolithic settlements. The wand
is covered with a spiral ornament of little holes. At first glance it is nothing but an
ornament, but careful counting of the holes reveals that:

• One can derive the number 29.53 from the number of holes in different groups, which is
equal to the number of days in the synodic month, the period between two new
moons.

• The numbers in the ornament can be split into three groups, whose numbers of holes
correspond to the number of days in the draconic year (the period after which
the Sun returns to the same point of the lunar orbit), the synodic year and the tropical year.

• The full number of holes in the ornament corresponds to the number of days in three
lunar years.

• There is a pattern in the ornament that suggests the wand could have been used as
a computational instrument.

These results could seem fantastical, considering the age of the find, if it were
a single occurrence. But there are more archeological discoveries in the same region that
contain the same kind of astronomical information. The Achinsk wand shows that since
the dawn of time mankind has had a scientific interest in the world it lives in.

1.1.2 Ancient Greece

The first attempts to build a cosmological model of the whole Universe are found in the
chronicles of court astronomers in China [18]. But the most interesting views are probably
to be found in the scientific poems On Nature of the Greek philosophers. These works are
especially interesting because they had a major influence on western civilisation for
many centuries.

Ancient Greek philosophers tried to reduce all the variety of the observable world to
a few primordial elements. This principle was later formulated by William of Ock-
ham, an English scholastic philosopher and theologian: "Among competing hypotheses,
the one with the fewest assumptions should be selected", which later became known as
Ockham's razor. This principle became one of the guiding principles in building theories
and was sometimes used to judge the validity of a theory.

Greek philosophers defined the following five primordial elements [18]. From the heaviest
to the lightest they are: earth, water, air, fire and ether. All five form layers around
each other such that the heaviest elements lie at the centre.

The very appearance of the Universe was sometimes associated with the idea of
primordial heat (in ancient Greece and India). A remarkable model was invented by Anax-
imander (VII century B.C.). He imagined the origin of the Universe as the result of the
overheating of a central core, an embryo, that broke apart into several rings ("cosmoses")
made of opaque matter and filled with celestial fire. The celestial bodies are holes in
these cosmos-rings, through which we are able to see the fire.

Figure 1.1: Anaximander’s model of the Universe with the flat cylindrical Earth in
the centre and several rings around it that contain the primordial fire inside.

Aristotle (384 – 322 B.C.) was the first to generalise all the knowledge about the
Universe of his time and to write down the first observationally supported physical
picture of the World. At the centre of the Universe he placed the spherical Earth. Around
it are the Sun, the Moon and the five planets known at that time (the only planets visible
to the naked eye): Mercury, Venus, Mars, Jupiter and Saturn. A sphere corresponds
to each of these bodies and rotates around the Earth. The farthest, eighth sphere, which
embraces all the other spheres, contains the stars. According to Aristotle, the celestial
spheres and bodies are made of ether, which has no mass and exists in eternal rotational motion.

1.1.3 More modern views

Unfortunately, we are not able to cover the whole history of cosmological views from
prehistoric times to the present day. One of the main steps in understanding the Universe
was the change from the geocentric system to the heliocentric one. This allowed realistic
estimates of the sizes of the Earth, the Sun and the Solar System, showing that the Earth is
much smaller than the Sun and negligibly small relative to the Earth-Sun distance.
Later it was understood that even the orbit of the Earth around the Sun is
relatively small on the scale of the Solar System.

Little by little, with the accumulation of knowledge about the Solar System, the interest of
scientists moved to the study of our galaxy. In the XVIII – XIX centuries scientists believed
that the Milky Way was itself the whole Universe [19]. That is why the efforts of astronomers
at that time were directed at studying the kinematics and composition of the galaxy. One of the
most active investigators of galactic structure was William Herschel (1732 – 1822).
One of his scientific achievements was building a model of our galaxy: he
imagined it as a lentil-shaped cloud of stars with the Sun at the centre.

By the end of the XIX and beginning of the XX century our galaxy had been studied in detail. The
galactic diameter was measured, and various types of stellar populations, star clusters and
nebulae were studied. The spectral classification of stars led to the Hertzsprung-Russell
diagram, which has a deep evolutionary meaning.

The question of the true size of the Universe was posed especially keenly at the beginning
of the twentieth century, when scientists started to think about the nature of the numerous
nebulae that could be seen in telescopes. In 1920 a discussion arose between two authoritative
American astronomers, Harlow Shapley and Heber Curtis, about
the nature of the nebulae. Shapley affirmed that all the nebulae were nothing but gas
formations situated in our galaxy. Meanwhile Curtis contended that many nebulae were
actually individual galaxies, containing billions of stars and situated far away from our
galaxy. According to Curtis, our world is a world of galaxies, and its size surpasses the
size of each individual galaxy by many orders of magnitude. Both scientists gave observational
and theoretical arguments for their views, but could not come to a conclusion.

We can consider Curtis’s point of view as a broadened Copernican principle: we should
never place ourselves at the centre of the macrocosm. Our planet is one of several rotating
around the Sun, and our Sun is an ordinary star in the Milky Way. Extending this logic,
we can expect that our galaxy is nothing but an ordinary galaxy among billions of galaxies
in the Universe. Some speculative theories even suggest that our Universe is one of many
universes. That is how our view was extended thanks to the work of Nicolaus Copernicus
(1473 – 1543), who for the first time placed the Sun rather than the Earth at the centre.

Figure 1.2: That was a hot discussion!

Speaking about the possible centre of the Universe, we could also remember the medieval
French philosopher Alain de Lille (1128 – 1202/1203), who said: "God is an intelligible
sphere whose centre is everywhere and whose circumference is nowhere." Besides its
intelligibility, it is also a very nice metaphor for the Universe. The Universe indeed has no
centre, or, in other words, has its centre everywhere. And its circumference is, probably,
nowhere. Later, considering the expanding Universe, we will see that this is actually
true.

1.1.4 Birth of relativistic cosmology

In 1916 Albert Einstein (1879 – 1955) published the general theory of relativity [20], and
a year later, in 1917, he published his first cosmological work [21], in which he developed his
model of a stationary Universe. At that time Einstein, like many others, believed that our
Universe was a cloud of billions of stars, the Milky Way, in a stationary state. Surprisingly,
the newly developed general theory of relativity did not allow a stationary solution. So he
had to introduce into the equations a new term, which he called the cosmological constant Λ.
Einstein’s Universe in his work of 1917 is eternal and at rest, without any evolution.
Its three-dimensional space is non-euclidean and is like a sphere (or, more precisely, a
hypersphere). Einstein thought that this space had to have a finite volume and be closed.
It seems that Einstein was not quite satisfied with his theory. At the end of his paper he
stated again that the cosmological constant is needed to allow a quasi-static distribution
of matter, corresponding to the small peculiar velocities of stars. But the nature of this
constant was not understood.

At the very heart of the theory of general relativity is the idea that the curvature of space
is related to the distribution of energy. It comes directly from the equivalence principle,
which states that a gravitational field is equivalent to acceleration: an observer cannot
distinguish a gravitational field (that produces a force pointed, say, downwards) from an
upward acceleration of the frame in which he is at rest. This relation
between curvature and energy is encoded in the Einstein equation:
Gµν ≡ Rµν − (1/2) gµν R = 8πG Tµν ,    (1.1)

where Gµν is the Einstein tensor; G is Newton’s constant and Tµν is the energy-momentum
tensor that describes the distribution of energy and mass in space-time. The Rµν term
is the Ricci tensor which depends on the metric:

Rµν = Γ^α_{µν,α} − Γ^α_{µα,ν} + Γ^α_{βα} Γ^β_{µν} − Γ^α_{βν} Γ^β_{µα} ,    (1.2)

where Γ is the Christoffel symbol, a comma denotes a derivative with respect to the
indicated component of x, and the usual convention of summation over repeated indices
is applied. Finally, R ≡ g^{µν} Rµν is the Ricci scalar.
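
For reference, the Christoffel symbols entering equation (1.2) are the standard metric
connection coefficients (this explicit expression is not written out in the text above, but
it is the usual definition):

Γ^α_{µν} = (1/2) g^{αβ} ( g_{βµ,ν} + g_{βν,µ} − g_{µν,β} ) ,

with the same comma notation for partial derivatives as above.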

So the left-hand side of equation (1.1) is a function of the metric, while the right-hand side
is a function of the energy and matter distribution. The Einstein equation relates the two
[22].

In 1922 A. Friedmann (1888 – 1925) introduced for the first time the possibility of a
cosmological expansion of the Universe. He considered the equations of general relativity
with the Λ-term and showed that they allow not only a static world, but also expanding
or contracting worlds. His conclusions were presented in two papers [23] and [24] (see also
his popular science book "The World as Space and Time" [25]). Describing the behaviour
of the world in time, he says that "The variable type of Universe gives us a big variety
of cases. It is possible that the radius of curvature of the Universe increases with time.
Or it is also possible that the curvature radius changes periodically: the Universe shrinks to
a point (to virtually nothing), then expands its radius to some value, then again reduces
its radius to a point, etc." [25].

1.1.5 Expanding Universe

In 1917 the Mount Wilson Observatory was equipped with the largest telescope of its time, with a main mirror 2.5 m in diameter, and Edwin Hubble (1889 – 1953) started to work there. Using the photographic method, in 1923 – 1924 he resolved, for the first time ever, three spiral nebulae into individual stars. Among the stars of the Andromeda nebula (M31) he found some variable stars – Cepheids. A strong direct relationship between a Cepheid variable's luminosity and pulsation period established Cepheids as important indicators of cosmic benchmarks for scaling galactic and extragalactic distances [26]. According to Hubble's estimate, the distance to M31 (the Andromeda nebula) was 9 × 10⁵ light years (according to modern data it is about 2.4 million light years). Thereby Hubble proved that the Andromeda nebula is actually situated outside the Milky Way and constitutes a giant star system, as big as our own galaxy. Thus, with the inauguration of a new telescope, the size of the known Universe was increased by orders of magnitude.

Later, in 1927 – 1929, Edwin Hubble discovered that the galaxies do not stay still, but move away from each other. Ten years before, in 1917, the American astronomer V. M. Slipher had written about the receding motion of the cosmic nebulae [27], [28] (the very year of Einstein's article on the stationary Universe [21]!). Slipher discovered that for 11 among the 15 nebulae he studied, the spectroscopic lines are shifted towards the red part of the spectrum, and that the fainter the nebula, the larger its redshift. This kind of redshift can be interpreted as a Doppler effect and points to the nebulae moving away from us. At that time neither the distances to the nebulae nor their nature were known, which is why Slipher did not give any cosmological interpretation of his results.

In 1927 E. Hubble, thanks to his studies, already knew that many nebulae observed with the telescope are distant galaxies. Moreover, by observing Cepheids he determined the distances to many of those galaxies. Using spectroscopic data, he deduced the dependence of the receding speed of galaxies on the distance to them, see figure 1.3. Thus he derived the famous law [1] that bears his name: the receding speed of a distant galaxy is proportional to its distance:

v = HR    (1.3)

The modern value of the Hubble constant is ∼70 km s⁻¹ Mpc⁻¹; for example, a galaxy at a distance of 100 Mpc recedes at roughly 7000 km/s. Thus Hubble empirically proved that our Universe expands and gave a numerical characteristic of this expansion: the speed of expansion is proportional to the distance to the galaxy. Exactly this type of expansion was predicted by Friedmann's cosmological theory.

Assuming an isotropic and homogeneous Universe, one can derive the Friedmann equations from the 00-component of the Einstein equation:

H² = (ȧ/a)² = (8πG/3) ρ + Λ/3 − κ/a²,    (1.4)

Figure 1.3: Velocity–Distance Relation among Extra-Galactic Nebulae. Radial velocities, corrected for solar motion, are plotted against distances estimated from involved stars and mean luminosities of nebulae in a cluster. The black discs and full line represent the solution for solar motion using the nebulae individually; the circles and broken line represent the solution combining the nebulae into groups; the cross represents the mean velocity corresponding to the mean distance of 22 nebulae whose distances could not be estimated individually. (Caption cited from the original paper by Hubble [1].)

ä/a = −(4πG/3c²)(ρ + 3p) + Λ/3.    (1.5)

Here H is the Hubble constant from (1.3), a is the scale factor that grows with the expansion of the Universe, ρ is the energy density and κ is the geometry constant, equal to +1 if the Universe has a closed (spherical) geometry, 0 if the geometry of the Universe is Euclidean, and −1 if it is hyperbolic. The Friedmann equations describe the evolution of the Universe in the homogeneous and isotropic case, which is a good approximation on large scales.
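
To make the content of equation (1.4) concrete, the following minimal sketch (in Python, assuming only numpy and scipy; the parameter values are illustrative, not those adopted elsewhere in this work) integrates the first Friedmann equation for a flat Universe containing matter and a cosmological constant and recovers the age of such a model:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative flat-Universe parameters (Omega_m + Omega_Lambda = 1, so kappa = 0)
    H0 = 70.0 / 3.086e19          # Hubble constant: 70 km/s/Mpc expressed in 1/s
    Omega_m, Omega_L = 0.31, 0.69

    def adot(t, a):
        # da/dt = a H(a), with H(a) from the first Friedmann equation (1.4)
        return [a[0] * H0 * np.sqrt(Omega_m / a[0]**3 + Omega_L)]

    def reach_today(t, a):        # event: scale factor reaches a = 1 (today)
        return a[0] - 1.0

    sol = solve_ivp(adot, (0.0, 20e9 * 3.156e7), [1e-3],
                    events=reach_today, rtol=1e-8)
    t_today = sol.t_events[0][0]
    print("Age of this model Universe: %.1f Gyr" % (t_today / 3.156e7 / 1e9))  # ~13 Gyr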

Independently of Friedmann, in 1927 the Belgian astronomer G. Lemaître (1894 – 1966), having learned about Slipher's and Hubble's results, gave his own explanation of the global expansion of the Universe [29]. He built a model of the change of the spatial curvature radius with time and considered the evolution of perturbations. He was in fact the first to write down Hubble's law (1.3), and he also made the first estimate of the Hubble constant. He proposed an interesting idea: since the Universe expands now, maybe earlier it was just point-sized. He called this the "hypothesis of the primeval atom", or the "Cosmic Egg".

Later this model of the hot Universe was ironically dubbed the Big Bang model by Fred Hoyle, who was an opponent of the idea.

It is important to mention a common misunderstanding about the Big Bang. Very often people think of it as an actual explosion that happened at time 0, when all distances were 0 (the Universe was virtually a point) and the density was infinite. In fact, we know nothing about the Universe before the end of the Planck epoch. Let us consider this a bit more precisely, as it is of crucial importance for all of cosmology.

The Planck epoch is the earliest epoch in the history of the Universe that we can describe
with our theories. It is characterised by the Planck mass density

ρ_Pl ≡ c⁵/(ℏG²) ≈ 5.15 × 10⁹⁶ kg/m³,    (1.6)
which is the value with units of mass density obtained by dimensional analysis from the three fundamental constants of Nature: the gravitational constant G, the special-relativistic constant c, and the quantum constant ℏ. When the density is higher than the Planck density, that is before the end of the Planck epoch, the processes in the Universe are ruled by the laws of quantum gravity. So far we do not have any quantum gravity theory that is not self-contradictory. Even if we had such a theory, it would not yet be possible to falsify it, as the quantum gravity regime corresponds to very high energies, much higher than accessible to modern experiments:

E_Pl ≡ √(ℏc⁵/G) ≈ 1.956 × 10⁹ J ≈ 1.22 × 10²⁸ eV ≈ 0.5433 MWh    (1.7)

This energy, called the Planck energy – the energy scale one obtains by dimensional analysis of the fundamental constants – is the scale of quantum gravity. Currently we are absolutely unable to study the Universe before the end of the Planck epoch, which took place before the time 10⁻⁴³ seconds (the Planck time):

t_Pl ≡ √(ℏG/c⁵) ≈ 5.391 06(32) × 10⁻⁴⁴ s    (1.8)
The Big Bang is the model of expansion of the Universe from some hot and dense state
after the Planck era.
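
As a quick numerical cross-check of the Planck-scale values quoted above, the following short Python sketch (assuming scipy is available) recomputes them from the fundamental constants:

    from scipy.constants import c, G, hbar, e

    rho_pl = c**5 / (hbar * G**2)            # Planck mass density, eq. (1.6)
    E_pl   = (hbar * c**5 / G) ** 0.5        # Planck energy, eq. (1.7)
    t_pl   = (hbar * G / c**5) ** 0.5        # Planck time, eq. (1.8)

    print("rho_Pl = %.3g kg/m^3" % rho_pl)   # ~5.2e96 kg/m^3
    print("E_Pl   = %.4g J = %.3g eV = %.4g MWh" % (E_pl, E_pl / e, E_pl / 3.6e9))
    print("t_Pl   = %.4g s" % t_pl)          # ~5.39e-44 s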

1.1.6 Big Bang model problems

The Big Bang model is very successful and explains many aspects of the observable Universe. However, it has some issues, which we briefly discuss below.

1.1.6.1 Flatness of the Universe

The total energy density defines the geometry of space. If the total energy density of the
Universe is equal to the critical density

ρ_crit ≡ 3H²/(8πG) ≈ 10⁻²⁶ kg m⁻³,    (1.9)

then the Universe has flat Euclidean geometry. If the density is higher than the critical value, the Universe is closed. On the contrary, if the density is lower than the critical value, then the Universe is open. Today the Universe seems to be flat, as the measured value of the total density is very close to the critical one. We can introduce the density parameter Ω, which is the ratio of the density to the critical density of the Universe:

Ω ≡ ρ/ρ_cr = 8πGρ/(3H²) = κ/(a²H²) + 1,    (1.10)

where the last equality follows from equation (1.4) with zero cosmological constant. As we see, this density parameter Ω defines the geometry of the Universe: Ω − 1 = κ/(a²H²). It is convenient to introduce another parameter Ωk ≡ Ω − 1, which is just the deviation of the total energy density from the critical value. In the radiation-dominated epoch the scale factor depends on time as a ∝ t^(1/2), its first time derivative as ȧ ∝ t^(−1/2), and

|Ωk| ∝ t ∝ a².    (1.11)

Similarly, for the matter-dominated epoch we find a ∝ t^(2/3) and ȧ ∝ t^(−1/3), so

|Ωk| ∝ t^(2/3) ∝ a.    (1.12)

Thus we find that the value of |Ωk| always increases with time. Today we measure |Ωk,0| ∼ 6 × 10⁻³. Applying the time-dependence laws of equations (1.11) and (1.12), one finds that at the Planck time t_Pl ∼ 10⁻⁴³ s the value of Ωk must have been about 8 × 10⁻⁶², which means that our Universe looks as if it were fine-tuned (the "fine tuning" problem): the initial value of the energy density at the Planck time must be so precisely tuned to the critical value that it is hard to believe it happened by chance. This is called the flatness problem of the Big Bang model.
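
The order of magnitude of this tuning can be checked with a few lines of Python. This is only a rough sketch: it uses the scalings (1.11) and (1.12) alone, and the scale factors at matter–radiation equality and at the Planck time are indicative values chosen for illustration:

    # Rough backward scaling of |Omega_k|, using eqs. (1.11) and (1.12) only.
    omega_k_today = 6e-3        # measured value quoted in the text
    a_eq = 1.0 / 3400.0         # indicative scale factor at matter-radiation equality (a_0 = 1)
    a_pl = 2e-32                # very rough scale factor at the Planck time

    omega_k_eq = omega_k_today * a_eq               # matter era: |Omega_k| ~ a
    omega_k_pl = omega_k_eq * (a_pl / a_eq) ** 2    # radiation era: |Omega_k| ~ a^2

    print("|Omega_k| at equality   : %.1e" % omega_k_eq)
    print("|Omega_k| at Planck time: %.1e" % omega_k_pl)   # vanishingly small, of order 1e-62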

1.1.6.2 Horizon problem

One can define the cosmological horizon: it is the maximal distance at which two objects may have influenced each other since the Big Bang. For a photon, the comoving distance covered in a time dt in a flat Universe is

dr = dt/a(t).    (1.13)

Integrating this equation, one gets the horizon size for a photon emitted at the Planck time, in terms of comoving distance:

λ_hor(t₀) = ∫_{t_Pl}^{t₀} dt′/a(t′) = ∫_{a(t_Pl)}^{a(t₀)} d ln a / (aH).    (1.14)

The physical size, at some moment t, of the region that is observable today (informally introduced at the beginning of this section) is

l(t₀, t) = a(t) λ_hor(t₀).    (1.15)

Figure 1.4 illustrates the horizon problem. The region we observe today on the CMB is almost as big as today's horizon, so the CMB contains many causally disconnected regions: at the moment of recombination these regions could not have exchanged any information since the Big Bang. The angular scale of causally connected regions on the CMB today is about 1.1°, and yet the statistical properties of the CMB (as we will see in more detail below) are almost the same on scales larger than one degree. This raises a question: either the fluctuations in the early Universe evolved in exactly the same way in all the different parts of the Universe, which looks supernatural to physicists, or there is some better explanation that puts all the different patches of the CMB into causal contact.

Figure 1.4: Illustration of the horizon problem: today we observe the CMB, which contains many causally disconnected regions. Yet they all have the same statistical properties, which is hardly possible by chance.

1.1.6.3 Matter–antimatter asymmetry

In 1928 Paul Dirac predicted the existence of anti-particles, which have exactly the same properties as their partners but opposite electric charge. Soon these particles were found experimentally. The symmetry between particles and anti-particles is called C-symmetry [30]. As most of the mass of ordinary matter consists of protons and neutrons, ordinary matter is often called baryonic matter. Baryons are particles made of quarks; the proton and the neutron have baryonic charge B = 1. Because baryon number is conserved, protons, the lightest baryons, cannot decay, and thus the B-charge of a sample of particles is conserved. Anti-protons and anti-neutrons, together with anti-electrons (positrons), can form anti-matter. When matter and anti-matter meet they annihilate, releasing an energy equal to the rest-mass energy of the annihilated particles.

It is an experimental fact that we do not observe anti-baryons in the Universe. If there were regions dominated by anti-matter, for example a galaxy made of anti-matter, it would appear very bright on the sky because of the annihilation of intergalactic matter with the anti-matter of that hypothetical galaxy. The abundance of anti-particles in the Universe is very small. This poses a question: how could the observed matter–antimatter asymmetry arise?

For the Big Bang theory the observed B-asymmetry of the Universe is a serious problem, since the primordial singularity, according to the model, must be perfectly neutral, including a perfect symmetry between matter and anti-matter. In 1967 A. Sakharov formulated the list of necessary conditions for creating a C-asymmetric Universe [31]:

• Baryon number violation (B-violation),

• C- and CP-violation,

• State out of thermal equilibrium.

Let us illustrate these conditions with one nice example (see figure 1.5). Ordinary matter will be illustrated by white grapes and white wine; correspondingly, antimatter is illustrated by red grapes and red wine. White and red wines should never be mixed in one glass – ask any Frenchman if you doubt it – just as matter and anti-matter should not be mixed, otherwise they explode.

You may know that it is possible to produce white wine from red grapes – red grapes usually have white juice, and it is the skin that gives the colour. This illustrates B-symmetry violation: correspondingly, there must be a process that produces matter from antimatter. In a perfectly B-symmetric world this never happens, as the B-charge is exactly conserved. So far we have not observed any hint of B-violation; it is one of the most important open questions of modern physics.

C-conjugation changes matter into antimatter. It is equivalent to reflecting figure 1.5 about the vertical axis. It means that, with perfect C-symmetry, the B-violating process that produced matter from antimatter would be counterbalanced by the C-symmetric process of antimatter production from matter. We need C-violation to prohibit this. This is exactly what happens with wine: the fact that we cannot produce red wine from white grapes is the analogue of the C-asymmetric picture.

To illustrate the necessity of CP-violation, let us add white and red plums to the illustration. P-symmetry describes the symmetry of our world under spatial inversion; P-conjugated matter remains matter, and the same holds for antimatter. In our illustration, P-conjugation transforms grapes into plums, conserving their colour. But in a perfectly CP-symmetric world, if we allow the production of white wine from red grapes, then, since CP-conjugation turns red grapes into white plums, the production of white grape wine is mirrored by the production of red plum wine from white plums. Thus, in a CP-symmetric world, the overproduction of white grape wine would be counterbalanced by the production of red plum wine. To prevent this, we need CP-symmetry violation; similarly for matter and antimatter. The violation of C- and CP-symmetries are experimentally observed phenomena [32–34].

Figure 1.5: Winery illustration of the Sakharov conditions for a matter–antimatter unbalanced Universe.

Finally, the thermal non-equilibrium: imagine we have a machine that produces wine. This machine can produce white or red wine, depending on the colour of the fruit we load into it. At the beginning, we load the machine with equal numbers of white and red grapes and plums. Logically, at the output we get more white wine. The white wine we bottle, while there is not enough red wine to fill even a single bottle, so we just pour it out. Our machine works only in one direction: it produces wine from fruit and not the other way round; otherwise we could load it with equal amounts of white and red wine and produce more red fruit than white. Similarly, our Universe is primordially "loaded" with equal numbers of particles and anti-particles, and with all the processes described above it produces more matter than antimatter. But if the Universe were in thermal equilibrium, the matter-producing processes would be counterbalanced by the time-reversed processes, and we would end up with a neutral Universe. The out-of-equilibrium state is guaranteed by the Big Bang model.

Nowadays we have reached energies of order 10¹³ eV at the LHC accelerator [35], which corresponds to the energies of the quark epoch of the early Universe, when the quarks were not yet bound into hadrons. The physics we study at the LHC is well described by the Standard Model of particle physics (SM), which does not include an explanation of baryogenesis. The B-asymmetry problem of cosmology remains unsolved; probably, the solution should be sought in earlier epochs.

1.1.6.4 Magnetic monopoles

The Grand Unified Theory (GUT) is a theory that predicts the merging of the gauge interactions (strong and electroweak) into one single force. A simple motivation for this theory is the coincidence of the absolute values of the electron and proton electric charges, which is not explained by the SM. If GUT is actually correct, then probably the early Universe went through an epoch of grand unification. GUT predicts the existence of magnetic monopoles – elementary particles with magnetic charge [36] – which appear as topological defects in the early Universe. These monopoles are stable, so they should have survived until the present day. The appearance of magnetic monopoles is a causal process, that is, the distance between two neighbouring monopoles cannot be larger than the horizon at the epoch of grand unification. Today the horizon is much larger than it was then, by the same reasoning as described in section 1.1.6.2. But no observations prove the existence of magnetic monopoles.

1.1.7 Inflation

When we considered the horizon and flatness problems of the Big Bang model, the source of both problems was the fact that the product a(t)H(t) decreases quickly with time in the hot expanding Universe. The main idea of inflation as a way to solve these problems is to introduce an epoch in the early Universe when a(t)H(t) is a fast-growing function of time. Note that aH = ȧ, so the increase of a(t)H(t) means positive ä; in other words, the Universe should expand with acceleration [37].

In 2002, the fathers of the theory of inflation: Alan Guth, Andrei Linde and Paul Stein-
hardt, were awarded the Dirac Prize "for development of the concept of inflation in
cosmology".

Let us see how accelerated expansion can solve the problems of the Big Bang model listed above. First consider the size of the horizon by the end of the inflationary stage. If inflation lasts from about t_Pl until t_end, the present size of the region under the horizon at the end of inflation is

a₀ λ(t_end) = a₀ ∫_{t_Pl}^{t_end} d log a / (aH).    (1.16)

Since aH grows fast with time, this integral is dominated by its lower limit, and assuming H ≈ const,

a₀ λ(t_end) ≈ a₀ / (a(t_Pl) H(t_Pl)).    (1.17)

This value does not depend on t_end, which means it is larger than, or of the same order as, the observable Universe today. In other words, the inflationary epoch puts the whole observed Universe in causal contact. The ratio of the size in equation (1.17) to the size of the observable Universe today is

a₀λ(t_end) / a₀λ(t₀) ≈ a(t₀)H(t₀) / (a(t_Pl)H(t_Pl)) ≳ 1.    (1.18)

Note that to solve the flatness problem we need

Ωk(t_Pl) / Ωk(t₀) = a²(t₀)H²(t₀) / (a²(t_Pl)H²(t_Pl)) ≳ 1,    (1.19)

which is satisfied by equation (1.18). Thus the high expansion rate during the inflationary epoch solves both the horizon and the flatness problems of the Big Bang model. Roughly speaking, any curvature present in the early Universe at the end of the Planck era would be blown away during the inflationary stage.

Inflation could also be a solution to the baryon asymmetry and magnetic monopole problems. Tiny perturbations during inflation could create small volumes of baryon-asymmetric and monopole-free regions. These small regions expanded very fast to huge volumes, and one of them became our observed Universe.

Considering the second Friedmann equation (1.5) and requiring positive acceleration ä, we obtain p < −ρ/3; that is, inflation requires negative pressure. Let us assume that the inflationary potential V depends only on one homogeneous, time-dependent scalar field φ, called the inflaton. There are other models of inflation, but here we consider only the most basic one. The density and pressure in terms of the potential V(φ) are:

ρ = ½ φ̇² + V(φ),    (1.20)

p = ½ φ̇² − V(φ).    (1.21)

If

½ φ̇² ≪ V(φ),    (1.22)

then the requirement of negative pressure is fulfilled.

If the kinetic term ½φ̇² of equations (1.20, 1.21) is exactly zero, inflation lasts forever. To exit the inflationary epoch one needs a non-zero kinetic term, but to have inflation last long enough we also need the time derivative of the kinetic term to be small. This can be satisfied with an almost flat potential, on which the field φ rolls slowly; the potential should also have a minimum where inflation stops. This is called the slow-roll approximation and it is the simplest model of inflation, see figure 1.6. Considering the Klein–Gordon equation

φ̈ + 3H φ̇ = −dV/dφ,    (1.23)

we can write the requirement of a small derivative of the kinetic term of (1.20, 1.21) as:

φ̈ ≪ 3H φ̇.    (1.24)

The Friedmann equation of an expanding scalar field, ignoring the curvature and Λ terms, is:

H² = (8π / 3m²_Pl) [ V(φ) + ½ φ̇² ].    (1.25)

Together with the Klein–Gordon equation (1.23), it constitutes the system of equations of motion. Taking into account the requirements (1.22, 1.24), the equations of motion become

H² = (8πG/3) V(φ),    (1.26)

φ̇ = −(1/3H) dV(φ)/dφ.    (1.27)
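
As an illustration of these equations of motion, the following sketch (in Python, assuming numpy and scipy; the quadratic potential V = ½m²φ² and all numerical values are illustrative, in units where m_Pl = 1) integrates the exact system (1.23) + (1.25) and counts the e-folds produced before slow roll breaks down:

    import numpy as np
    from scipy.integrate import solve_ivp

    m = 1e-6                                   # illustrative inflaton mass, Planck units
    V  = lambda phi: 0.5 * m**2 * phi**2       # quadratic potential (an assumption)
    dV = lambda phi: m**2 * phi

    def hubble(phi, phidot):
        # Friedmann equation (1.25) with m_Pl = 1
        return np.sqrt(8 * np.pi / 3 * (V(phi) + 0.5 * phidot**2))

    def rhs(t, y):
        phi, phidot, N = y                     # N = number of e-folds, dN/dt = H
        H = hubble(phi, phidot)
        return [phidot, -3 * H * phidot - dV(phi), H]   # Klein-Gordon eq. (1.23)

    # Start deep in the slow-roll regime: phi_0 = 16 m_Pl, phidot from eq. (1.27)
    phi0 = 16.0
    phidot0 = -dV(phi0) / (3 * hubble(phi0, 0.0))

    # Stop when the Hubble slow-roll parameter -Hdot/H^2 = 4 pi (phidot/H)^2 reaches 1
    def end_of_inflation(t, y):
        return 4 * np.pi * (y[1] / hubble(y[0], y[1]))**2 - 1.0
    end_of_inflation.terminal = True

    sol = solve_ivp(rhs, (0, 1e9), [phi0, phidot0, 0.0],
                    events=end_of_inflation, rtol=1e-8, max_step=1e5)
    print("e-folds of inflation: %.0f" % sol.y[2, -1])   # ~2*pi*phi0^2 ≈ 1600 here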

We can define the inflationary slow-roll parameter for the slope of the potential,

ε(φ) ≡ (m²_Pl / 16π) (1/V · dV(φ)/dφ)²,    (1.28)

and for its curvature,

η(φ) ≡ (m²_Pl / 8π) (1/V) d²V(φ)/dφ²,    (1.29)

where m_Pl = √(ℏc/G) is the Planck mass. Then the necessary conditions (1.22, 1.24) for the slow-roll approximation are:

ε ≪ 1,  |η| ≪ 1.    (1.30)

Figure 1.6: The slow roll potential



The inflationary epoch can also seed the inhomogeneities of the Universe. If we add a space-dependent term to the inflaton, such that

φ(~x, t) = φ^(0)(t) + δφ^(1)(~x, t),    (1.31)

where the homogeneous term φ^(0)(t) is what we discussed above, then the perturbative term δφ^(1)(~x, t) generates the fluctuations that later grow into all the structures of the present Universe. These quantum fluctuations perturb both the matter distribution and the space-time metric: the former arises from the scalar perturbations, while the latter also carries tensor perturbations. The scalar perturbations, coupled to the density of radiation and matter, make the Universe inhomogeneous, while the tensor fluctuations produce the primordial gravitational waves. These gravitational waves were significant in the early Universe, while today, due to the redshift, they are hardly detectable directly. But they leave a specific imprint on the CMB polarisation as B-modes, which we will discuss below [38].

The spectra of the scalar and tensor inflationary perturbations are:

P_S(k) = A²_S k^(n_S − 1),    (1.32)

P_T(k) = A²_T k^(n_T),    (1.33)

where n_{S,T} are the spectral indices and A_{S,T} the amplitudes of the fluctuations.

As shown in [39],

P_S(k = aH) = (1 / 24π²) V / (m⁴_Pl ε),    (1.34)

P_T(k = aH) = (2 / 3π²) V / m⁴_Pl.    (1.35)

One can define the tensor-to-scalar ratio r as

r = A²_T / A²_S.    (1.36)

From (1.34, 1.35) we get:

r = 16ε.    (1.37)

The tensor-to-scalar ratio is an experimentally measurable quantity. Its detection is one of the major challenges of modern cosmology, as it would provide important insights into the physics of inflation.

1.1.7.1 Observational hints in favor of inflation

Although we do not yet have observations of the tensor perturbations of the early Universe, there are some observational facts that can be considered as strong arguments in favor of inflation. Note that none of these facts were known when the inflationary model was first proposed in 1981 [40].

From the slow-roll model one can make definite predictions for the spectrum of primordial scalar perturbations P_S(k) [41]. To first order, the slow-roll parameters ε and η define the scale-dependence parameter as

n_s − 1 = 2η − 6ε.    (1.38)
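
As a concrete worked example (a sketch only, assuming sympy is available; the quadratic potential V = ½m²φ² and the choice N = 60 e-folds are illustrative, and the standard slow-roll relation N ≃ (8π/m²_Pl) ∫ (V/V′) dφ is used, which is not derived in the text), one can evaluate equations (1.28), (1.29), (1.37) and (1.38):

    import sympy as sp

    phi, m, mpl = sp.symbols('phi m m_Pl', positive=True)
    V = sp.Rational(1, 2) * m**2 * phi**2          # illustrative quadratic potential

    eps = mpl**2 / (16 * sp.pi) * (sp.diff(V, phi) / V)**2    # eq. (1.28)
    eta = mpl**2 / (8 * sp.pi) * sp.diff(V, phi, 2) / V       # eq. (1.29)

    # Field value N e-folds before the end of inflation, from N = (8 pi / m_Pl^2) Int V/V' dphi
    N = 60
    phi_N = sp.sqrt(N * mpl**2 / (2 * sp.pi))

    ns = sp.simplify((1 + 2 * eta - 6 * eps).subs(phi, phi_N))   # eq. (1.38)
    r  = sp.simplify((16 * eps).subs(phi, phi_N))                # eq. (1.37)
    print(float(ns), float(r))    # about 0.967 and 0.13 for this potential

The resulting n_s ≈ 0.967 happens to agree with the measured value quoted below, while r ≈ 0.13 for this particular potential is disfavoured by current upper limits on r; different potentials give different predictions.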

Figure 1.7: Illustration of the evolution of density perturbations during inflation and the hot Big Bang. The blue line shows the comoving scales, which remain constant. The red line represents the comoving horizon, which shrinks during inflation and increases at late times.

The creation and evolution of perturbations in the inflationary Universe are illustrated in figure 1.7. Perturbations originate from quantum fluctuations and are created only on sub-horizon scales. Thus when the perturbations exit the horizon during inflation, they remain frozen until they re-enter the horizon at late times. One can therefore deduce the power spectrum of perturbations from the inflationary potential, because the potential defines the way the perturbations exit the horizon. Slow-roll inflation predicts n_s to be slightly less than 1. The COBE experiment and later the Planck mission measured the n_s parameter; its current value is 0.9667 ± 0.0040, in excellent agreement with the inflationary prediction.

That is probably the strongest reason why inflation is the leading paradigm today, but it is not the only one. We said that one of the motivations for inflation was the observed flatness of the Universe. In fact, physicists in the late 70's and early 80's did not yet know whether the Universe is flat or not. It seemed flat, that is true; however, the total observed matter in the Universe gave a density about 4 times lower than the critical density. Only with the discovery of dark energy (see section 1.2.4) was it finally understood that the Universe is indeed flat, as predicted by the inflationary model. As we already said, the trace of the tensor perturbations has not been found yet; however, even without it, inflation appears to be a rather successful theory.

1.2 Observations in cosmology

In this section we briefly discuss observations in cosmology. One can roughly classify cosmological observations into the purely cosmological ones, i.e. those dedicated to measuring the properties of the Universe as a whole, and complementary studies of processes closely related to cosmology. The "purely cosmological" studies comprise cosmic microwave background observations, large-scale structure (LSS) surveys, measurements of light-element and elementary-particle abundances, and dark matter and dark energy studies. Cosmologists are also interested in the results of accelerator-based studies of early-Universe physics, among others. The cosmic microwave background, being the subject of this work, deserves a separate chapter; the other subjects are briefly covered below.

1.2.1 Large-scale structures

The basic cosmological principle says that on large scales the Universe is homogeneous and isotropic. In fact, it is nothing but an extreme extension of the Copernican principle: if the Universe were not homogeneous and isotropic on large scales, the place we live in could be privileged, whereas Copernicus taught that the Earth occupies no privileged place in the Universe.

However, our Universe is homogeneous only on average, on very large scales L ≳ 200 Mpc [42]. On smaller scales a hierarchical structure becomes evident: planets and stars, star clusters (L ∼ 1 pc), galaxies (L ∼ 10 ÷ 100 kpc), clusters of galaxies (L ∼ 10 Mpc) and superclusters of galaxies (L ∼ 100 Mpc). The latter form the so-called cosmic web of clusters, filaments and voids.

This hierarchical structure was produced by the gravitational instability of small initial density perturbations. They grew from physical processes at the early inflationary stage, and thus the large-scale structure of the Universe is bound to the physics of elementary particles. Large-scale structure is often studied using the two-point correlation function ⟨δ(x₁)δ(x₂)⟩, whose Fourier transform is the power spectrum P(k):

⟨δ(x₁)δ(x₂)⟩ = ∫ d³k/(2π)³ e^{ik(x₁−x₂)} P(k).    (1.39)

The physics of the large-scale distribution is described by the Boltzmann equation, which depends on Ω_m, Ω_DE and the other cosmological parameters. Thus by measuring the power spectrum we can constrain those parameters [43]. The measurements of P(k) are summarised in figure 1.8.
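
For illustration, a theoretical P(k) of the kind shown in figure 1.8 can be computed with a Boltzmann code; the following minimal sketch assumes the Python camb package (the parameter values are illustrative, not those used elsewhere in this work):

    import camb

    # Illustrative LambdaCDM parameters
    pars = camb.set_params(H0=67.7, ombh2=0.0223, omch2=0.119, ns=0.967, As=2.1e-9)
    pars.set_matter_power(redshifts=[0.0], kmax=2.0)

    results = camb.get_results(pars)
    kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=1.0, npoints=200)
    # kh: wavenumbers in h/Mpc; pk[0]: matter power spectrum P(k) at z = 0 in (Mpc/h)^3
    print(kh[:3], pk[0, :3])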

Figure 1.8: Power spectrum of large-scale structures



1.2.2 Light elements abundances

From the very beginning of the Big Bang theory, its success rested mainly on the very good agreement between predictions and observations of the abundances of the light elements in the Universe. The Nobel lecture of Arno A. Penzias is called "The Origin of the Elements", although he was awarded the Nobel prize for the discovery of the cosmic microwave background [44]. At that time physicists thought that all the elements were synthesised in the hot plasma at the beginning of the Universe's evolution (in his Nobel lecture Penzias analyses the history of views on element formation and notes that the opinion on this issue changed several times between the 1930s and the 1970s). It was found out later that only the light elements could form in the primordial plasma, the heavier elements appearing much later during stellar evolution.

Figure 1.9: Constraints on the baryonic density [2]. The boxes show the observations; for ³He there is only an upper limit. The vertical stripe fixes the baryon density through the measurement of deuterium.

The typical explanation of primordial nucleosynthesis is the following: as the Universe cooled down, once the energy of the photons dropped below the binding energy of a given nucleus, the photons could no longer break that nucleus into protons and neutrons, and the element began to form. This mechanism is called freeze-out – it is interesting that in the Russian literature it is called by the same word as is used for steel hardening, as if the Universe were a huge blacksmith's shop. Current results of the studies of light-element abundances are shown in figure 1.9. The picture shows that these studies can constrain the baryonic density of the Universe, though not all the measurements are consistent, which raises one of the most intriguing questions of modern cosmology.

1.2.3 Dark matter studies

In 1932 the American astronomer Fritz Zwicky noticed that, besides the luminous baryonic matter of galaxies, there are invisible hidden masses in the Universe that manifest themselves only through gravitation [45]. Zwicky studied the galaxy cluster in the constellation Coma Berenices (Berenice's Hair) and discovered that the speeds of the galaxies in this cluster are very large, up to a few thousand kilometres per second. To hold such fast-moving galaxies within the cluster, a huge gravitational force is needed, much larger than the gravitational force from the visible galaxies themselves. Later, in the 1970's, it was discovered that hidden mass is present not only in clusters of galaxies but in isolated galaxies as well. Invisible Dark Matter (DM) forms a spherical halo around galaxies, with a radius typically 5–10 times larger than the radius of the galaxy.

Dark Matter manifests itself in the following phenomena:

• the motion of galaxies in clusters (v ≥ 1000 km/s),

• the rotation of galaxies (flat rotation curves),

• hot gas (T ∼ 10⁸ K) in galaxy clusters,

• gravitational lensing of the light of distant galaxies by the gravitational field of a nearby cluster,

• the motion of double and triple galaxies, etc.

It is remarkable that all the different observations of DM phenomena give the same estimate of the relative amount of DM: it is about 5 times more massive than the baryonic matter.

One possible explanation of these effects could be a modification of the law of gravity at large distances with respect to smaller scales. But so far modifications of the theory of gravitation have not been able to fully explain the observations.

Another possibility for DM is the model of weakly interacting massive particles (WIMPs). Many experiments around the globe are trying to detect the signal of such particles passing through their detector volumes, so far without success.

The existence of DM in the Universe and its leading role in determining the structure of the Universe also manifests itself in the baryon acoustic oscillations (BAO): acoustic oscillations of the baryonic matter in the potential wells of DM before recombination. BAO studies are a rich field of modern cosmology, which helps to constrain the physics of DM and LSS [46].

1.2.4 Dark energy

The observations of Type Ia supernovae show that today the Universe expands with acceleration. This fact can be explained by the presence of some form of energy called Dark Energy (DE). In 2011 the Nobel Prize in physics was awarded to Saul Perlmutter, Brian P. Schmidt and Adam G. Riess for their leadership in the discovery of the accelerated expansion of the Universe and hence of DE.

There are at least two more pieces of evidence for the presence of DE:

• The evolution of galaxy clusters, studied with X-ray astronomy at different redshifts and in millimetric astronomy through the Sunyaev–Zeldovich effect. The growth of galaxy clusters is defined by two counteracting processes: gravitational contraction and the repulsion due to DE. By fitting the dependence of the galaxy cluster masses on redshift, one obtains a value of ∼70% for the relative contribution of DE to the total energy-density budget of the Universe [47].

• The gravitational lensing of the cosmic microwave background gives independent evidence for the presence of DE [48].

1.2.5 Results of accelerator experiments applied to cosmology

Cosmology is closely bound to processes at subatomic scales. The phase transitions of the early Universe can be studied at accelerators up to energies of the order of a TeV (at the LHC). This energy corresponds to a time after the Big Bang of about 0.1 ns. At accelerators we can perform experiments under well-controlled conditions and study the physics of the early Universe in detail.

The recent discovery of the Higgs boson also has significant meaning for cosmology [49]. As we said in section 1.1.7, inflation is driven by some scalar field, called the inflaton. The Higgs boson is the only scalar boson in the framework of the SM, and thus the Higgs field is a good inflaton candidate [50].

There are also accelerator-based searches for DM particles, but so far these studies have not led to a detection.

1.2.6 Other cosmological observations

Among the other areas of inquiry for cosmology we can mention the following:

• Primordial black-hole studies, which include models of their formation and evolution. The recently observed gravitational waves (GW) [51] open an amazing new opportunity for these studies [52].

• Sometimes astrophysical observations can give a clue to cosmological parameters, like the recent measurement of the baryon density in the intergalactic medium using the radio signal of a fast radio burst [53].

• Tests of the theoretical foundations, such as the equivalence principle of general relativity [54]. Although this is not a cosmological observation, it is a crucially important foundation of cosmology.

We have tried to list the main tendencies of modern cosmology. However, this list can never be complete, as in any scientific research a significant result may come from a completely unexpected area. As researchers, we should always be open to new insights and interpretations and try to figure out the nature of our Universe.

In this chapter we briefly summarised the progress of cosmology from prehistoric times until the present day. Unlike the neolithic people, we have at our disposal many sophisticated instruments and theories to study and describe the Universe. And unlike them, we understand clearly how big the Universe is and how little we know about it.
Chapter 2

Cosmic microwave background and its fluctuations

This chapter is dedicated to the physics of the cosmic microwave background (CMB). We briefly discuss the history of the discovery and observations of the CMB and the physics of the CMB temperature and polarization fluctuations, introduce the power spectrum of these fluctuations, and discuss the issue of foregrounds and secondary anisotropies. Throughout the chapter we mainly stress the problem of primordial B-mode observations.

2.1 Cosmic microwave relic background radiation

The cosmic microwave background radiation (CMB or CMBR) is one of the most important pieces of evidence for the theory of the hot Universe and plays an outstanding role in modern cosmology. It is the oldest light in the present Universe and encodes information about all the epochs; this information is now available for study thanks to the progress of observational techniques.

Let us first describe naively the physics of the CMB emission. When the Universe was young and hot, the photons were energetic enough to break hydrogen atoms into protons and electrons: the Universe was ionised and thus opaque. With expansion the temperature decreased, and at some point the energy of the photons was no longer high enough to keep the plasma ionised. The protons and electrons combined into neutral hydrogen and space became transparent. The light, freed from the plasma, started to travel through space, and we now observe it redshifted by the expansion; the redshift of the CMB is about 1000.


2.1.1 History of CMB discovery

Before scientists understood the physics of the hot Universe and predicted the relic radiation, there had been some observations of the CMB which at the time remained unexplained.

In 1941 W. Adams [55] observed the interstellar absorption of the light of ξ Ophiuchi in the CN spectrum and discovered that the molecules absorb light not only in the ground state but also in the first excited rotational state. McKellar, assuming that the relative population of the energy states follows the Boltzmann formula, estimated the temperature of the radiation exciting the CN molecules as ∼2.3 K [56]. The source of this radiation remained unknown for a long time, but observations of the spectra of other stars proved that the source is isotropic. Only in 1966 was the source of the molecular excitation identified as the CMB [57].

A direct observation of the CMB was performed with a horn antenna at a wavelength of 3.2 cm in the USSR by T. Shmaonov in 1957 [58]. The measured radiation temperature was 4 ± 3 K and did not change with time. In the popular-science book "Black Holes and the Universe" [59] I. Novikov writes: "In the fall of 1983 a scientist of the Prokhorov General Physics Institute in Moscow, T. Shmaonov, called me and said that he would like to speak with me about the discovery of the cosmic background radiation. We met the same day and Shmaonov told me that in the middle of the 50's, under the supervision of the famous Soviet radioastronomers S. Khaikin and N. Kaidanovsky, he worked on his PhD thesis... Unfortunately, neither T. Shmaonov himself, nor his supervisors, nor any radioastronomer who knew about these measurements, knew anything about the possibility of detecting the relic radiation, and they did not pay much attention to these results. Soon they were forgotten. It is funny to mention that even the author of the discovery did not attach any importance to it, not only in the 50's, which would be easy to explain, but even after the publication of the CMB discovery in 1965 by A. Penzias and R. Wilson. To tell the truth, at that time Shmaonov worked in another area. Only in 1983, in some occasional conversation, was his attention drawn to the old measurements, and he gave a talk at the Bureau of the Department of General Physics and Astronomy of the Academy of Sciences of the USSR."

And later Novikov writes: "And even this is not the end of the story. When the author was about to finish the book, he got to know that there had been some measurements by Japanese radioastronomers at the beginning of the 50's, who – supposedly – discovered the background radiation. These works, like Shmaonov's work, neither then nor later drew any attention and remained completely unknown".

In the spring of 1964 A. Penzias and R. Wilson of the Crawford Hill Laboratory in Holmdel, NJ, prepared for measurements of the continuous galactic radiation at a wavelength of 20 cm (near the 21 cm line of neutral hydrogen) [44, 60].

The equipment was very sensitive, with a very low noise level. Originally it was intended to receive signals reflected by satellites, and the scientific program was to study whether the antenna and receiver noise allowed absolute measurements. But they found that the noise registered at a wavelength of 12.5 cm exceeded the noise observed in the laboratory. At first they supposed that the noise was coming from the Earth, but D. Wilkinson, who was invited to judge the reasoning, said that it might be the relic radiation that astronomers expected from the model of the hot Universe. A detailed history of the CMB discovery can be found in J. Peebles' book [61].

Not long before this, A. Doroshkevich and I. Novikov [3] had computed the spectrum of radiation that might be observable in the present Universe, emitted by early galaxies. On the theoretically computed spectra of galaxies they overplotted the equilibrium Planck spectrum with a temperature of 1 K, showing that at frequencies below 5 × 10¹¹ Hz this radiation dominates (see figure 2.1). At the end of this short paper they point out that the radio observatory in Holmdel would be an ideal site to measure the relic radiation.

Penzias and Wilson did not know about the possibility of explaining the observed noise as the CMB. All the instrumental noises had been studied in the laboratory, except the noise of the antenna – a horn reflector with an aperture of about 6 m. To study this noise in detail they tuned the receiver to a wavelength of 7.35 cm and pointed the antenna at a dark part of the sky outside the Milky Way. The observed signal was very large – 3.5 ± 1 K.

They spent about a year checking the instrumental equipment. The signal had unusual properties: it did not depend on time, on direction, on the position of the Sun on the sky, nor on the position of the antenna with respect to the Earth's surface. This behaviour would be easily explained by a noisy resistance, but after a careful check of all systems they made sure that this was not the case. Penzias and Wilson proved that the source of the signal was neither in the antenna nor in the receiver.

They made the following assumptions about the nature of the source: either it is on the Earth, or in the Solar System, or in the Galaxy, or, finally, outside the Galaxy. The first three possibilities were excluded, mainly because the signal was isotropic. For the extragalactic origin they first considered distant discrete radio sources. But the nearby radio sources were already well studied at that time, and supposing that the distant radio sources have the same nature, the observed radiation would have had different properties.

Figure 2.1: Spectrum of the metagalaxy assuming a great quantity of neutrinos: ρ̄ = ρ_crit = 1.86 × 10⁻²⁹ g/cm³. The equation of state is P = ρ̄c²/3. Two different assumptions for the galactic spectrum are plotted: a) galaxies become luminous at a time when the mean distance between them, l, is 10 times smaller than the present l₀; b) galaxies become luminous at the same time, but more precise assumptions are made about galaxy evolution. Curve c) is the equilibrium Planck radiation with T = 1 K. Crosses denote experimental points [3].

In parallel with the studies in Holmdel, a group of scientists in Princeton (50 km from Holmdel) headed by Robert Dicke made observations at a wavelength of 3 cm, consciously dedicated to the search for the cosmological background. Dicke expected to observe radiation with a temperature of a few kelvin. After some time the two teams learned about each other and met. Two articles – one with the experimental results [60] and another with the theory [62] – came out in the same issue of the Astrophysical Journal. A few months later the group of Dicke confirmed the results of Penzias and Wilson, obtaining a temperature of 3.0 ± 0.5 K.

By 1972 the CMB properties had been confirmed by more than 15 independent groups of experimentalists at wavelengths from 0.27 to 73.5 cm. In 1975 the observations were extended to a wavelength of 0.1 mm, which is shorter than the wavelength of the CMB maximum. Nowadays the CMB is a source of extensive information about the early stages of the history of the Universe.

For the discovery of CMB Penzias and Wilson were awarded the Nobel prize in physics
in 1978.

The FIRAS spectrometer (Far-InfraRed Absolute Spectrophotometer) [63], placed on the COBE satellite (COsmic Background Explorer) [64] in 1989, measured the frequency spectrum of the CMB. It turned out to be a black-body spectrum with temperature T₀ = 2.7277 ± 0.002 K (see figure 2.2) – one of the most precise measurements in cosmology. The maximum is at the frequency 2.822 × 10¹¹ Hz, which corresponds to the wavelength λ_max = 1.062 mm. The point of the spectrum that divides the integrated radiation in half is at the frequency 1.9910 × 10¹¹ Hz, i.e. the wavelength 1.506 mm [65].

Figure 2.2: Uniform spectrum and fit to Planck black body. Uncertainties are a small
fraction of the line thickness [4].

In energy density and, especially, in photon number density, the CMB exceeds all other forms of background radiation; it is the dominant form of radiation in the present Universe.

The CMB radiation is very isotropic. But, as first measured by a small apparatus mounted on a U-2 plane [66], the CMB has a dipole anisotropy T = T₀(1 + (v/c) cos θ). The best measurement of the dipole anisotropy gives an amplitude of 3.343 ± 0.016 mK in the direction l = (264 ± 0.3)°, b = (48.4 ± 0.5)°. After correcting for the rotation of the Earth around the Sun, of the Sun around the Galactic centre and of the Galaxy within the Local Group of galaxies, one obtains the speed of the Local Group relative to the CMB, which is 627 ± 22 km/s towards l = (276 ± 3)°, b = (30 ± 3)°.
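
The dipole amplitude translates directly into the observer's velocity with respect to the CMB rest frame; a two-line check (plain Python, using the temperature values quoted above) gives the familiar number of a few hundred km/s:

    c = 299792.458             # speed of light, km/s
    T0, dT = 2.7277, 3.343e-3  # CMB mean temperature and dipole amplitude, K

    v = c * dT / T0            # from the dipole formula T = T0 (1 + (v/c) cos(theta))
    print("Observer velocity with respect to the CMB: %.0f km/s" % v)   # ~367 km/s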

2.1.2 CMB temperature fluctuations

The CMB temperature anisotropy originates from the inhomogeneity of the matter distribution in the early epochs of the expansion of the Universe. The inhomogeneity is small at early stages; some of the density fluctuations grow, which results in the formation of the large-scale structures of the Universe and is reflected in the CMB. The fluctuations of the mass distribution are smoothed out in the early epochs, when the Universe is hot. But as it cools down to temperatures of order 3000 K (about ∼10⁵ years after the beginning of the expansion), at a redshift 1500 ≳ z ≳ 1000, the recombination of matter happens. The freed radiation no longer interacts with matter as actively as before, and it retains the fluctuations that matter had before and during recombination.

Thus the epoch of recombination is the last period in the history of the hot Universe when photons scattered off free electrons. The light from the epoch of recombination reaches us from a spherical shell around us called the Last Scattering Surface (LSS – not to be confused with Large Scale Structure, which shares the same abbreviation; it is usually clear from context which is meant). From this zone the CMB carries the information about the conditions of matter in the early Universe. The anisotropy of the CMB temperature is expressed as the ratio of the temperature fluctuations to the mean temperature: δT = ∆T/T.

2.1.2.1 Power spectrum

As we observe the CMB on a spherical surface, the anisotropy δT is considered as a function of the direction ~n (~n is a unit vector). To study the statistical properties of the CMB fluctuations, the function δT(~n) is decomposed into spherical harmonics Y_ℓm(θ, φ), where θ and φ are the zenith angle and azimuth of the vector ~n:

δT(~n) = Σ_{ℓ=1}^{∞} Σ_{m=−ℓ}^{ℓ} a_ℓm Y_ℓm(θ, φ),    (2.1)

where ℓ is the multipole moment and m is the phase [22, 67]. The multipole moment is related to the angular size on the sky via ℓ ≃ 180°/α. The set of complex coefficients a_ℓm contains the full information of the δT(~n) function. From basic statistics we know that for a Gaussian random field the mean and the variance are enough to describe the statistical properties of the field. In the case of the a_ℓm the mean vanishes and the variance is given by:
Chapter 2. CMB and its fluctuations 33

⟨a_ℓm a*_ℓ′m′⟩ = δ_ℓℓ′ δ_mm′ C_ℓ.    (2.2)

C_ℓ depends only on ℓ, because we assume that the statistical properties of the CMB are isotropic on the sphere. C_ℓ can serve as a characteristic value of the CMB anisotropy. At low ℓ the spectrum C_ℓ is proportional to [ℓ(ℓ + 1)]⁻¹ due to the Sachs–Wolfe effect – the gravitational redshift caused by the non-uniform matter distribution on the last scattering surface. This is the reason why in practice another quantity is often used, the angular power spectrum:

D_ℓ = ℓ(ℓ + 1) C_ℓ / 2π.    (2.3)
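
In practice this decomposition is done numerically on pixelised sky maps. The following minimal sketch (assuming the healpy package, with an arbitrary map resolution and a toy input spectrum) simulates a Gaussian map from a given C_ℓ and recovers its power spectrum with a spherical harmonic transform:

    import numpy as np
    import healpy as hp

    nside = 256                                  # HEALPix resolution parameter (illustrative)
    lmax = 2 * nside
    ell = np.arange(lmax + 1)

    # Toy input spectrum, flat in D_ell (i.e. C_ell ~ 1/[l(l+1)]), in muK^2
    cl_in = np.zeros(lmax + 1)
    cl_in[2:] = 2 * np.pi / (ell[2:] * (ell[2:] + 1.0))

    cmb_map = hp.synfast(cl_in, nside, new=True)  # Gaussian realisation of the sky
    cl_out = hp.anafast(cmb_map, lmax=lmax)       # measured C_ell of that map

    d_ell = ell * (ell + 1) * cl_out / (2 * np.pi)   # angular power spectrum, eq. (2.3)
    print(d_ell[2:10])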

Figure 2.3: Illustration of how C_ℓ encodes the properties of the CMB fluctuations. The C_ℓ spectra shown are each equal to a delta function at ℓ = 1, 10 and 50; the corresponding maps are shown next to them.
Chapter 2. CMB and its fluctuations 34

The power spectrum is measured in units of µK². One can show that the CMB angular power spectrum is related to the matter distribution during the epoch of recombination:

C_ℓ = 4π ∫ (dk/k) T²(k, ℓ) P(k),    (2.4)

where T(k, ℓ) is the angular transfer function, which converts spatial fluctuations of matter into angular fluctuations of the CMB [68]. Equation (2.4) relates the cosmological parameters to the C_ℓ; this relation is implemented in the CAMB code [69].
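
As an illustration of this step, a theoretical temperature spectrum can be obtained from a set of cosmological parameters with the Python interface of CAMB; a minimal sketch (the parameter values are illustrative, close to those of table 2.1) could look like this:

    import camb

    # Illustrative cosmological parameters
    pars = camb.set_params(H0=67.7, ombh2=0.0223, omch2=0.119,
                           tau=0.066, As=2.1e-9, ns=0.967, lmax=2500)

    results = camb.get_results(pars)
    powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')

    # 'total' contains D_ell = l(l+1) C_ell / 2pi for TT, EE, BB, TE (columns 0..3)
    dl_tt = powers['total'][:, 0]
    print(dl_tt[2:10])     # first TT multipoles, in muK^2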

The illustration of how D_ℓ describes the statistical properties of the CMB fluctuations is shown in figure 2.3. The monopole moment corresponds to the mean temperature. The dipole, as already discussed in section 2.1.1, corresponds to the peculiar velocity of the observer relative to the CMB. The higher multipoles describe the proper anisotropy of the CMB.

Let us consider the physics of the formation of the CMB fluctuations. First consider some over-dense region on the LSS. The gravitational field is stronger in such a region, so for photons it acts like a potential well. It means that the CMB photons observed from the corresponding direction are colder than the others; the blue regions of the map 2.7 indeed appeared in this way. Vice versa, the red regions correspond to regions of the LSS with a density lower than the mean density at that time.

The density fluctuations on the LSS have particular angular sizes, defined by the acoustic oscillations of the plasma. The peaks of the power spectrum correspond to the extrema of the fluid oscillations at the time of decoupling, i.e. to over-dense or under-dense regions [70].

Although the CMB anisotropies are almost perfectly Gaussian, the non-Gaussianities are studied as well [71]. If it were proven that the CMB anisotropies do not follow Gaussian statistics, the study of non-Gaussianities would open the way to new physics beyond the standard cosmological model.

There are also some anomalies in the CMB; a detailed overview is given in the Planck collaboration paper [72]. The deviations from statistical isotropy and Gaussianity are robust. Citing the mentioned overview, "a satisfactory explanation based on physically motivated models is still lacking."

We observe the CMB emitted from the LSS, which is a spherical surface around the observer with a radius of almost the age of the Universe in light years. That is, the spherical surface we observe is nothing but a spherical slice of the continuous last-scattering space; this particular slice is defined by our position. For an observer somewhere in another part of the Universe the CMB would look different. A major assumption of modern cosmology is that the statistical properties of the CMB are the same everywhere in the Universe; however, we can never prove it. This is the problem of cosmic variance, which tells us that the statistical ensemble of observables is fundamentally limited by the single realisation of the Universe that we observe. It is very likely that many large-scale anomalies could be explained by taking into account the cosmic variance, which puts an uncertainty on the measured C_ℓ that scales as the inverse square root of the number of available samples:

∆C_ℓ^{cosmic variance} = √(2/(2ℓ + 1)) C_ℓ.    (2.5)
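
Numerically, equation (2.5) shows why the low multipoles are intrinsically poorly determined, whatever the instrument; a short sketch (plain numpy) makes this explicit:

    import numpy as np

    # Relative cosmic-variance uncertainty on C_ell, eq. (2.5): independent of C_ell itself
    for l in (2, 10, 100, 1000):
        rel_err = np.sqrt(2.0 / (2 * l + 1))
        print("l = %4d : Delta C_l / C_l = %.1f %%" % (l, 100 * rel_err))
    # -> about 63% at l = 2, 31% at l = 10, 10% at l = 100, 3% at l = 1000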

2.1.3 History of CMB fluctuations studies

The first experiment that measured the temperature fluctuations of the CMB was the Soviet/Russian satellite Prognoz 9 with the experiment aboard called "Relikt-1" [73] (relikt, with the accent on "i" – relic in Russian). The experiment operated at a frequency of 37 GHz.

Figure 2.4: The radio map of Relikt-1 experiment in ecliptic coordinates. The white
parts have zero statistical weight and correspond to the Galactic plane and regions
illuminated by the Earth and Moon. The rectangle shows the observed anomalous dip
in brightness temperature.
Chapter 2. CMB and its fluctuations 36

The Relikt-1 experiment operated in 1983–1984, orbiting the Earth on an orbit with an apogee of 700000 km and a period of about 27 days. It was the first satellite-borne experiment for CMB observations. The discovery of the CMB temperature anisotropy was announced at the Moscow Astrophysical Seminar in January 1992 at GAISh, and the paper cited above came out in September 1992. The Relikt program was planned to continue with the Relikt-2 experiment, which was to have much better sensitivity, operate at the Lagrange point L2 and be launched in 1993. Due to the lack of funding the experiment was never launched.

The next experiment, analogous to Relikt-1, was COBE. The FIRAS instrument on the COBE satellite was already discussed in section 2.1.1. The DMR instrument, placed on the same satellite, was dedicated to measuring the CMB anisotropy at three frequencies: 31.5, 53 and 90 GHz, see figure 2.5 [74]. The Nobel Prize in Physics was awarded to the COBE lead scientists G. Smoot and J. Mather in 2006 "for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation".

After COBE, measuring the anisotropies of the CMB became one of the most popular domains of cosmology. We do not aim to list here all the experiments in the field, but some cannot be left unmentioned. BOOMERanG (Balloon Observations Of Millimetric Extragalactic Radiation ANd Geophysics), a balloon-borne experiment that flew twice, in 1998 and 2003, made the best measurement for its time of the angular power spectrum of the CMB temperature fluctuations (see figure 2.6) [5]. The obtained spectrum was fitted with a 5-parameter model, which determined the baryon density Ωb, the matter density Ωm, the dark energy density ΩΛ, the primordial scalar index ns and the parameter h defining the Hubble constant as H = 100h km s⁻¹ Mpc⁻¹. The obtained values, (Ωb, Ωm, ΩΛ, ns, h) = (0.05, 0.31, 0.75, 0.95, 0.70), determine the geometry of the Universe as flat.

The next important experiment was the satellite-borne Wilkinson Microwave Anisotropy Probe (WMAP) [75], which operated from 2001 to 2010. The WMAP measurement of the CMB angular power spectrum (see figure 2.8) established the ΛCDM model, according to which the major form of energy, about 70% of the total energy budget of the Universe, is some Dark Energy (DE) of unknown nature, and the main form of matter is some non-relativistic (Cold) Dark Matter (CDM or just DM), also of unknown nature. The DE plays the role of the cosmological constant Λ, introduced by Einstein into his cosmological equations to obtain a stationary solution. The total energy density matches with high precision the critical density required for a flat geometry of the Universe: the measured curvature of space is consistent with zero, Ωk = −0.0027 +0.0039/−0.0038.

Figure 2.5: The DMR maps at three frequencies, 31.5, 53 and 90 GHz, after removal of the dipole anisotropy.

Figure 2.6: The angular power spectrum of CMB temperature anisotropies, measured by the BOOMERanG experiment at 150 GHz. Coloured curves correspond to different cosmological models; see the original article for explanations [5].

The next major step in refining the cosmological parameters through the CMB anisotropies was made by the Planck experiment [7], a space observatory operated by the European Space Agency (ESA) from 2009 to 2013. The Planck instrument observed at 9 frequencies from 30 to 857 GHz, four of which (100, 143, 217 and 353 GHz) were able to measure the polarisation of the incoming radiation. Planck was therefore sensitive to the E-modes of CMB polarisation, which we discuss in the following section. The obtained temperature angular power spectrum is shown in figure 2.8. The cosmological parameters, measured by Planck with unprecedented precision, are listed in table 2.1.

We would also like to mention two important ground-based experiments: ACT [76] and SPT [77]. ACT, the Atacama Cosmology Telescope, is situated in the Atacama desert in Chile and has been operating since 2007. SPT, the South Pole Telescope, located at the Amundsen-Scott station in Antarctica, has also been operating since 2007. Both experiments, thanks to their high resolution and good sensitivity, are able to explore the high-ℓ region of the power spectrum (as a consequence they cannot easily map the region of low multipoles).

Table 2.1: Results of the full Planck mission on the main cosmological parameters.

Parameter                                              | Symbol | Value
-------------------------------------------------------|--------|-------------------
Baryon density                                         | Ωb h²  | 0.02230 ± 0.00014
Cold dark matter density                               | Ωc h²  | 0.1188 ± 0.0010
Thomson scattering optical depth due to reionization   | τ      | 0.066 ± 0.012
Scalar spectral index                                  | ns     | 0.9667 ± 0.0040
Hubble constant (km Mpc−1 s−1)                         | H0     | 67.74 ± 0.46
Dark energy density                                    | ΩΛ     | 0.6911 ± 0.0062
Matter density                                         | Ωm     | 0.3089 ± 0.0062
Redshift of reionization                               | zre    | 8.8 +1.2/−1.1
Age of the Universe (Gyr)                              | t0     | 13.799 ± 0.021

Figure 2.7: CMB temperature map, measured by the Planck experiment.

It is common to define the major stages of CMB anisotropy observations by the number of sensitive elements (detectors) of the instruments. Each stage has roughly an order of magnitude more detectors than the previous one. The approximate sensitivity of each stage is shown in figure 2.9 [78]. Stage III experiments are starting now. The Stage IV experiments, which should answer the main questions about the origin of the Universe and its nature, are planned to start observations in about ten years.

Figure 2.8: Measured angular power spectra of Planck, WMAP (9 years of operation), ACT, and SPT. The model plotted is Planck's best-fit model including Planck temperature, WMAP polarization, ACT, and SPT (the model is labelled [Planck+WP+HighL] in [6]). Error bars include cosmic variance. The horizontal axis is logarithmic up to ℓ = 50, and linear beyond [7].

2.1.4 CMB polarization

In the previous sections we considered the temperature anisotropies of the CMB. We now turn to the fact that the CMB fluctuations are polarised. For "precision cosmology", the polarisation of the CMB is one of the key inputs for putting tighter constraints on the cosmological parameters. It also provides an important test of the relevance of the inflation model. The present description of CMB polarisation follows the paper of J. Kaplan et al. [79].

Figure 2.9: Approximate experimental sensitivities for CMB anisotropy observations of stages I–IV [8].

Light is an electromagnetic wave propagating through space. The electric and magnetic components of the electromagnetic wave are always perpendicular, so it is sufficient to consider only one of them, say \vec{E}. If the electric field vector \vec{E}, which is orthogonal to the direction of propagation \vec{k}, has a fixed orientation, we say that the wave is polarised. If the orientation of \vec{E} changes stochastically, one can study its statistical properties. To describe such partial polarisation we introduce the Stokes parameters: choosing a basis (\vec{e}_x, \vec{e}_y) orthogonal to \vec{k}, we can write the "coherence matrix" C:

C = \begin{pmatrix} \langle E_x^2 \rangle & \langle E_x E_y^* \rangle \\ \langle E_x^* E_y \rangle & \langle E_y^2 \rangle \end{pmatrix} = \frac{1}{2} \begin{pmatrix} I+Q & U-iV \\ U+iV & I-Q \end{pmatrix},    (2.6)

where I, Q, U and V are the Stokes parameters. The I Stokes parameter is the intensity of the radiation. The Q and U parameters describe linear polarisation along two sets of axes rotated by 45° with respect to each other, see figure 2.10. The V Stokes parameter describes circular polarisation, which is expected to be zero for the CMB. The Stokes parameters satisfy the inequality

I^2 \ge Q^2 + U^2 + V^2,    (2.7)

which guarantees that the polarised energy cannot exceed the total energy of the wave. For fully polarised light it becomes an equality.

Clearly, the Q and U parameters depend on the reference frame. When we change from the reference frame with basis (\vec{e}_x, \vec{e}_y) to another frame whose basis vectors are rotated by an angle θ around \vec{k}, then Q and U transform into Q' and U' through an angle 2θ:

Q' = Q\cos 2\theta + U\sin 2\theta,    (2.8)

U' = -Q\sin 2\theta + U\cos 2\theta,    (2.9)

or equivalently

Q' \pm iU' = e^{\mp 2i\theta}(Q \pm iU),    (2.10)

which means that Q and U are spin-2 quantities.
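As a quick numerical illustration of this spin-2 behaviour, the short Python sketch below (not part of the QUBIC pipeline; the Q and U values are arbitrary) applies equations 2.8–2.9 and shows that a frame rotation by π/2 flips the sign of both Q and U, while a rotation by π leaves them unchanged.

```python
import numpy as np

def rotate_qu(q, u, theta):
    """Rotate the Stokes parameters Q and U for a reference-frame rotation
    by an angle theta (radians) around the line of sight (eqs. 2.8-2.9)."""
    q_new = q * np.cos(2 * theta) + u * np.sin(2 * theta)
    u_new = -q * np.sin(2 * theta) + u * np.cos(2 * theta)
    return q_new, u_new

q, u = 1.0, 0.3
print(rotate_qu(q, u, np.pi / 2))   # (-1.0, -0.3): sign flip for a pi/2 rotation
print(rotate_qu(q, u, np.pi))       # (1.0, 0.3) up to rounding: unchanged for pi
```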

As we know from basic optics, the polarisation of light appears through reflection [80]. Similarly, the CMB polarisation originates from the rescattering of primordial photons off the hot electrons at the last scattering surface. After scattering, the outgoing photon carries a polarisation orthogonal to the scattering plane. As the photon flux from different directions is not isotropic, the polarisation of the CMB photons therefore carries information about the density distribution on the last scattering surface.

The cross-section of Thomson scattering is proportional to the square of the scalar product of the incoming and outgoing photon polarisation vectors. As a consequence, only the monopole and quadrupole components of the incident radiation field matter, and the Q and U polarisations measure its quadrupole part.

Let us consider how the polarisation depends on the matter distribution in the early Universe. The acoustic oscillations in the hot plasma create flows of the photon-baryon fluid from hot spots to cold ones (that is, from under-dense regions to over-dense ones) and back. In the first case the velocities of neighbouring particles tend to diverge radially along the direction of the flow; in the second case they diverge orthogonally to it. This induces a quadrupole anisotropy of the flux. We can therefore expect the polarisation anisotropies of the CMB photons to be correlated with the temperature anisotropies, as both originate from the same density fluctuations.

2.1.4.1 E-modes

Figure 2.10: Top: polarisation patterns corresponding to the Q and U Stokes parameters. Bottom: typical E- and B-mode polarisation patterns.

Figure 2.11: Polarization from Thomson scattering.

As the Q and U Stokes parameters depend on the reference frame, they are not the best choice for studying the CMB polarisation fluctuations. Q and U are spin-2 objects, so their decomposition into spherical harmonics must be performed on the spin-2 spherical harmonics [81]:

(Q \pm iU)(\vec{n}) = \sum_{\ell \ge 2,\ |m| \le \ell} a_{\pm 2,\ell m}\ {}_{\pm 2}Y_{\ell m}(\vec{n}).    (2.11)

Two real scalar quantities can be constructed from these spin-2 objects [82]:

E(\vec{n}) = \sum_{\ell \ge 2,\ |m| \le \ell} a^{E}_{\ell m}\, Y_{\ell m}(\vec{n}),    (2.12)

B(\vec{n}) = \sum_{\ell \ge 2,\ |m| \le \ell} a^{B}_{\ell m}\, Y_{\ell m}(\vec{n}),    (2.13)

where

a^{E}_{\ell m} = -\frac{a_{2,\ell m} + a_{-2,\ell m}}{2},    (2.14)

a^{B}_{\ell m} = i\,\frac{a_{2,\ell m} - a_{-2,\ell m}}{2}.    (2.15)

These scalar quantities are called E and B modes of polarisation. They have the opposite
behaviour under parity transformations: the E-modes have positive parity and B-modes
have negative parity. This fact is illustrated in the figure 2.10.
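In practice this decomposition can be performed with the HEALPix tools referred to later in this work. The sketch below assumes the healpy Python bindings, full-sky noiseless (I, Q, U) maps and illustrative values of nside and lmax; it returns the E and B multipole coefficients and the corresponding angular power spectra.

```python
import healpy as hp
import numpy as np

nside, lmax = 128, 256

# Placeholder (I, Q, U) maps; in a real analysis they would come from a
# simulation or from observations.
maps = np.zeros((3, hp.nside2npix(nside)))

# Spherical harmonic transform of the polarised sky: with pol=True healpy
# returns the coefficients (a^T_lm, a^E_lm, a^B_lm).
alm_t, alm_e, alm_b = hp.map2alm(maps, lmax=lmax, pol=True)

# Angular power spectra; anafast returns TT, EE, BB, TE, EB, TB.
cl_tt, cl_ee, cl_bb, cl_te, cl_eb, cl_tb = hp.anafast(maps, lmax=lmax, pol=True)
```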

The CMB temperature fluctuations arise from the density and metric perturbations on the last scattering surface. Depending on their transformation properties under rotations, the metric perturbations can be classified as scalar, vector and tensor. Because of the expansion, the vector perturbations are damped and only the scalar and tensor perturbations remain. The scalar perturbations arise from the density perturbations.

Figure 2.12: Polarization direction depends on the velocity gradient on the last scattering surface and hence correlates with the temperature fluctuations. The polarization direction (thick green line) is defined by the fact that the fluid velocities (thin black arrows) are not isotropic with respect to the scattering point (small black circle). The fluid motion from a hot spot to a cold one on the left panel (or from a cold spot to a hot one on the right panel) is shown with dashed arrows.

The scalar perturbations of the metric can only generate positive-parity polarisation patterns, which are described by the E-modes of the CMB polarisation fluctuations. The mechanism generating E-modes from the density perturbations is illustrated in figure 2.12. The density fluctuations give rise to flows in the plasma fluid. When the fluid is accelerated from a hot spot (a density trough) to a cold one (a density peak), each scattering point between the spots experiences a stronger flow from the direction orthogonal to the flow, so the polarization is preferentially oriented parallel to the flow direction. If the fluid is accelerated towards a hot spot, the situation is reversed and the polarization is oriented orthogonally to the flow direction. This means that the E-modes must correlate with the temperature fluctuations. The correlation of the E-modes with temperature is described by the TE spectrum, which is not zero; this has been well verified by the WMAP experiment.

2.1.4.2 B-modes

We already discussed the inflationary paradigm in section 1.1.7, where we noted that inflation necessarily produces tensor fluctuations of the metric. Tensor fluctuations of the metric propagating through space are called gravitational waves. The gravitational waves from inflation are called primordial, to distinguish them from other gravitational perturbations in the later Universe.

The tensor metric fluctuations produce both E- and B-modes. While the E-modes are also created by the scalar perturbations, the only possible source of B-modes is the tensor perturbations. The B-modes of the CMB created by the primordial gravitational waves are also called primordial. The primordial B-modes are often called the "smoking gun of inflation": their measurement gives access to the slow-roll parameter ε of inflation, through equation 1.37.

The power spectrum of the B-modes is characterised by a main peak at around ℓ = 100. The only parameter that defines the height of this peak is r. The slope of the spectrum at higher multipoles is described by the parameter nT. At the current stage of CMB experiments, however, we only hope to measure r, that is, the main peak. Measuring nT would require a measurement of the second peak, which is far beyond the current sensitivity of experiments, especially because of the leakage of the E-signal into B due to the imperfect decomposition of the sky polarisation signal into E and B modes, and because of the lensing foregrounds (both issues are discussed below). The current status of the B-mode power spectrum measurements is shown in figure 2.14.
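To make the dependence of the BB peak on r concrete, the following sketch computes the tensor and lensing BB spectra with the CAMB Python package (which also appears later in the pipeline); the cosmological parameter values here are only illustrative.

```python
import camb

# Illustrative cosmology; only r really matters for the primordial BB peak.
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.7, ombh2=0.0223, omch2=0.119, tau=0.066)
pars.InitPower.set_params(As=2.1e-9, ns=0.966, r=0.01)
pars.WantTensors = True
pars.set_for_lmax(500, lens_potential_accuracy=1)

results = camb.get_results(pars)
powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')

# Columns are TT, EE, BB, TE; D_ell = ell(ell+1) C_ell / 2pi.
bb_tensor = powers['tensor'][:, 2]         # primordial B-modes, scale with r
bb_lensed = powers['lensed_scalar'][:, 2]  # lensing B-modes (foreground)
```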

In 2014 a heated discussion arose about the reported detection of primordial B-modes by the BICEP2/Keck collaboration [9]. The measured value of r was 0.2, barely compatible with the upper limit set by the Planck experiment. The measured maps of the E- and B-modes are shown in figure 2.13. The value r = 0.2 corresponds to a very energetic inflation, which was the surprising part of this result. Nevertheless, it was met by the scientific community with great enthusiasm, as an experimental detection of the primordial gravitational waves and hence a confirmation of the theory of inflation. Unfortunately, as it turned out later, the detected B-mode signal was not the primordial one. Later that year another article was published [11], showing that the galactic dust contamination in the BICEP2 region is quite large and concluding that the measured B-modes are consistent with the null hypothesis. The bad luck of BICEP2 gave an important lesson to other teams: the dust contamination must be accurately controlled. In particular, after these publications the QUBIC concept was changed to a dual-band instrument in order to achieve a good separation of the dust signal (the QUBIC instrument is discussed in detail in chapter 3).

Figure 2.13: Left: BICEP2 apodized E-mode and B-mode maps filtered to 50 < ` <
120. Right: The equivalent maps for the first of the lensed-ΛCDM + noise simulations.
The color scale displays the E-mode scalar and B-mode pseudoscalar patterns while
the lines display the equivalent magnitude and orientation of linear polarization. Note
that excess B mode is detected over lensing+noise with high signal-to-noise ratio in the
map (s/n > 2 per map mode at ` ≈ 70). (Also note that the E-mode and B-mode
maps use different color and length scales.) Figure caption is cited from [9].

2.1.5 Foregrounds

The measurement of the B-modes of the CMB is extremely challenging because of the presence of various foregrounds. The CMB foregrounds can be classified into galactic and terrestrial foregrounds. The main source of galactic foregrounds is the thermal emission from diffuse galactic dust [11]. The typical size of the dust grains is 0.1 µm. If a grain is asymmetric, it aligns with the galactic magnetic field, and the thermal emission from such grains becomes polarised. The Planck experiment mapped the dust polarisation at 353 GHz, where the dust emission exceeds all other B-mode signals. The corresponding BB power spectrum is shown in figure 2.15.

As can be seen in the figure, the dust emission exceeds the primordial signal by orders of magnitude. Two factors, however, help. First, the dust signal varies across the sky, so one can choose a patch that is relatively clean of dust. Second, the dust signal is heavily correlated between frequency channels, so it can be effectively subtracted. These subjects are discussed in more detail below.

Figure 2.14: Current status of the BB power spectrum measurements from the SPTpol, ACTpol, BICEP2/Keck and POLARBEAR experiments. The solid grey line shows the expected lensed BB spectrum from the Planck+lensing+WP+highL best-fit model. The dotted line shows the nominal 150 GHz BB power spectrum of Galactic dust emission. This model is derived from an analysis of polarized dust emission in the BICEP2/Keck field using Planck data. The dash-dotted line shows the sum of the lensed BB power and dust BB power [10].

The foregrounds of the terrestrial class are mainly due to the atmosphere [83]. Emission from water vapour and dioxygen molecules dominates at millimetre wavelengths. We can mitigate this effect by observing in the atmospheric windows (see figure 2.16), where the atmosphere is much more transparent. But even the residual emission significantly increases the optical power on the detectors and therefore their noise level. As the width and transparency of the atmospheric windows depend strongly on the amount of water in the atmosphere, it is crucial to choose a site with a very dry atmosphere. The two best sites on Earth in this respect are Antarctica and the Puna-Atacama desert on the border of Chile and Argentina.

2.1.6 Secondary anisotropies

Figure 2.15: Planck 353 GHz channel D_ℓ^BB power spectra (in µK²) computed on three of the selected CMB regions, with sky fractions fsky = 0.3 (circles, lightest), fsky = 0.5 (diamonds, medium) and fsky = 0.7 (squares, darkest). The uncertainties shown are ±1σ. The best-fit power laws in ℓ are displayed for each spectrum as a dashed line of the corresponding colour. The corresponding r = 0.2 D_ℓ^BB CMB models are displayed as solid black lines. In the lower parts of each panel, the global estimates of the power spectra of the systematic effects responsible for intensity-to-polarization leakage are displayed in different shades of grey, with the same symbols to identify the three regions. Finally, absolute values of the null-test spectra are represented as dash-dotted, dashed, and dotted grey lines for the three regions. (Figure caption cited from [11].)

While the primordial anisotropies of the CMB are related to the density fluctuations on the last scattering surface, the secondary anisotropies are associated with the reionisation of the Universe and the growth of structures. On their way towards us, the primordial CMB photons interact with these structures, and their energy and direction of propagation are changed. The secondary anisotropies can be classified into two major classes. The first

class is due to the interaction with free electrons: the Sunyaev-Zel'dovich (SZ) effect, the Ostriker-Vishniac effect and the inhomogeneous light from the reionisation epoch (the cosmic infrared background, CIB). The second class of secondary anisotropies arises from the gravitational interaction with the structures: the integrated Sachs-Wolfe effect, the Rees-Sciama effect and gravitational lensing.

2.1.6.1 Sunyaev-Zel’dovich effect

Figure 2.16: Atmospheric transmission from the Atacama plateau at the zenith for different amounts of precipitable water vapour.

When the CMB photons pass through galaxy clusters, they interact with the free electrons of the hot gas through inverse Compton scattering, which changes the energy of the photons. This is called the Sunyaev-Zel'dovich (SZ) effect [84]. Generally, the SZ effect is produced on the scale of galaxy clusters and superclusters, but it may also be produced on very small scales by the first stars. The SZ effect can be subdivided into the thermal and the kinetic SZ effect. The thermal SZ effect is due to the scattering of photons by the thermal motion of the free electrons and changes the frequency spectrum of the CMB photons. The kinetic SZ effect, also called the Ostriker-Vishniac effect, is due to the bulk motion of the electrons [85]; the resulting spectrum remains a Planck black-body spectrum, because it is just a Doppler shift of the incident spectrum. The changes induced by the thermal and kinetic SZ effects are shown in figure 2.17.
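For reference, the standard non-relativistic frequency dependence of the thermal SZ distortion, ΔT/T_CMB = y [x coth(x/2) − 4] with x = hν/(k_B T_CMB), can be evaluated with a few lines of Python (a sketch, not part of the QUBIC pipeline; the Compton y value is arbitrary):

```python
import numpy as np

T_CMB = 2.7255            # CMB temperature in K
H_OVER_KB = 4.799e-11     # Planck constant over Boltzmann constant, in K/Hz

def thermal_sz_dt_over_t(nu_ghz, y=1e-4):
    """Non-relativistic thermal SZ distortion dT/T_CMB at frequency nu (GHz)
    for a given (illustrative) Compton y parameter."""
    x = H_OVER_KB * nu_ghz * 1e9 / T_CMB
    return y * (x / np.tanh(x / 2.0) - 4.0)

# The distortion is negative below ~217 GHz and positive above it.
for nu in (150.0, 217.0, 353.0):
    print(nu, thermal_sz_dt_over_t(nu))
```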

One can count galaxy clusters through their SZ effect, which is a very important measurement for cosmology and for cluster properties. For CMB observations, however, the SZ effect is a foreground and we need to correct for it. Different correction methods exist; they generally benefit from the specific spectral signature of the SZ effect, and additional constraints, such as matched filters, are usually used.

2.1.6.2 Cosmic Infrared Background

The low-frequency tail of the cosmic infrared background (CIB) provides an appreciable foreground for CMB observations. It is the light of all the galaxies that have ever existed, an expected relic of the structure formation processes. Its far tail is shown in figure 2.1.

Inhomogeneities in the epoch of reionisation also lead to a polarized signal. This signal is weak, amounting to no more than 10 percent of the primary signal, but it can be important when studying the B-modes of the CMB [12].

Figure 2.17: Frequency dependence of thermal and kinetic SZ effects. The thick
line shows the frequency dependence of ∆T /Tcmb from the thermal SZ effect, the thin
solid line shows the same for the change in spectral intensity ∆I(x). The thin dashed
lines show the change in spectral intensity for kinetic SZ effect, the upper one for an
approaching source and the lower one for a receding source. The vertical dotted line
shows the scaled frequency at which TSZ is zero and KSZ effect is maximum. Here
frequency x, I0 and Tcmb are all scaled to unity [12].

2.1.6.3 Integrated Sachs-Wolfe effect

The integrated Sachs-Wolfe (ISW) effect appears when CMB photons traverse a linear gravitational potential from large-scale structures. Figure 2.18 illustrates how the ISW effect works: for a static gravitational well, a photon gains energy while falling into the well, then loses it on the way out and flies away unchanged. In an expanding Universe, however, by the time the photon climbs out of the well, the well has become shallower than it was before, so the photon ends up with a net gain of energy. The ISW effect is related to the scale of the horizon at the time of large-scale structure formation, corresponding to an angular scale of about 10°.

The Rees-Sciama (RS) effect is somewhat similar to the ISW. It is due to CMB photons
traversing a non-linear gravitational potential, for example from a gravitational collapse.
The relevant scales are the same as for galaxy clusters and superclusters, corresponding
to angular scales of 5-10 arc minutes.

Figure 2.18: The illustration of the integrated Sachs-Wolfe effect

2.1.6.4 Lensing

When the CMB photons pass through large-scale structures, they can be affected by gravitational lensing. Lensing does not change the total power of the fluctuations, but it redistributes the fluctuations towards smaller scales. The effect is significant on scales below a few arc minutes.

The lensing effect is especially important for the detection of B-modes. The primordial B-modes from the inflationary gravitational waves fall off rapidly on scales smaller than the horizon at the last scattering surface, whose angular scale is of the order of a degree. Lensing generates B-modes from E-modes on smaller angular scales and is a major foreground for the B-modes from inflation [86].
Chapter 3

Bolometric interferometry and QUBIC experiment

This chapter introduces the concept of bolometric interferometry, a novel and promising technique for CMB observations that inherits the high sensitivity of imagers and the excellent systematics control of interferometers through self-calibration. We present the QUBIC instrument: the Q and U Bolometric Interferometer for Cosmology.

3.1 The concept of the bolometric interferometer

Before introducing the bolometric interferometer concept, let us consider the standard approaches used so far in CMB studies. There are two major kinds of experiments in the CMB field: imagers and interferometers.

3.1.1 Imagers and interferometers

The imager instruments, such as Planck (a reflector, [7]) or BICEP (a refractor, [87]), form an image of the sky on the focal plane, as in a classical telescope. The focal plane is tiled with high-sensitivity detectors. In recent years bolometric detectors have become a popular solution because of their low intrinsic noise, lower than the photon noise of the CMB radiation; detectors with this property are called background limited. A bolometer (from the Greek βολή, "thrown thing, ray", and μέτρον, "measure") is a detector that measures the intensity of radiation by monitoring the heating of a material through its electric resistance. Transition edge sensor (TES) bolometers exploit the strong drop of the resistance at the transition to superconductivity (for QUBIC

detectors it is about 0.5 K). The temperature of such a detector is set at the transition temperature, so that even a very small change of temperature leads to a strong change of the resistance. TES bolometers are now very popular in the field of CMB measurements. Another novel technology is the KID (kinetic inductance detector), which is easier to manufacture and read out. A KID is a superconducting resonator in which the absorbed photons change the kinetic inductance and hence the resonant frequency; the latter is measured and related to the absorbed power [88] (the performance of KID detectors is, however, not fully proven yet). The imager instruments have the advantage of high sensitivity thanks to the use of background-limited detectors. Another advantage of imagers is their ability to handle a broad band. We will discuss in the following chapters why handling a broad band is not trivial for a bolometric interferometer; for an imager it is easy: the parallel rays of light at all the frequencies of the band arrive at the same point of the focal plane, forming an image of the sky integrated over the band. An imager is thus able to collect more light and hence has a lower photon noise.

The interferometers, on the other hand, work on a different principle. They use the correlations between spatially distributed antennas to reconstruct directly the Fourier modes of I, Q and U, skipping the map-making step that is necessary for imagers. The interferometric technique was heavily used for CMB observations: some well-known experiments are VSA, which measured the temperature anisotropies [89], and CBI and DASI, which measured the E-mode polarisation anisotropies [90, 91]. The main disadvantage of interferometers is their reduced sensitivity: the CMB signal, whose frequency ranges from tens to a few hundred GHz, must be amplified and down-converted to lower frequencies to be detected. During this process the noise level rises and the detector is no longer background limited. The main advantage of interferometers is their ability to control systematics thanks to the observation of interference fringes. But even this advantage comes with additional complexity: to interfere the signals from different antennas, a special device called a correlator is used, and one correlator is required per antenna pair. The instrumental complexity of an interferometer thus grows as the square of the number of channels. That is why no one builds interferometers for the current stage of CMB observations.

3.1.2 Bolometric interferometry

The idea of bolometric interferometry is a fusion of the imager and interferometer concepts. Like imagers, bolometric interferometers use a focal plane covered with highly sensitive detectors; they are interferometers with an optical analogue of the correlator. Let us consider the bolometric interferometer concept with the example of the QUBIC instrument (Q and U Bolometric Interferometer for Cosmology).

Figure 3.1: The QUBIC instrument sketch. See text for explanations.

For the moment, let us skip the description of the polarimetry part of the instrument; it is necessary for the observation of the polarisation anisotropies, but it is not needed to understand the basics of the bolometric interferometer concept. QUBIC is a millimetric analogue of the Young interferometer. The incoming radiation from the sky is collected by an array of horns: an array of pairs of horns connected back to back, so that the radiation collected by the input horn is re-emitted by the output one. Each horn contains a switch, so that each horn can be closed or opened separately. The horn array can be considered as an array of diffractive pupils of a classical interferometer.

The light from the horns is focused by two parabolic mirrors onto the focal planes, covered with bolometric detectors. When only one horn is open, the secondary beam on the focal plane is simply equal to the primary beam. When other horns are opened, the Gaussian beams from the different horns start to interfere with each other, forming an interference pattern on the focal plane. This is illustrated in figure 3.2. The interference pattern from the full horn array is called the synthesised beam, and it has a characteristic multi-peaked shape (see the bottom sub-figures of 3.2). This means that when the instrument is pointed in some direction, it observes photons not only from that direction, but also from multiple directions around it; or, vice versa, when observing a point source we get a multi-peaked pattern on the focal plane.
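The multi-peaked shape can be reproduced with a toy far-field calculation (a sketch only, not the QUBIC optics model; the horn spacing, wavelength and angular grid are illustrative): for each direction on the sky one sums the complex fringes of all open horns and takes the squared modulus of the sum.

```python
import numpy as np

# Toy parameters (not the real QUBIC values): a square array of open horns
# observed at a single wavelength, ignoring the primary beam envelope.
wavelength = 2e-3     # 150 GHz -> 2 mm
spacing = 14e-3       # horn spacing in metres (illustrative)
n_side = 20           # 20 x 20 = 400 horns
ix, iy = np.meshgrid(np.arange(n_side), np.arange(n_side))
horns = np.column_stack([ix.ravel(), iy.ravel()]) * spacing

# Directions on the sky (small-angle approximation), in radians.
theta = np.linspace(-0.2, 0.2, 101)
tx, ty = np.meshgrid(theta, theta)

# Sum the complex fringe of every open horn for each direction, then take
# the squared modulus: the result is the multi-peaked synthesised beam.
phase = 2j * np.pi / wavelength * (np.outer(tx.ravel(), horns[:, 0])
                                   + np.outer(ty.ravel(), horns[:, 1]))
beam = np.abs(np.exp(phase).sum(axis=1)) ** 2
beam = beam.reshape(tx.shape) / beam.max()
```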

3.1.3 Self-calibration

The interferometric nature of QUBIC allows us to perform self-calibration, a technique that significantly reduces the systematics of the instrument. The basic idea of self-calibration is that, in a perfectly manufactured instrument, the interferometric patterns from any two pairs of open horns (a pair of horns is called a baseline) with the same relative horn positions (redundant baselines) must be identical. Self-calibration proceeds as follows: one observes an artificial point source with one baseline, then repeats the observation with all the baselines redundant with the first one, and then the process is repeated for all possible baselines. The instrumental imperfections are then fitted from the recorded interferometric fringes. In this way we are able to reduce the systematics related to such factors as:

• Horn position,

• Transmission of horns, half-wave plate, polarising grid,

• Horn and half-wave plate cross-polarisation,

and many others. Detailed description of the self-calibration technique for a bolometric
interferometer and its application for QUBIC instrument can be found in the work [92].

The method of self-calibration is inspired by classical interferometry, where the same term denotes a slightly different technique. While in bolometric interferometry we use an artificial source for calibration, in radio-interferometry the object of scientific interest itself plays the role of the calibration source. That self-calibration involves the evaluation of so-called closure quantities: one has to find null combinations of these quantities, and by observing them with the real instrument one can fit the uncertainties of the instrument [93].

In order to use self-calibration we have to model the instrument. For this purpose we use the formalism of Jones matrices. The electric field collected by the detector q is

\begin{pmatrix} E_{qx} \\ E_{qy} \end{pmatrix} = J \begin{pmatrix} E_x \\ E_y \end{pmatrix},    (3.1)

Figure 3.2: Formation of the QUBIC synthesised beam: (a) and (b) – map of the horn array with 1 horn open and the corresponding beam on the focal plane. Then, similarly, (c) and (d) are the map and interferometric pattern for 2 open horns; (e) and (f) – 3 horns; (g) and (h) – 20 horns; (i) and (j) – 50 horns; (k) and (l) – 100 horns; (m) and (n) – 200 horns; (o) and (p) – the full horn array (400 horns) open.

Parameter                 | No Self-Cal.     | 1 day / year                  | 100 days / year
                          | σ(nominal−real)  | σ(real−recovered) | ratio     | σ(real−recovered) | ratio
Horn location error       | 100 × 10⁻⁶       | 9.26 × 10⁻⁵       | 1.1       | 4.67 × 10⁻⁸       | 2141
Horn transmission         | 0.0001           | 2.84 × 10⁻⁶       | 35        | 3.50 × 10⁻⁸       | 2858
Horn cross-polarization   | 0.0001           | 2.47 × 10⁻⁶       | 40        | 2.68 × 10⁻⁸       | 3729
HWP transmission          | 0.01             | 1.88 × 10⁻⁴       | 53        | 1.31 × 10⁻⁵       | 763
HWP cross-polarization    | 0.01             | 1.85 × 10⁻⁴       | 54        | 1.04 × 10⁻⁵       | 962

Table 3.1: Results of self-calibration simulations for the QUBIC instrument with 400 horns, a 992-bolometer array, 1000 pointings and measurements of all baselines. The column "No Self-Cal." gives the standard deviation between the nominal and real (corrupted) parameters. The columns "1 day / year" and "100 days / year" give the standard deviation between the real and recovered parameters after, respectively, 1 day and 100 days per year spent on self-calibration. The "ratio" sub-columns give the factor by which each systematic is reduced thanks to self-calibration.

where \begin{pmatrix} E_x \\ E_y \end{pmatrix} is the incoming radiation and J is the 2 × 2 Jones matrix that describes how the instrument transforms the polarisation components of the incoming radiation. If an instrument has several components, its Jones matrix is the product of the Jones matrices of each component:

J_{QUBIC} = J_{horn}\, J_{p}\, J_{rot}^{T}\, J_{hwp}\, J_{rot},    (3.2)

where J_{rot} is the rotation matrix, and J_{p}, J_{hwp} and J_{horn} are the Jones matrices of the polarising grid, the half-wave plate and one horn, respectively. To model the systematic errors of a bolometric interferometer, the Jones matrix of each component of the instrument can be written as

J = \begin{pmatrix} 1 - g_x & e_x \\ e_y & 1 - g_y \end{pmatrix},    (3.3)

where g_{x,y} are complex gain parameters and e_{x,y} are complex coupling parameters. The systematic errors arising from each of the instrument components can thus be parametrised.
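A minimal Python sketch of this parametrisation is given below. It is illustrative only: the perturbation values are arbitrary, and the way the imperfection matrix of equation 3.3 is combined with the ideal response of each element (here by a simple matrix product) is an assumption of the sketch, not the actual QUBIC instrument model.

```python
import numpy as np

def rotation(theta):
    """Jones rotation matrix for an angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def imperfection(gx=0.0, gy=0.0, ex=0.0, ey=0.0):
    """Perturbation matrix of equation 3.3: complex gains g and couplings e."""
    return np.array([[1 - gx, ex], [ey, 1 - gy]])

# Ideal responses multiplied by small arbitrary imperfections (sketch assumption).
J_horn = imperfection(gx=1e-4)
J_p = np.array([[1, 0], [0, 0]]) @ imperfection(ex=1e-4)       # polarising grid
J_hwp = np.array([[1, 0], [0, -1]]) @ imperfection(gy=1e-2)    # half-wave plate
J_rot = rotation(np.deg2rad(11.25))

# Composition of the instrument model, following equation 3.2.
J_qubic = J_horn @ J_p @ J_rot.T @ J_hwp @ J_rot
E_out = J_qubic @ np.array([1.0, 0.2])   # action on an incoming field (Ex, Ey)
```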

By performing self-calibration for all the baselines of the instrument and all the bolometers, and by scanning the artificial source, one can build a system of linear equations in the unknowns listed above. The number of unknowns grows linearly with the number of horns, while the number of constraints grows with the number of baselines, which is n_h(n_h − 1)/2, where n_h is the number of horns. The problem therefore quickly becomes overdetermined and can be solved with a least-squares method.

As shown in [92], self-calibration applied to QUBIC results in a very significant reduction of the systematics, see table 3.1. The very idea of bolometric interferometry was motivated by the opportunity to combine the high sensitivity of imagers with the ability of interferometers to handle instrumental systematic effects. A common question about QUBIC is whether this ability to self-calibrate is really crucial. We are now attempting the measurement of the primordial B-modes, one of the most demanding observations of modern cosmology, and no team in the world has yet succeeded in this task. At the current sensitivity level of imagers the systematic effects are not yet critical¹, but we also know that this sensitivity level is insufficient for B-modes. We therefore need to think ahead and foresee the growing importance of systematic effects in future CMB observations. The concept of the bolometric interferometer, incarnated in the QUBIC instrument, achieves a control of systematic effects unprecedented for any imager.

3.2 QUBIC instrument

3.2.1 QUBIC instrument subsystems

The 3-D model of the QUBIC instrument is shown in figure 3.3 (you may also refer to figure 3.1). The instrument is 1.547 m high and 1.42 m in diameter, and it weighs about 800 kg. All the subsystems of the instrument are described in detail in the technical design report [14]; here we list them briefly.

3.2.1.1 Mount system and baffling

The QUBIC instrument uses a rather standard mounting for an astronomical instrument, an alt-azimuthal mount, shown in figure 3.4. It allows the rotation of the instrument around three axes: in azimuth, in elevation and around the optical axis.

The instrument window is protected from undesired radiation by a radiation shield composed of a forebaffle and a ground shield, see figure 3.5. This baffling reduces the possible contamination from sources such as the Sun, the Moon and the ground.
¹ This statement is correct for ground-based and balloon-borne experiments, which usually use more advanced technologies than space missions.

Figure 3.3: The 3-D model of the QUBIC instrument. The labelled elements are: window, half-wave plate (HWP) and HWP rotator, polarising grid, horn array, switches, cold box, dichroic, mirrors, focal plane and pulse tubes.

3.2.1.2 Cryostat

The QUBIC cryostat is a multi-staged system. It consists of:

• A vacuum jacket that prevents heat exchange with the outside.

• The main cryostat, consisting of two pulse tubes (visible in figure 3.3) that cool the experiment volume down to 4 K. The beam combiner optics, the HWP, the polarising grid and the horn array are inside the pulse-tube refrigerator. It also serves as a pre-cooling stage for the inner cryostats.

• A 4He refrigerator for the optical system, which cools the mirrors and the dichroic down to 1 K.

• A 3He refrigerator for the detector arrays, operating at 0.3 K.

For a full description of the cryostat we refer to the QUBIC technical design report [14].

Figure 3.4: Mount system design of QUBIC with forebaffle.

Figure 3.5: Shielding for QUBIC, consisting the forebaffle and the ground shield.

3.2.1.3 Window, half-wave plate and polarising grid

Light enters through the window, shown at the top of figure 3.3. This is the first optical element encountered by the incoming radiation, and it also separates the outside atmosphere from the cryostat vacuum jacket. To hold about 2.4 tons of atmospheric pressure the window must be stiff, but also transparent to millimetre waves; the QUBIC window is a 20 mm thick slab of high-density polyethylene.

The half-wave plate, the first element encountered by the incoming light that modulates its polarisation, is made of metamaterials, developed using the embedded metal-mesh filter technology. This technology has already been used in past CMB experiments such as NIKA and NIKA2 [88]. The rotation of the half-wave plate is driven by a stepper motor mounted outside the cryostat shell, and the motion is transmitted to the half-wave plate by magnetic friction; the QUBIC half-wave plate thus rotates in steps.

The polarising grid is a wire-grid photolithographic polariser with a 10 µm period. Note that both the half-wave plate and the polarising grid are cooled down to 4 K.

Let us consider how the polarimeter part of the instrument, i.e. the half-wave plate and the polarising grid, works. For this we write down the Jones matrix of the combination of a rotating half-wave plate and a polarising grid:

J = J_{pol}\, J_{rot\,hwp} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \cos(2\phi(t)) & \sin(2\phi(t)) \\ \sin(2\phi(t)) & -\cos(2\phi(t)) \end{pmatrix},    (3.4)

where φ(t) is the rotation angle of the HWP at time t.

Thus the two-component electric field \begin{pmatrix} E_x \\ E_y \end{pmatrix}, passed through the system of a half-wave plate and a polariser, becomes

J \begin{pmatrix} E_x \\ E_y \end{pmatrix} = \begin{pmatrix} E_x \cos(2\phi(t)) + E_y \sin(2\phi(t)) \\ 0 \end{pmatrix}.    (3.5)

It is a mixture of the polarisations of the incoming photon with known coefficients, defined by the rotation angle φ(t) of the HWP. Writing down the intensity of the light after the polarising grid, we get

I_{PG} = \left\langle (E_x \cos(2\phi(t)) + E_y \sin(2\phi(t)))^2 \right\rangle    (3.6)

       = \frac{1}{2}\left( I + Q\cos 4\phi + U \sin 4\phi \right),    (3.7)

where I, Q and U are Stokes parameters of the incoming radiation.

The polarising grid reduces the total intensity by a factor of two. It may seem unreasonable to lose half of the incoming photons, but the chosen configuration has the significant advantage of being insensitive to cross-polarisation in the inner part of the instrument. Whatever happens between the polarising grid and the focal planes, the total intensity reaching the detectors is defined only by expression 3.7, that is, only by the rotation angle of the HWP. Of course this puts strong requirements on the design and manufacturing quality of the QUBIC polarimeter, but it also cancels a significant part of the possible systematics.

φ      | 2 I_PG
0      | I + Q
π/16   | I + (Q + U)/√2
2π/16  | I + U
3π/16  | I − (Q − U)/√2
4π/16  | I − Q
5π/16  | I − (Q + U)/√2
6π/16  | I − U
7π/16  | I + (Q − U)/√2

Table 3.2: Signal passing through the half-wave plate and polarising grid as a function of the half-wave plate rotation angle φ.

From equation (3.7) we see that the Stokes parameters are modulated as the sine and cosine of 4φ(t), which means that all the angles φ differing by π/2 give exactly the same signal I_PG. We have I_PG(φ = 0) = ½(I + Q) and I_PG(φ = π/4) = ½(I − Q); that is, with half-wave plate rotation steps of π/4 we never reach a measurement of U. On the contrary, for φ = π/8 and φ = 3π/8 the signal is I_PG = ½(I ± U). Thus with half-wave plate steps of π/8 we observe either I ± Q or I ± U. To obtain a mixture of them, the rotation angle must be stepped by π/16 = 11.25° (or even smaller steps). The signals passing towards the horn array as a function of the half-wave plate rotation angle are shown in table 3.2. We conclude that a reasonable stepping for the half-wave plate rotation is 11.25°.
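A small numerical check of this modulation (the input Stokes values are arbitrary) reproduces the entries of table 3.2 by evaluating equation 3.7 on half-wave plate angles stepped by 11.25°:

```python
import numpy as np

def i_pg(i, q, u, phi):
    """Intensity after the half-wave plate and polarising grid, equation 3.7."""
    return 0.5 * (i + q * np.cos(4 * phi) + u * np.sin(4 * phi))

I, Q, U = 1.0, 0.1, 0.05          # arbitrary input Stokes parameters

# Half-wave plate angles stepped by pi/16 = 11.25 degrees, as in table 3.2.
for k in range(8):
    phi = k * np.pi / 16
    print(f"phi = {np.degrees(phi):6.2f} deg   2*I_PG = {2 * i_pg(I, Q, U, phi):.4f}")
```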

3.2.1.4 Horn array

The next important element of the instrument encountered by the incoming radiation is the horn array, an array of 400 pairs of horn waveguides. The map of the horns is shown in figure 3.2; the horns are located on an orthogonal grid. The horn array is made of thin aluminium plates, which allows the horn profile to be shaped with great accuracy: holes are drilled in the plates according to the horn cross-section, then the plates are stacked together to form the horn array. A picture of the 8 × 8 horn array produced for the technological demonstrator is shown on the left panel of figure 3.6. The corrugation of the horns and their profile select the spatial modes, make the beam Gaussian and reduce the cross-polarisation [94].

The switches in the middle of each horn pair are shutters that operate independently for each channel. They are used only during the calibration phase.

Figure 3.6: Picture of the horn array, produced for the technological demonstra-
tor of QUBIC (left). Close picture of a horn cut (center). Mirror, produced for the
technological demonstrator (right).

3.2.1.5 Mirrors

The light from the horns is focused by two off-axis mirrors onto the focal planes. The mirrors act as the optical equivalent of the correlator devices of the usual interferometer concept. A picture of the mirror produced for the technological demonstrator is shown on the right panel of figure 3.6. The mirrors have supports with 6 degrees of freedom, allowing the alignment to be corrected for possible manufacturing errors that could introduce additional aberrations.

3.2.1.6 Dichroic and filters

After the mirrors the light is separated into two bands, 150 and 220 GHz, by the dichroic. QUBIC is thus a dual-band experiment, which allows an efficient control of the dust contamination. The dichroic is an optical element that transmits light at one frequency and reflects light at another: it transmits more than 90% of the 220 GHz band and reflects more than 90% of the 150 GHz band. It is manufactured using a hot-pressing technique, which provides good performance and flatness under cryogenic cycling.

The filters are designed to cut the off-band light. They are used at different temperature stages, from the half-wave plate down to the focal planes.

3.2.1.7 Focal planes

QUBIC has two focal planes, one for the 150 GHz band and one for the 220 GHz band. One of the focal planes is shown at the bottom centre of figure 3.3 (the other one is not shown). The focal planes are covered, as already mentioned, with TES bolometers, which are background limited. QUBIC thus inherits the main advantage of the imager instruments, high sensitivity, which is absolutely necessary for primordial B-mode observations. Each focal plane contains 992 detectors. The design of the focal plane is shown in figure 3.7. The QUBIC TESs operate at a temperature around 300 mK, see figure 3.8. The TESs are made with an NbSi thin film, a quite popular choice for TES production, and they are not sensitive to polarisation. The total Noise Equivalent Power (NEP) is 5 × 10⁻¹⁷ W/√Hz at 150 GHz, with a time constant in the 10–100 ms range. The light absorption is achieved with a palladium grid (red grid on the bottom right panel of figure 3.7).

Figure 3.7: Design of the 1024 bolometer array (left), one pixel of it (top right) and
the TES detector with its electrodes (bottom right). See text for the explanations.

The detection chain of QUBIC consists of the TESs themselves; each TES is then amplified by a SQUID, a superconducting amplifier. The SQUIDs are arranged in arrays of 32. Four SQUID arrays are read by an ASIC (application-specific integrated circuit), giving a time-domain multiplexing factor of 128 per ASIC. Each quarter of each focal plane is read by two ASICs (see figure 3.9).

Figure 3.8: Transition edge for four detectors distributed far from each other on one quarter of the focal plane.

Figure 3.9: Two SQUID boards stacked (left) to finally obtain a SQUID box composed of 4 PCBs, and thus 128 SQUIDs (centre). TES thermo-mechanical structure showing the 2 SQUID boxes near the TES array.

3.2.2 The QUBIC site in the Puna desert

Initially QUBIC was supposed to operate at the Concordia station in Antarctica, but finally the steering committee of the experiment decided to install the experiment in the Puna desert in Argentina. The main differences between the two sites concern:

• Water vapour in the atmosphere,

• Seasonal changes,

• Scanning strategy,

• Logistics.

The weather differences are illustrated in figure 3.10. In Argentina we have to deal with a higher level of water vapour, leading to a higher emissivity of the atmosphere. The seasonal changes are also stronger in Argentina, which reduces the observational efficiency of the experiment. On the other hand, the seasonal dead time, when observations are impossible due to poor weather conditions, is a very convenient time to perform instrument upgrades; and the daily dead time, when the field of interest is outside the elevation range of the instrument, can be used for self-calibration and for recycling the fridges.

Figure 3.10: Noise Equivalent Power at both sites Argentina and Antarctica, for
frequencies 150 and 220 GHz, as a function of month of the year.

The Concordia station site was good in terms of scanning strategy, because the sky patch of interest is visible there all day long, which is impossible from the Puna desert. However, the daily dead time can also be used to observe a different patch, for example closer to the galactic plane; such observations are crucial for testing the component separation. QUBIC is able to observe the sky only in a limited range of elevations. This is because the pulse tubes of the cryogenic system require near-vertical positioning, with a maximum tested inclination of 15°, thus allowing elevations from 35° to 65° (the central elevation, at which the pulse tubes are vertical, is 50°). We should be able to reach a maximum inclination of 20°, which would allow the elevation range from 30° to 70° to be explored. The QUBIC patch at right ascension 0.0 and declination −46.0 is the most promising patch of the sky because of its low dust emission level, see figure 3.11, but in Argentina it rises above the horizon only for a part of the day. We also analyse the possibility of observing the sky patches used by the PolarBear experiment – RA4.5, RA12, RA23 (the patches are named after their central right ascension) [86]. The PolarBear site is situated not far from the QUBIC site, just about 150 km to the north-west, so the horizontal coordinates of the PolarBear patches are not very different for the two sites. The PolarBear patches, together with the QUBIC patch, are shown in figure 3.12. The PolarBear patches were chosen for their low dust intensity and their availability during the day. The availability of all the patches is shown in figure 3.13: a field is available for observation if it is within the allowed range of elevations. Analysing
this plot we can conclude that the following partition of the observational time is reasonable: we look at the QUBIC patch from about 5 PM until 3 AM (the exact time depends on the scanning strategy). Then from 5 AM to 1 PM we can observe the RA12 patch in the northern galactic hemisphere. This time, as well as the time from 3 to 5 AM and from 1 to 5 PM, can also be spent on self-calibration.
Figure 3.11: Maps of galactic dust emission, measured by Planck at 150 GHz. The estimated contamination in rd from dust is plotted on the top and the associated uncertainty σ(rd) is presented on the bottom. The BICEP2 deep-field region is shown with the black contour. The centre of the QUBIC patch is shown with the black star. The picture is from [11].

The logistics is something we cannot neglect when dealing with sophisticated equipment, and on this point the Argentinian site is much better. Basically, it permits to gain at least a year in the time-line. Moreover, the maintenance of the instrument at Concordia is extremely difficult. Usually, when anything happens to the equipment during the polar winter,
13

Figure 3.12: QUBIC patch and PolarBear patches, overlaid on a full-sky 143 GHz
intensity map of Planck [7].

all repair work is postponed until summer. As QUBIC is a cryogenic instrument it requires permanent maintenance, and if this cannot be provided the observational efficiency can drop dramatically. Also, the far-future plans for QUBIC include the installation of more modules, which will only be possible in Argentina: electric power is a scarce resource in Antarctica and running multiple QUBIC modules there is simply impossible.

3.2.3 Time-line

Currently the main priority of the QUBIC team is the manufacturing of the technological demonstrator, which should be achieved in 2017. This demonstrator will serve to show the functionality of the concept, to demonstrate the performance and the ability of the team to design, manufacture and test all the sub-systems. In case of any problems in the manufacturing and/or testing phase, it is easier to solve them on the demonstrator, thus getting ready for the manufacturing of the full-scale instrument. This growing-complexity method is good for any complex system, as it saves time in the construction phase and mitigates the technical risks.

The technological demonstrator is a reduced-scale copy of QUBIC with 8 × 8 horns, small mirrors, no dichroic and a small focal plane with 256 detectors (one quarter of a focal plane of the full QUBIC design), but with the nominal cryostat. At the moment, the horn array, the mirrors and the switches are already completed, and the cryostat is under construction.

Figure 3.13: Elevation of different fields above the horizon (dashed horizontal line) for the Puna site. Shaded regions show the allowed ranges of elevation. The elevation of the QUBIC patch as seen from the Concordia station is shown for comparison.

Fabrication of the full instrument starts in 2017 and should be completed by the end of 2017. Meanwhile, the works on the site must be completed: the road to the site, the electric power supply, the basic on-site buildings and the instrument foundation. The mounting of QUBIC on the site is planned for March and the commissioning for April 2018. The goal is to have the first observational season of QUBIC in the summer of 2018 (which is winter in the Southern hemisphere).
Chapter 4

Map-making in monochromatic case

In this chapter we discuss the basics of map-making for bolometric interferometers, using the simplest case of monochromatic light. To make the introduction smoother, we first describe imager map-making and then consider the bolometric interferometer as an imager that observes the sky with a complex synthesised beam. We introduce an approximation of the synthesised beam that makes the map-making problem computationally tractable. We then elaborate the acquisition model by introducing the fusion acquisition. Besides participating in the implementation of the map-making, an important contribution of the author was to refine the synthesised beam approximation by taking into account some of its minor features, and to test the map-making with Monte-Carlo simulations.

4.1 QUBIC pipeline

The overall data-handling process of an experiment is called a pipeline. The input of the pipeline is the so-called time-ordered data (TOD): the time-ordered array of signals from each of the detectors on the focal plane. The TOD contains one number (4 bytes) per detector (1984 detectors for the two focal planes) for each sample (at a sampling rate of 100 Hz there are 8 640 000 samples per day), so the computer memory needed to store one day of data is more than 60 GB. From the TOD we then reconstruct the sky map, a three-component HEALPix map (see the description of the HEALPix package in [95]) for the 3 Stokes parameters. If we use maps with the nside parameter equal to 128, the number of covered pixels is around 3000–4000, i.e. around 40 kB of data. From the maps we reconstruct the power spectra, several binned arrays, so just a few numbers. The last step of the pipeline is to estimate the cosmological parameters, among which the most interesting is r. Generally, the pipeline of any experiment tends to reduce the amount of data and increase the physical meaning of the result: in the case of QUBIC we compress gigabytes of TOD into just one number with error bars.
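The data volumes quoted above can be checked with a trivial order-of-magnitude calculation (assuming, as in the text, 1984 detectors, 100 Hz sampling, 4 bytes per sample and about 3500 covered pixels):

```python
# Order-of-magnitude check of the QUBIC data volumes.
n_det, f_sample, bytes_per_sample = 1984, 100, 4
tod_per_day = n_det * f_sample * 86400 * bytes_per_sample
print(tod_per_day / 1e9, "GB of TOD per day")                   # about 68 GB

n_pix, n_stokes = 3500, 3
print(n_pix * n_stokes * bytes_per_sample / 1e3, "kB per map")  # about 40 kB
```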

Figure 4.1: Sketch of the QUBIC pipeline, relating the amounts of data at each stage: the TOD (>10^10 numbers), the CMB maps (~10^3 numbers), the power spectra (~10^1 numbers) and the cosmological parameters (~10^0 numbers). The data-analysis direction goes through map-making, power spectrum reconstruction (HEALPix) and spectrum fitting; the simulation direction goes the opposite way, from cosmological parameters through CAMB spectra and map generation to the TOD via the QUBIC acquisition model.

The simulation pipeline is just the reverse of the data-analysis pipeline: we start from the cosmological parameters, compute the corresponding spectra, generate maps from the spectra, and from the maps, using the instrument acquisition model, we build the simulated TOD. The simulation and data-analysis pipelines are sketched in figure 4.1.

In this chapter we introduce the QUBIC pipeline starting from the most memory-consuming steps: the simulation of the TOD from a map and the reconstruction back to a map. The documentation of the QUBIC simulation and analysis software package is given in appendix A.

We will heavily use the following notations: np for the number of sky pixels, nt for the number of time samples and nd for the number of detectors on the focal plane.

4.2 Imager map-making

Before introducing the QUBIC map-making, let us start with the simpler case of imager map-making. In this section we do not aim to treat imager map-making in depth; instead we briefly discuss its main ideas. A detailed discussion of imager map-making techniques can be found in [96]. The TOD of an imager can be modelled as

y = Hx + n,    (4.1)

where x is the three component pixelized sky map (for I, Q and U Stokes parameters), y
is the TOD, n is the noise on the detectors and H is the acquisition model operator, that
includes the instrument beam, polarisation modulation and pointing information. This
equation could be simplified: instead of modelling the instrument beam and including it
to the acquisition matrix, one can convolve the sky x with the instrumental beam and
model the observation as if the instrument beam is infinitely narrow. The convolved sky x̃
is equal to Cx, where C is the beam convolution operator. Of course, this approximation
is valid only if the beam is the same for all the detectors of an instrument.

H is a sparse matrix operator that maps the n_p sky pixels onto the n_d n_t detector samples (in the case of polarisation-sensitive observations the sky vector has 3n_p elements, where the factor 3 accounts for the 3 Stokes parameters I, Q and U). Each row of this matrix holds the information about the pixel observed by one particular detector at a given time sample. In this approach, with the TOD modelled as

y = H x̃ + n,    (4.2)

each row of H contains a single non-zero element equal to 1: each detector at every time sample observes exactly one convolved sky pixel.

Equation (4.2) is a matrix equation where the matrix H is not square, so it cannot be inverted directly. Instead the method of pseudo-inversion is used:

H^T H x̃ = H^T y.     (4.3)

The matrix H^T H is square, so this equation can be inverted in the usual way. The obtained solution is optimal (that is, it maximizes the likelihood) and unbiased, but only in the case of uncorrelated uniform noise.

In the general case, the noise has a non-diagonal covariance matrix N:

N = ⟨n n^T⟩.     (4.4)

It has shape n_d n_t × n_d n_t. In practice the noise of ground-based CMB observations is dominated by the atmosphere, and the atmospheric noise is not white: it is high at low frequencies and has a long "white" tail. Noise with such properties is called brown or 1/f noise: at low frequencies its intensity is proportional to the inverse of the frequency. Above some frequency, called the knee frequency, the noise becomes white. The low frequency noise induces striped structures on the reconstructed maps along the scan lines of the instrument. The low frequency noise can be filtered out from the TOD, making the noise covariance matrix diagonal:

 
N = diag( σ_1² I_{n_t},  σ_2² I_{n_t},  …,  σ_{n_d}² I_{n_t} ),     (4.5)

where σ_i² is the noise variance of the i-th detector and I_{n_t} is the identity matrix of dimension n_t. But then some part of the signal is also removed during the filtering.

To find the maximum likelihood solution of equation (4.2) we use Bayes' theorem:

L(y|x̃) = P(x̃|y) = P(y|x̃) P(x̃) / P(y).     (4.6)

The denominator describes the probability of taking the data and does not change the position of the maximum of the likelihood function. Let us consider the simple example of flat-prior observations: P(x̃) = const. Then the probability of the CMB sky given the data is proportional to the probability of taking the data given a CMB, which is obviously proportional to the noise probability distribution: we expect the data to deviate from the noiseless CMB by the gaussian noise. The noise probability distribution is an n_t-dimensional gaussian distribution:

P(n) = 1/√(|(2π)^{n_t} N|) exp( −(1/2) n^T N^{−1} n ).     (4.7)

Using equation (4.2) we obtain:

P(x̃|y) ∝ P(y|x̃) ∝ 1/√(|(2π)^{n_t} N|) exp( −(1/2) (y − H x̃)^T N^{−1} (y − H x̃) ).     (4.8)

And the χ² is

χ² = −2 log L = (y − H x̃)^T N^{−1} (y − H x̃)
   = y^T N^{−1} y − y^T N^{−1} H x̃ − x̃^T H^T N^{−1} y + x̃^T H^T N^{−1} H x̃.     (4.9)

The terms y^T N^{−1} H x̃ and x̃^T H^T N^{−1} y are equal scalars, so y^T N^{−1} H x̃ + x̃^T H^T N^{−1} y = 2 x̃^T H^T N^{−1} y.

We are looking for the minimum of the χ² function:

∂χ²/∂x̃ = ∂χ²/∂x̃^T = 0.     (4.10)

Let us take the derivative with respect to x̃^T. For this we need some rules for matrix derivatives [97]:

∂(x^T A x)/∂x = 2 A x     (4.11)

and

∂(x^T A)/∂x = A,     (4.12)

where x is a vector and A is a matrix. Applying these rules to take the derivative of (4.9), and taking into account that ∂(y^T N^{−1} y)/∂x̃ = 0, we get the following matrix equation:

H^T N^{−1} H x̃ = H^T N^{−1} y.     (4.13)

And finally the least squares solution of equation (4.2) is:

x̃ = (H^T N^{−1} H)^{−1} H^T N^{−1} y.     (4.14)

Here the matrix N weights the measurements from the different detectors according to their noise level. In the case of a noise covariance matrix proportional to the identity matrix, the solution (4.14) is equivalent to the simplified one (4.3).
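As an illustration of equation (4.14), the following minimal sketch (not the QUBIC or any actual imager pipeline code) builds a toy pointing matrix H with a diagonal noise covariance and solves H^T N^{−1} H x̃ = H^T N^{−1} y iteratively with a conjugate gradient; all sizes, the random pointing and the noise level are illustrative assumptions.

```python
# Minimal sketch of the maximum-likelihood map-making of eq. (4.14) for a
# toy imager: diagonal noise covariance, one observed pixel per sample.
# Sizes and noise levels are illustrative, not QUBIC values.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n_pix, n_samples = 300, 5000
rng = np.random.default_rng(0)

# Pointing matrix H: each row (time sample) contains a single 1.
# Every pixel is observed at least once so that H^T N^-1 H is invertible.
rows = np.arange(n_samples)
cols = np.concatenate([np.arange(n_pix),
                       rng.integers(0, n_pix, n_samples - n_pix)])
H = sp.csr_matrix((np.ones(n_samples), (rows, cols)), shape=(n_samples, n_pix))

x_true = rng.standard_normal(n_pix)              # toy convolved sky
sigma = 0.1 * np.ones(n_samples)                 # per-sample noise rms
y = H @ x_true + sigma * rng.standard_normal(n_samples)

Ninv = sp.diags(1.0 / sigma**2)                  # diagonal N^-1
A = H.T @ Ninv @ H                               # H^T N^-1 H
b = H.T @ (Ninv @ y)                             # H^T N^-1 y
x_ml, _ = cg(A, b)                               # iterative pseudo-inversion

print("rms reconstruction error:", np.sqrt(np.mean((x_ml - x_true)**2)))
```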

For each time sample the acquisition model associates each detector of the focal plane with a certain direction on the sky, and it is possible to solve equation (4.2) to reconstruct the input CMB emission x̃. In the case of a bolometric interferometer the procedure is not as straightforward, though it is quite similar.

4.3 QUBIC map-making

4.3.1 Initial assumptions for QUBIC simulation pipeline

At the time of writing this thesis the QUBIC instrument is still in the construction phase. Although we try to implement the instrument in the most realistic way, we still have to make some assumptions as long as the real instrument does not exist. Our assumptions are:

• The centre of the reference frame of the instrument is placed right in the centre of
the horn array.

• The two focal planes are considered as absolutely identical, with equivalent optical
path to both. Thus in simulations we always consider QUBIC as a single-banded
instrument with one focal plane: the TOD is always constructed for only one focal
plane, the map-making is made only for one focal plane TOD etc. To consider the
difference between two frequency bands we run two separate simulations.

• We assume a perfect half-wave plate.

• And a perfect polarizing grid.

• We assume that both primary and secondary beams for each pair of back-to-back
connected horns are purely gaussian with full width at half maximum (FWHM)
13◦ .

• We neglect optical aberrations in the mirrors.

• We adopt a simplified model for detector acquisition which assumes that the in-
tensity of the synthesized beam is always constant within the area of the detector.
Thus the detector response, which is the flux integrated in the surface of the de-
tector, is calculated simply as the synthesized beam intensity in the center of the
detector times the area of the detector.

First we consider a simple monochromatic case: we suppose that the frequency filter of
the instrument passes only a δ-function of the continuous frequency range.

4.3.2 Synthesized beam

As already described in chapter 3, the bolometric interferometer concept implies observation of the sky with a complex synthesized beam, which is formed by the interference of the individual beams from each horn pair. A similar synthesized beam can be demonstrated with optical light, using two orthogonal diffraction gratings (see figure 4.2). Remember that for one diffraction grating the interference pattern looks like a series of lines, each line corresponding to a certain order of diffraction. When we observe a point source through a 2D grating, we can consider the pattern produced by the first grating as modulated by the second one, and what is left is a number of bright spots exactly at the positions where the interference lines from the individual gratings would intersect.

Figure 4.2: Diffraction of the beam of the green laser on the 2D diffractive grating.
Multi-peaked interferometry pattern resembles the synthesized beam of QUBIC.

Let us consider how the synthesized beam is formed. The signal at point r of the focal plane at wavelength λ is built from the electric field from the sky E(n), re-emitted by the horns, each with its proper phase shift:

S(r, λ) = ∫ | Σ_i E(n) B_prim(n) B_sec(r) exp[ i2π (x_i/λ) · (r/D_f − n) ] |² dn,     (4.15)

where B_prim(n) is the primary beam – the input beam of the horns, which acts in the sky-direction space n; B_sec(r) is the secondary beam, or the output beam of each horn, which acts in the focal-plane space r; x_i is the position of horn i; D_f is the focal distance. The exponential term under the integral is responsible for the interference between the beams from the different horns. Equation (4.15) can be re-written as

S(r, λ) = ∫ |E(n)|² B_S(n, r, λ) dn,     (4.16)

where B_S(n, r, λ) is the synthesized beam (SB):

B_S(n, r, λ) = B_prim(n) B_sec(r) | Σ_i exp[ i2π (x_i/λ) · (r/D_f − n) ] |².     (4.17)

If the horns are distributed on a uniform orthogonal grid, the sum in the last equation can be computed analytically, giving

B_S(n, r, λ) = B_prim(n) B_sec(r) × [ sin²( n_h π (Δx/λ)( r_x/D_f − n_x ) ) sin²( n_h π (Δx/λ)( r_y/D_f − n_y ) ) ] / [ sin²( π (Δx/λ)( r_x/D_f − n_x ) ) sin²( π (Δx/λ)( r_y/D_f − n_y ) ) ],     (4.18)

where n_h is the number of horns on one side of a square horn array and Δx is the distance between them [98]. The synthesized beam in this case has the particular multi-peaked shape already described at the beginning of this section. Figure 4.3 shows the radial cut of the synthesized beam for two detectors on the focal plane: one in the centre and one apart.

The peaks of the synthesized beam are situated on a grid with angular step θ = λ/Δx, and their full width at half maximum is approximately λ/(n_h Δx). For QUBIC, with 400 horns packed in a circle, n_h is approximately 20.
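The analytic expression (4.18) is easy to evaluate numerically. The short sketch below computes a one-dimensional cut of the synthesized beam for a detector at the centre of the focal plane (r = 0), assuming a square horn array with n_h = 20, Δx = 1.4 cm and a 13° gaussian primary beam; it only illustrates the formula and is not the pipeline implementation.

```python
# Illustrative 1-D cut (up to normalisation) of the analytic synthesized
# beam of eq. (4.18), for a detector at the centre of the focal plane.
import numpy as np

c = 3.0e8
nu = 150e9                         # observation frequency [Hz]
lam = c / nu                       # wavelength [m]
dx = 0.014                         # horn spacing [m]
n_h = 20                           # horns per side (approximate)
fwhm_prim = np.radians(13.0)       # primary beam FWHM [rad]
sig_prim = fwhm_prim / np.sqrt(8 * np.log(2))

def dirichlet_sq(u, n):
    """sin^2(n u) / sin^2(u), with its limiting value n^2 where sin(u) = 0."""
    s = np.sin(u)
    out = np.full_like(u, float(n) ** 2)
    ok = np.abs(s) > 1e-9
    out[ok] = (np.sin(n * u[ok]) / s[ok]) ** 2
    return out

theta = np.radians(np.linspace(-25, 25, 5001))    # radial cut [rad]
u = np.pi * dx / lam * np.sin(theta)              # grating argument for r = 0
primary = np.exp(-theta**2 / (2 * sig_prim**2))   # gaussian primary beam
beam = primary * dirichlet_sq(u, n_h)             # multi-peaked cut
beam /= beam.max()

# Peak spacing check: theta ~ lambda / dx
print("expected peak spacing [deg]:", np.degrees(lam / dx))
```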

Figure 4.3: Radial cut of the synthesized beam for two detectors on the focal plane:
one in the centre of the focal plane (blue) and one 50mm apart (green). Modulating
primary beam is shown with red dashed line.

Let us stress again the distinctive features of a bolometric interferometer that follow from the fact that it observes the sky with a synthesized beam:

• each detector observes a large fraction of the sky at the same time;

• the synthesized beam is not axisymmetric (which is evident in figure 4.3);

• the synthesized beam is different for different detectors.

Thus the acquisition operator H of a bolometric interferometer becomes dense, which makes the exact map-making problem computationally intractable on any existing supercomputer. This means that an exact acquisition model for a bolometric interferometer is not feasible and one has to build an approximate model of the synthesized beam to handle it.

We can try to apply the imager operator H to reconstruct the TOD of a bolometric interferometer, taking QUBIC as an example. The result is shown in figure 4.4. It is evident that the imager map-making does not work for bolometric interferometry: although some fluctuations are reconstructed correctly, the overall performance is poor. This is precisely because a large fraction of the power of the synthesized beam, about 70%, is contained in its secondary peaks. The poor performance of the imager map-making is therefore expected, because it does not include a proper modeling of the instrument. In order to reconstruct the maps correctly we need to take into account all the peaks of the synthesized beam. The map-making for a bolometric interferometer, considered in the particular case of QUBIC, is described below.

Figure 4.4: Reconstruction of the bolometric interferometer TOD with the map-making algorithm of an imager. From left to right: input map of the CMB temperature anisotropies (simulation), convolved with a gaussian beam of 23.5′ width; reconstructed map; difference of the input and output maps. Units of the color axis are µK. For the simulations we used the QUBIC simulation pipeline with random pointing within a circle of radius 10° around the north galactic pole, 1000 samples, temperature-only noiseless observations.

4.3.2.1 Synthesized beam approximate model

Neglecting the minor features between the main peaks, one can approximate the synthesized beam of QUBIC as a sum of gaussian peaks placed at the positions of the SB peaks. In other words, the SB can be considered as the convolution of a narrow gaussian with a 2D Dirac brush, modulated by the horn primary beam. For detector d at time t the signal y is

y_{d,t} = B_{d,t}^T x,     (4.19)

where x is the sky (here we assume a noiseless observation) and B_{d,t} is the synthesized beam of detector d at time t. We approximate B_{d,t} as

B_{d,t} = Φ_{d,t} C D_{d,t},     (4.20)

where Φ_{d,t} is the primary beam, C is a gaussian convolution operator with FWHM equal to that of the peaks of the synthesized beam, and D_{d,t} is a 2D Dirac brush which is equal to 1 at the centres of the synthesized beam peaks and 0 everywhere else. Applying this model to equation (4.19) we get:

y_{d,t} = (Φ_{d,t} D_{d,t})^T C x = P̃_{d,t} x̃,     (4.21)

where P̃_{d,t} is the projection operator that maps from the sky pixel domain to the time domain, and x̃ = Cx is the sky convolved according to the instrument resolution. With this model we neglect all the minor features of the SB between the main peaks. Thus the acquisition operator becomes sparse and the acquisition model becomes computationally tractable. Figure 4.5 shows a radial cut of the SB, as modeled from interferometry and with the gaussian approximation.

Taking the noise into account, equation (4.21) becomes

y_{d,t} = P̃_{d,t} x̃ + n_{d,t},     (4.22)

where n_{d,t} is the noise on detector d at time t.
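A minimal sketch of how the approximation (4.20)-(4.21) can be set up for the central detector is given below: the peak directions form a regular grid with step λ/Δx and their amplitudes follow the primary beam. The grid extent, the normalisation and all numerical values are illustrative assumptions, not the pipeline code.

```python
# Sketch of the approximation of eq. (4.20): the synthesized beam of the
# central detector is modelled as a grid of peaks (a "Dirac brush")
# weighted by the gaussian primary beam; each peak is later convolved
# with a narrow gaussian.  Numbers are illustrative only.
import numpy as np

c, nu = 3.0e8, 150e9
lam = c / nu
dx = 0.014                                   # horn spacing [m]
theta_step = lam / dx                        # angular spacing of the peaks [rad]
fwhm_prim = np.radians(13.0)
sig_prim = fwhm_prim / np.sqrt(8 * np.log(2))

orders = np.arange(-2, 3)                    # keep interference orders up to 2
ix, iy = np.meshgrid(orders, orders)         # 5 x 5 = 25 peaks
theta_x = ix.ravel() * theta_step            # peak directions for the
theta_y = iy.ravel() * theta_step            # central detector (r = 0)
theta = np.hypot(theta_x, theta_y)

amplitude = np.exp(-theta**2 / (2 * sig_prim**2))   # primary-beam weight
amplitude /= amplitude.sum()                        # normalise the brush

# These (direction, weight) pairs are the non-zero entries that one row of
# the sparse acquisition operator would contain for this detector.
for tx, ty, a in zip(np.degrees(theta_x), np.degrees(theta_y), amplitude):
    if a > 0.01:
        print(f"peak at ({tx:+.1f}, {ty:+.1f}) deg, relative weight {a:.3f}")
```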



Figure 4.5: Radial cut of the SB: the reference green line is computed from interferometry and the red line is the approximation of equation (4.20). Logarithmic scale on the vertical axis.

4.3.3 Acquisition model

4.3.3.1 Acquisition model for a bolometric interferometer

The acquisition model for a bolometric interferometer is very similar to that of an imager:

y = H x̃ + n.     (4.23)

The only difference is hidden in the acquisition operator H. While for an imager it associated each TOD element with one sky direction, here H encodes the mixture of signals from many directions that is specific to bolometric interferometry. Considering temperature-only observations, H is an n_d n_t × n_p sparse matrix:

 0

0 ··· 0 P̃1,1 0 ··· 0 P̃1,1 0 ···
 . .. .. 
 .
 . . . 

0
 
 0
 0 P̃1,nt 0 ··· 0 P̃1,nt 0 ··· 

 . .. .. 
H =  ..
 . .
.  (4.24)
 0 
P̃n ,1
 d 0 ··· 0 P̃nd ,1 0 ··· 

 . .. .. 
 .. . . 
 
0
0 P̃nd ,nt 0 ··· 0 P̃nd ,nt 0 ···
0
where coefficients P̃d,t , P̃d,t etc. are coefficients correspond to the central, secondary and
lower order peaks of synthesized beam, as defined in the previous section. Each row of
H contains a SB for a certain detector at the moment t: let’s say a detector d sees the
Chapter 4. Map-making in monochromatic case 82

sky pixel p(n0 ) with the central peak, pixels p(n1,2,3,4 ) with the first order of diffraction
etc. Coefficients P̃d,t are equal to the SB values for these pixels. The zeros in H appear
due to the neglecting of the minor features of the SB.

4.3.3.2 QUBIC acquisition model

One can build the acquisition model for QUBIC by implementing the components of the instrument one by one, exactly in the sequence in which they appear in the instrument: the half-wave plate, then the polarising grid and the horn array. Let us consider the shapes of such operators if they are applied exactly in this order:

• The rotation of the instrument due to the scanning strategy: S is an operator of shape 3n_p × 3n_p n_t responsible for this rotation.

• The half-wave plate operator W operates in the sky-to-sky domain: it converts the three-component sky into a sky-like array with rotated polarisation. W also handles the rotation of the HWP, so we must introduce the dependence on time; its shape is 3n_p n_t × 3n_p n_t.

• The polarising grid operator G removes one component of the polarisation. It also has to keep the dependence on time introduced for W, so it is a 3n_p n_t × n_p n_t operator.

• The projection operator P is almost the same as the acquisition operator H for temperature-only observations, introduced in equation (4.24), but it is "fed" with a sky which depends on time. So the shape of P is n_p n_t × n_d n_t.

Thus the full acquisition model is

H = P GW S. (4.25)

One can easily notice that, implemented like that, the acquisition operator would be extremely heavy: although the operators are mostly sparse, their dimensions are large. Let us consider the signal on detector d at time t for QUBIC. The CMB sky, rotated according to the scanning strategy and convolved by the instrumental beam, is

( Ĩ(n), Q̃(n), Ũ(n) )^T.     (4.26)

If we consider only one particular moment, then the half-wave plate operator is

W = ( 1      0         0
      0   cos(4ω)   sin(4ω)
      0   sin(4ω)  −cos(4ω) )     (4.27)

and the polarising grid operator is

G = ( 1  1  0 ).     (4.28)

So the full acquisition operator, applied for one detector and one time sample, gives:

y_{d,t} = H_{d,t} x̃ = Φ_{d,t} D_{d,t} G W x̃
        = Φ_{d,t} D_{d,t} ( 1  1  0 ) ( 1      0         0
                                        0   cos(4ω)   sin(4ω)
                                        0   sin(4ω)  −cos(4ω) ) ( Ĩ(n), Q̃(n), Ũ(n) )^T     (4.29)
        = Φ_{d,t} D_{d,t} ( Ĩ + cos(4ω) Q̃ + sin(4ω) Ũ ),

where D is the Dirac brush and Φ is the horn primary beam. Note again that the effect of the synthesized beam is completely indifferent to polarisation: the polarisation of the signal that reaches the focal planes is defined only by the HWP and the polarising grid. Here we can exploit another consequence of this fact: we can painlessly rearrange the operators in the acquisition model. We can move the projection operator to the front and combine it with the rotation operator (remember that we cannot rearrange the W and G operators: that is forbidden by the fact that they both act on the polarisation). The acquisition model then becomes
H = GW P (4.30)

where

• P is the projection operator of shape 3np × 3nd nt .

• W is the HWP operator of shape 3nd nt × 3nd nt . Note that now it operates in the
TOD domain and thus it is much more compact than it was before the rearrange-
ment of the operators.

• G is the polarizer operator of shape 3nd nt × nd nt



Thus the H operator becomes more compact and easier to handle. Let us consider the operators P, W and G more closely.

P is a sparse operator of shape 3n_d n_t × 3n_p with sparsely distributed terms P̃_{d,t} R(φ_{d,t}), P̃′_{d,t} R(φ_{d,t}), etc. Here the coefficients P̃_{d,t} are the same as the ones introduced for the bolometric interferometer acquisition in general (4.24). The angle φ_{d,t} describes the rotation of the instrument relative to the sky due to the scanning strategy. R is the rotation operator:

R(φ) = ( 1     0        0
         0   cos(φ)   sin(φ)
         0   sin(φ)  −cos(φ) ).     (4.31)

The HWP operator W is a 3n_d n_t × 3n_d n_t block diagonal sparse matrix:

W = diag( R(4ω_1), …, R(4ω_{n_t}), …, R(4ω_1), …, R(4ω_{n_t}) ),     (4.32)

with one 3×3 block R(4ω_t) per detector and time sample. Here ω_t is the rotation angle of the half-wave plate at time t. It is thus clear that the HWP modulates the polarisation signal.

The polarising grid operator G is an n_d n_t × 3n_d n_t block diagonal sparse matrix:

G = (1/2) diag( (1 1 0), …, (1 1 0) ).     (4.33)

The meaning of one (1 1 0) block is that the polarizing grid passes the intensity of the incoming radiation together with one orientation of polarization, and blocks the other polarization, perpendicular to the one that passes.
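To make equation (4.29) concrete, here is a toy sketch of the signal of one detector at one time sample: the brush-weighted sum of Ĩ + cos(4ω)Q̃ + sin(4ω)Ũ over the observed pixels, with the factor 1/2 of the polarising grid (4.33). The pixel indices, weights and Stokes maps are placeholders, not pipeline quantities.

```python
# Minimal sketch of eq. (4.29): one detector, one time sample.
# The observed pixels, their weights and the Stokes maps are placeholders.
import numpy as np

n_pix = 12 * 128**2                          # e.g. a healpix nside=128 map
rng = np.random.default_rng(1)
I, Q, U = rng.standard_normal((3, n_pix))    # toy convolved Stokes maps [uK]

# Non-zero entries of one row of the projection operator P (hypothetical):
peak_pixels = np.array([100, 2350, 4800, 7770, 9000])
peak_weights = np.array([0.35, 0.25, 0.20, 0.12, 0.08])   # primary-beam weights

def detector_sample(omega_hwp):
    """Signal for HWP angle omega_hwp (radians), following eq. (4.29)."""
    modulated = (I[peak_pixels]
                 + np.cos(4 * omega_hwp) * Q[peak_pixels]
                 + np.sin(4 * omega_hwp) * U[peak_pixels])
    return 0.5 * np.dot(peak_weights, modulated)   # 1/2 from the grid, eq. (4.33)

for w in np.radians([0, 15, 30, 45]):
    print(f"HWP at {np.degrees(w):4.1f} deg -> signal {detector_sample(w):+.3f}")
```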

Here we have omitted many other constituents of the acquisition model. Let us list them:

• The unit conversion operator, which converts the temperature of the sky into units of radiation flux density, W/(m²·Hz);

• The aperture integration operator, which integrates the flux density over the telescope aperture and converts the signal from W/(m²·Hz) into W/Hz;

• The frequency filter operator, which converts units from W/Hz to W according to the filter transparency at the given frequency;

• The detector integration operator, which integrates the flux density over the detector solid angle;

• The instrument transmission operator;

• And the atmosphere transmission operator.

Currently, in the QUBIC data analysis package, these operators are implemented as simple constants. Later, when we discuss the QUBIC acquisition model in more complex cases, we will always omit these operators to keep the explanation shorter and clearer, but they are always present in the model. Starting from the technological demonstrator tests, these operators should be revised and replaced with more realistic ones.

4.3.4 Map-making

The QUBIC map-making involves the solution of a matrix equation similar to the one introduced for an imager:

H^T N^{−1} H x̃ = H^T N^{−1} y.     (4.34)

The solution of this equation is computed iteratively using the preconditioned conjugate gradient method (PCG) [99]. It is a very useful method for sparse systems of linear equations (it will be discussed in more detail in chapter 5). The QUBIC acquisition operator H for one full day of observations with one focal plane at a sampling rate of 100 Hz requires 880 GB of memory. It is therefore hard to run the QUBIC simulations on a desktop computer and we are obliged to use supercomputers. Of course, this problem will become even more important when the real data come. The computing facilities used for the QUBIC map-making are discussed in the following subsection.
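The quoted memory footprint can be checked with a rough back-of-the-envelope estimate, assuming about 25 retained peaks per matrix row, each stored as a 4-byte value and ignoring index storage and Stokes bookkeeping; the exact accounting of the pipeline differs, so this is only an order-of-magnitude sketch.

```python
# Back-of-the-envelope check (not the actual pipeline bookkeeping) of the
# memory footprint quoted for H: one focal plane, one day at 100 Hz, and
# roughly 25 synthesized-beam peaks per matrix row, 4 bytes per value.
n_det = 992                    # detectors on one focal plane (assumption)
n_t = 100 * 86400              # samples in one day at 100 Hz
peaks_per_row = 25             # kept peaks of the synthesized beam
bytes_per_entry = 4

size_bytes = n_det * n_t * peaks_per_row * bytes_per_entry
print(f"~{size_bytes / 1e9:.0f} GB")   # ~860 GB, same order as the quoted 880 GB
```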

To complete the comparison with an imager, figure 4.6 shows the result of the QUBIC map-making for a fast Monte-Carlo similar to the one of figure 4.4 (see the next section for explanations). This result is much more satisfactory: the residual map is almost zero, which means that we reconstruct the sky correctly. There are still some large scale residual fluctuations. These fluctuations arise from the fact that the synthesized beam is very wide and, when the instrument is pointed to the edge of the field, some signal comes from very poorly observed pixels of the sky. This issue is discussed in section 4.3.5. Let us just mention here that these induced fluctuations have an angular scale larger than that of the fluctuations we are looking for with QUBIC.

Figure 4.6: Reconstruction of the bolometric interferometer TOD. From left to right: input map of the CMB temperature anisotropies (simulation), convolved with a gaussian beam of 23.5′ width; reconstructed map; difference of the input and output maps. Units of the color axis are µK. We use the same TOD as was used for the simulations shown in figure 4.4. Note that the observed field is larger than in figure 4.4 because of the side peaks of the synthesized beam.

4.3.4.1 Monte-Carlo simulations

As already mentioned at the beginning of this chapter, the data analysis and especially the map-making of a modern CMB experiment can be extremely heavy in terms of computational power. To test the performance of the QUBIC data analysis pipeline we run Monte-Carlo simulations. It is very difficult to run full-scale simulations, so we have to find ways to make lighter ones which still adequately represent the QUBIC pipeline.

We already briefly mentioned the fast, or random, simulations for QUBIC (for examples of such simulations one can refer to figures 4.4 and 4.6). These simulations can be run even on a desktop computer. They use a small number of pointings, typically 1000, distributed randomly within a circle of a given radius (usually 10°). 1000 pointings with the realistic QUBIC sampling of 100 Hz would correspond to just 10 s of real observations. The noise is scaled down to match the noise of a given observational period, usually one or two years. Such random simulations are idealistic in terms of scanning strategy: the problem of striped structures on the reconstructed map, discussed above, does not occur for random simulations. Moreover, the sky is sampled by the synthesized beam in the most varied way. With a realistic scanning strategy, when the instrument is pointed twice in the same direction n with a short time difference dt ∼ 1/ν_knee between the two samples, the 1/f noise can be efficiently filtered out; but, at the same time, two samples with the same position of the instrument do not add much information to the system of linear equations of map-making (4.34). With random pointings we simulate observations which constrain the measured fluctuations in the best possible way (although it is not realistic).

We also use more complicated simulations, which we call realistic. The simulated period of observations is usually one day, consistent with what we expect to do with the actual QUBIC data analysis: we plan to analyze the TOD day by day, then combine the daily maps into one. This approach is valid since the observational time is fragmented into 8-10 hour blocks, as described at the end of chapter 3, so we do not expect any correlation of the noise between daily observations. The noise is scaled down, as it is for the fast simulations. The scanning strategy is the realistic one, although in most cases we use a reduced sampling frequency: instead of 100 Hz we often use much lower values, down to 10 Hz and below. The issue of the sampling frequency is discussed in chapter 8. To run the realistic simulations the power of a desktop computer is not enough and one has to turn to computations on supercomputers. For QUBIC we use the computing facilities of NERSC and CURIE:

• The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science of the U.S. Department of Energy [100]. It serves many scientists around the globe for their simulations and data analysis. The QUBIC software is set up on the Edison system of NERSC, which is a Cray XC30 with 133,824 compute cores, 357 TB of memory, 7.56 PB of disk, and the Cray "Aries" high-speed internal network.

• The Curie supercomputer, owned by GENCI and operated at the TGCC by CEA, is the first French Tier-0 system open to scientists through the French participation in the PRACE research infrastructure [101]. It is a system of 5040 B510 bullx nodes, each node having 2 eight-core 2.7 GHz processors (80640 cores in total) and 64 GB of memory. The global memory is 5 PB with a 100 GB per second bandwidth.

The third type of simulation is what we call pseudo Monte-Carlo. The idea of the pseudo Monte-Carlo is based on the fact that, in the absence of 1/f noise, the noise on the map is almost uncorrelated between different pixels. In fact there could be some correlation at the angular scale of the separation of the peaks of the synthesized beam (8.5° for the 150 GHz band). But these scales are too large, so we can neglect this effect and assume the noise on the map to be uncorrelated. Then, after running just one simulation (fast or realistic, with no 1/f noise), we can do the following:

• Get the coverage map: the coverage map COV is the map of the number of hits in each pixel. It is defined as

COV = H^T e,     (4.35)

where e is a matrix of ones with a shape equal to that of the TOD, and the acquisition operator H here acts on a one-component sky map.

• Divide the coverage map into bins of almost constant coverage. In our experience a bin width equal to 5% of the maximum coverage is fine.

• Take the noise standard deviation of the residual map in each coverage bin.

• Put gaussian noise into the pixels of a new map according to the bin mask and to the standard deviation taken at the previous step.

The number of pixels is usually large, so the estimation of the standard deviation of the noise in each bin is well determined. Thus the maps built this way have noise of realistic level, distributed in the same way as on the original map. This procedure is very easy and fast and allows one to have as many sky and noise realizations as one might want. We have verified that the maps simulated in this way have the same power spectrum as the original one within the errorbars, see figure 4.7. However, the pseudo Monte-Carlo is not designed to work with 1/f noise.
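The procedure above is simple enough to be summarised in a few lines of code. The sketch below uses placeholder coverage and residual maps and assumes, as stated, pixel-uncorrelated noise; it illustrates the binning logic and is not the pipeline implementation.

```python
# Sketch of the pseudo Monte-Carlo: measure the noise rms of the residual
# map in bins of (almost) constant coverage, then draw new gaussian noise
# realizations bin by bin.  'coverage' and 'residual' are placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_pix = 10000
coverage = rng.random(n_pix)                                      # stand-in for H^T e
residual = rng.standard_normal(n_pix) / np.sqrt(coverage + 0.05)  # toy residual map

n_bins = 20                                         # ~5% of max coverage per bin
edges = np.linspace(0, coverage.max(), n_bins + 1)
which_bin = np.clip(np.digitize(coverage, edges) - 1, 0, n_bins - 1)

def pseudo_mc_realization():
    """One new noise map with the per-bin rms measured on the residual map."""
    noise = np.zeros(n_pix)
    for b in range(n_bins):
        mask = which_bin == b
        if mask.sum() > 1:
            noise[mask] = np.std(residual[mask]) * rng.standard_normal(mask.sum())
    return noise

maps = [pseudo_mc_realization() for _ in range(10)]   # as many realizations as needed
print("per-map rms:", np.round([m.std() for m in maps], 3))
```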

4.3.5 QUBIC-Planck fusion acquisition

As already mentioned, each detector of QUBIC sees a large fraction of the sky because of the synthesized beam. The distance between the central (zero order interference) peak and the second order peak is about 16 degrees in the 150 GHz band. That means each detector sees the sky within 16 degrees of the central peak of the synthesized beam. Thus, when the instrument is pointed to the edge of the coverage field, its detectors see poorly observed pixels of the sky, which contribute as noise to the central pixels: the PCG solver is poorly constrained on the periphery of the field.

This problem is solved by using the data of another instrument as an additional constraint for the PCG. We choose the Planck maps as such a constraint, because Planck is the most recent full-sky mission, but it could be data from any instrument that observed a broad field around the field of interest of QUBIC.

We introduce the so-called fusion QUBIC-Planck acquisition model:

( y_QUBIC  )   ( H_QUBIC  )        ( n_QUBIC  )
( y_Planck ) = ( H_Planck ) x̃  +  ( n_Planck )     (4.36)

Figure 4.7: Comparison of the power spectra reconstructed from the realistic and pseudo Monte-Carlo (here "full" means realistic and "fast" means pseudo Monte-Carlo). The bias is shown with the solid line and the level of the errorbars is shown with the dashed lines. The errors are built as the standard deviation over 10 realizations.

where H_QUBIC, y_QUBIC and n_QUBIC are the acquisition operator, TOD and noise of QUBIC, already described in the previous sections, H_Planck = I_{3n_p} is the Planck acquisition operator, which is just an identity operator of dimension 3n_p, n_Planck is the Planck noise and y_Planck = x̃ + n_Planck is the Planck CMB map [102], convolved by the QUBIC beam of 23.5′ width in the 150 GHz band and 16′ in the 220 GHz band.

The inversion of equation (4.36) is similar to that of the QUBIC-only acquisition model (4.34). The noise covariance matrix in the case of the fusion acquisition is a block diagonal matrix composed of the noise covariance matrices of QUBIC and Planck alone:

N_fusion = ( N_QUBIC      0
                0      N_Planck ).     (4.37)
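Conceptually, the fusion system (4.36)-(4.37) is just a vertical stacking of the two acquisitions with a block-diagonal noise weighting. The toy sketch below illustrates this with a one-component sky, an identity "Planck" acquisition and arbitrary noise levels; it is not the QUBIC-Planck implementation.

```python
# Toy illustration of the fusion acquisition: stack a sparse "QUBIC"
# pointing operator with an identity "Planck" acquisition and weight them
# with a block-diagonal N^-1.  Sizes and noise levels are illustrative.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n_pix, n_samples = 200, 4000
rng = np.random.default_rng(3)
rows = np.arange(n_samples)
cols = rng.integers(0, n_pix, n_samples)
H_qubic = sp.csr_matrix((np.ones(n_samples), (rows, cols)), shape=(n_samples, n_pix))
H_planck = sp.identity(n_pix, format="csr")           # Planck "acquisition" = identity

x_true = rng.standard_normal(n_pix)
sig_q, sig_p = 0.05, 0.5                               # toy noise rms values
y_qubic = H_qubic @ x_true + sig_q * rng.standard_normal(n_samples)
y_planck = x_true + sig_p * rng.standard_normal(n_pix)

H_fusion = sp.vstack([H_qubic, H_planck]).tocsr()      # stacked acquisition
Ninv = sp.diags(np.concatenate([np.full(n_samples, 1 / sig_q**2),
                                np.full(n_pix, 1 / sig_p**2)]))   # block-diag N^-1

A = H_fusion.T @ Ninv @ H_fusion
b = H_fusion.T @ (Ninv @ np.concatenate([y_qubic, y_planck]))
x_hat, _ = cg(A, b)
print("rms error:", np.sqrt(np.mean((x_hat - x_true)**2)))
```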

To check the usability of the fusion acquisition we run fast Monte-Carlo simulations with 1000 random pointings. The results are shown in figures 4.8, 4.9 and 4.10. One can see that the fusion acquisition model allows us to significantly reduce the noise induced by the poorly observed pixels, especially at the edge of the coverage field. The large angular scale fluctuations briefly discussed at the end of section 4.3.4 have also disappeared. The most valuable gain from the fusion acquisition is seen at distances of 15 – 20° from the center of the field, see figure 4.10. Here the noise is reduced by a factor 2 or more, which allows us to broaden the field of QUBIC. At distances of 20° and beyond QUBIC gives poor constraints and what we reconstruct is just the Planck map.

If there are residual polarization systematics in the Planck Q and U maps, they can propagate into the QUBIC-Planck fusion maps and thus may induce E to B leakage. This issue needs to be studied in detail through the following steps:

• first we model the cross-polarization of the Planck maps, using the Jones matrix formalism,

• then we run several Monte-Carlo realizations with and without cross-polarization,

• finally, we reconstruct the power spectra and compare the errorbars of the BB spectrum.

This study is currently ongoing.

Figure 4.8: QUBIC-only simulation results: three columns are for I, Q and U Stokes
parameters from left to right respectively. From top to bottom there are: input con-
volved maps, reconstructed maps and residual maps. Units on the color axes are µK.
Note high noise on the peripheral pixels.

Figure 4.9: QUBIC-Planck fusion simulation results: three columns are for the I, Q and U Stokes parameters from left to right respectively. From top to bottom there are: input convolved maps, reconstructed maps and residual maps. Units on the color axes are µK. Note the much lower noise on the peripheral pixels in comparison with the simulations shown in figure 4.8. Outside the field of view of QUBIC there is just the Planck map.

4.3.6 Second-order features of the synthesized beam

An attentive reader may have noticed that, despite the good description of the QUBIC synthesized beam by gaussian peaks distributed on a Dirac comb, there are still some unaccounted features around the peaks, which may be as large as the second order peaks. Looking more carefully we see that they form almost axisymmetric ripples around the peaks, see figure 4.11. The amplitude of these ripples decreases very fast as we get farther from the peak. The first two ripples are almost perfectly axisymmetric.

There are two possible ways to treat the rippled structures around the peaks. First, we can model the ripples in a similar way as we do for the peaks: introduce another Dirac brush, associate it with some amplitudes and widths, etc. If the peaks of this brush are placed close enough, we are able to model the continuous synthesized beam with any required precision; all the features of the synthesized beam would then be resolved. However, the acquisition matrix H in this case becomes too heavy.

Another way is to replace the gaussian convolution of the peaks by some other function that takes into account the first two ripples (assuming that the first two ripples are fully axisymmetric). Then we are not able to resolve these ripples. This means that if, for example, we observe a point source, the image of the source on the reconstructed map will appear with the ripples. But, since the ripples are axisymmetric, we can easily deconvolve the result just by dividing the reconstructed power spectrum by the spectrum of the peak. This is the option of our choice. We call the convolution function used here the rippled convolution.

To use the rippled convolution we compute the power spectrum of the peak with its two ripples. The spectrum of a beam is called the beam window function; window functions are discussed in chapter 7. The rippled beam window function at 150 GHz, compared to the gaussian one, is shown in figure 4.12. The width of the gaussian window function in harmonic space is equal to the inverse of the width of the peak in real space, and the latter is proportional to the inverse of the frequency of the light. We can conclude that the frequency dependence of the rippled beam must be the same: the width of the rippled beam window function is proportional to the frequency.

Figure 4.11: Zoom view of the synthesized beam. The rippled features around the
peak are evident.

Figure 4.12: Beam window function for rippled peak and gaussian peak for QUBIC
synthesized beam at 150 GHz.

To model the frequency dependence of the shape of the rippled peak we make a spline fit of its window function and evaluate the spline on a different array of multipoles. However, it turns out that the shape of the window function is not exactly the same for different frequencies. We compare the modeled window function with the measured one for different frequencies in figure 4.13, left plot. In the multipole range 50 < ℓ < 300 the deviation of the modeled window function from the real one is almost perfectly linear. We fit this deviation with a linear function and find the frequency dependence of the coefficients of the linear function. Then we can introduce a correction to the modeled window function:

C = aℓ + b.     (4.38)

According to the fit,

a(ν) = 1.65 · 10⁻² − 2.24 · 10⁻⁴ ν + 9.71 · 10⁻⁷ ν² − 1.40 · 10⁻⁹ ν³,     (4.39)

and

b(ν) = −3.81 · 10⁻¹ + 4.76 · 10⁻³ ν − 1.84 · 10⁻⁵ ν² + 2.38 · 10⁻⁸ ν³,     (4.40)

where ν is in units of GHz. The deviation of the modeled window function from the real one after this simple correction is shown in the right plot of figure 4.13.
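For reference, the correction (4.38) with the fitted coefficients (4.39)-(4.40) can be evaluated as in the sketch below. How the correction is combined with the modelled window function (sign and normalisation) follows figure 4.13 and is not asserted here, so the sketch only computes C(ℓ).

```python
# Evaluation of the window-function correction C = a*l + b of eq. (4.38),
# with the polynomial coefficients a(nu), b(nu) of eqs. (4.39)-(4.40).
# nu is in GHz; applying C to the modelled window function is not shown.
import numpy as np

def a_coeff(nu):
    return 1.65e-2 - 2.24e-4 * nu + 9.71e-7 * nu**2 - 1.40e-9 * nu**3

def b_coeff(nu):
    return -3.81e-1 + 4.76e-3 * nu - 1.84e-5 * nu**2 + 2.38e-8 * nu**3

ell = np.arange(50, 301)                         # multipole range of the fit
for nu in (131.0, 150.0, 169.0):                 # edges and centre of the 150 GHz band
    C = a_coeff(nu) * ell + b_coeff(nu)
    print(f"nu = {nu:5.1f} GHz:  C(50) = {C[0]:+.4f},  C(300) = {C[-1]:+.4f}")
```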

Thus we model the peaks of the synthesized beam together with two ripples around each peak, and the deviation of the modeled window function from the realistic one is below a few per cent for all the frequencies in the QUBIC frequency range. This approximation is much more precise than the formerly used gaussian approximation. However, there is still room for an even better approximation.

4.3.7 Simulations

We check the map-making process (4.36) with realistic simulations that include: a realistic scanning strategy for 1 day from the Concordia station; a noise level scaled down to match 1 month of observations, with the knee frequency of the 1/f noise set to 1 Hz; a synthesized beam modelled as a Dirac brush, modulated by the primary beam and convolved with the QUBIC peak shape with two ripples. The results are shown in figure 4.14. It is evident that the map-making process is able to handle the multi-peaked synthesized beam on a real-scale TOD. The result is free of the striped structures which may appear on the reconstructed map due to the high noise at low frequencies, as discussed in section 4.2.

Figure 4.13: Correction of the rippled beam window function: left plot – deviation of the modeled window function from the real one before correction, right plot – after correction.

4.4 Conclusions

In this chapter we analysed the simulation of the TOD and the map-making for QUBIC in the monochromatic case. It is very similar to what is done for imager instruments, except that QUBIC images the sky with a complex synthesized beam. If the synthesized beam is modelled as a sum of relatively narrow axisymmetric peaks, sparsely distributed, then the acquisition operator H becomes sparse and the map-making problem becomes computationally tractable. We model each peak of the synthesized beam together with 2 side lobes, which we call ripples. The window function of the modeled peak deviates from the real one by no more than 2%.

To put additional constraints on the poorly observed pixels at the edge of the coverage field we use the so-called fusion acquisition model, which allows us to combine the QUBIC data with the results of another experiment. This technique greatly improves the QUBIC map-making, reducing the noise level significantly.

Figure 4.14: Realistic simulations of 1 month of QUBIC data reconstruction,


monochromatic case. Three columns are for I, Q and U Stokes parameters from left to
right respectively. From top to bottom there are: input convolved maps, reconstructed
maps and residual maps. Units on the color axes are µK.
Chapter 5

Map-making in polychromatic case

This chapter is dedicated to the development of the map-making in the case of non-zero bandwidth. We discuss the way to model the polychromatic synthesized beam and obtain the parameters of the synthesized beam approximation. This part of the work inherits from the conclusions made by F. Incardona and extends them to the approximation with rippled peaks. We develop the map-making for the QUBIC-only and QUBIC-Planck acquisition models and discuss the choice of the preconditioner for the conjugate gradient method. Finally, we verify the map-making algorithm with simulations and check the consistency of the analytic formula for the effect of the bandwidth smearing of the polychromatic synthesized beam.

Turning to the more complicated polychromatic case, we have to deal with the fact that the QUBIC frequency bandwidth is not a δ-function at all. The relative bandwidth of each of the bands is 0.25, meaning that the 150 GHz band ranges from 131 to 169 GHz and the 220 GHz band from 193 to 248 GHz. In the framework of this thesis we assume that the bandpass has a top hat shape. This is not quite correct, hence the results of the following chapters should be revised later. However, due to the QUBIC design, the bandpass is almost a top hat.

5.0.1 Polychromatic synthesised beam

The polychromatic synthesised beam is an integral over the frequency range of a band:

B_poly(n) = ∫_{ν_min}^{ν_max} B_mono(n, ν) J(ν) dν,     (5.1)

where B_mono(n, ν) is the monochromatic synthesised beam, whose frequency dependence was highlighted in section 4.3.2; J(ν) is the frequency bandpass of the filter and ν_min, ν_max are the boundary frequencies of the band.

5.0.1.1 How to model the wide frequency band?

We approximate the integral in equation (5.1) as a sum over a frequency sample ν_i:

B̃_poly(n) = Σ_{i=1}^{N_f} B_mono(n, ν_i) J(ν_i),     (5.2)

where Nf is the number of frequency samples. Then a question arises: what is the
appropriate number of frequencies and how to sample the continuous frequency band?
For a very useful discussion of this issue see the master thesis of Federico Incardona [103].
Here we assume that the bandpass has a top hat profile, that is all the J(νi ) are equal.

We already mentioned before that the width of the peaks of the synthesized beam as
well as the distance between them depend on the wavelength. To recall, the peak width
at half maximum is F W HM = λ/(P ∆x) and the distance between the central peak and
the peak in the n-th order of interference is θ = nλ/∆x, where ∆x is the spacing of the
horn array and P is the number of horns on one side of a horn array in case the horn
array is square packed. This is illustrated on the figure 5.1. For QUBIC ∆x = 1.4 cm
and P is approximately 20.

We need to sample the continuous frequency band with a finite number of frequencies in
such a way that at the end the modeled synthesized beam has smooth shape. If the fre-
quency band is not well sampled, the modeled synthesized beam becomes discontinuous.
On the other hand if the frequency band is oversampled, the computation complexity
grows without purpose.

In order to have a uniform frequency sample it is reasonable to set the following requirement: the distance between the corresponding peaks of the synthesised beams of two close frequencies must be some fraction k of the sum of their widths. For the n-th order of interference this requirement reads:

Δθ_{1,2} = n Δλ_{1,2} / Δx = k ( λ_1/(P Δx) + λ_2/(P Δx) ).     (5.3)

As a consequence

Figure 5.1: Radial cut of the synthesized beam for two different frequencies. Primary
beams are shown with dashed lines. Peaks widths and the distance between peaks are
highlighted.

λ_2 = λ_1 (n + k/P) / (n − k/P).     (5.4)

Thus we can recursively define λ_n as λ_{n−1} multiplied by a constant coefficient; that is, the correct way to sample the frequencies is on a logarithmic scale.
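A minimal sketch of such a logarithmic sampling, following equation (5.4): successive wavelengths (and hence frequencies) differ by the constant factor (n + k/P)/(n − k/P). The default values of k, P and the interference order n follow the discussion in this section and are indicative only; this is an illustration, not the pipeline implementation.

```python
# Logarithmic sampling of a wide band, following eq. (5.4): consecutive
# wavelengths differ by the constant ratio (n + k/P)/(n - k/P).
import numpy as np

def band_frequencies(nu_min, nu_max, k=0.4, P=20, n=2):
    """Log-spaced frequencies covering [nu_min, nu_max]."""
    ratio = (n + k / P) / (n - k / P)            # lambda_{i+1} / lambda_i
    n_freq = int(np.ceil(np.log(nu_max / nu_min) / np.log(ratio))) + 1
    return nu_min * ratio ** np.arange(n_freq)

freqs = band_frequencies(131e9, 169e9)           # 150 GHz band edges
print(len(freqs), "frequencies:", np.round(freqs / 1e9, 1))
```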

Now, the next question is how many frequencies we should use to sample the continuous frequency band; in other words, what is an appropriate value of k? Let us suppose we would like at least 99% of the beam intensity to be accurately modelled. Then, considering the 150 GHz band, we only need the first two orders of interference (the synthesised beam then consists of 25 peaks). The distance between the corresponding peaks of two close frequencies in the second order is twice that in the first order, that is 2Δλ/Δx, while their width is the same. What is the maximum distance between peaks that still gives a nice flat profile of their sum?

For gaussian peaks we can draw the sum of two gaussian peaks and vary the distance between them, see figure 5.2. One can see that in the gaussian approximation the distance between the peaks must be not more than 0.4λ_1/(P Δx) + 0.4λ_2/(P Δx) (that is, k = 0.4). In the approximation that all the wavelengths within a band are equal, equation (5.3) gives 0.8λ/(P Δx) = 2Δλ/Δx, from which Δλ/λ = 0.8/(2P) ∼ 0.02. With the relative bandwidth of QUBIC being 0.25, we can finally conclude that 0.25/0.02 ≈ 13 frequencies per QUBIC band are enough to sample the continuous wide frequency band using the gaussian approximation of the synthesised beam.

Figure 5.2: Sum of two gaussians with FWHM = 1. One has mean 0 (blue line), while the mean of the other one is varied (shown in grey); for the second one the mean is equal to some fraction of the FWHM. The sum of the two gaussian peaks is shown in red, the line styles repeating those of the grey lines.

The problem is that for the peripheral detectors even the third order of interference is still significant: for a peripheral detector the synthesized beam is highly asymmetric and its third order peak is large enough (see figure 4.3). That means we should multiply our estimates of the number of frequencies by a factor 1.5. Finally, for the gaussian approximation we suggest using 19 frequencies. This result is perfectly consistent with the result of F. Incardona, which means we can apply a similar estimation to the rippled peaks.

To make a similar estimation for the rippled peak model, we first have to fit its shape. This fit is used only to adjust the peaks over the bandwidth. We choose the function (sin(θ)/θ)², which is quite common in signal processing. The fit is shown in figure 5.3. One may notice that the fit is not very accurate for the ripples; but in order to find an appropriate value of k we only care about the peak itself and we can neglect the ripples. Then we repeat exactly what we did for the gaussian peaks, see figure 5.4. One can see that the rippled peaks are a bit wider than the gaussians and that having two peaks 1 FWHM apart from each other gives a flat-shaped sum. This finally results in 15 frequencies for an accurate sampling of the QUBIC band.

Figure 5.3: Fit of the synthesized beam peak with the (sin(θ)/θ)² function. The synthesized beam is taken from the pixelized map of the beam; its step-like structure comes from the map pixels. The nside parameter is 1024. Units of the horizontal axis are degrees.

This conclusion is valid for the 150 GHz band. For the 220 GHz band, whose synthesised beam is more shrunk, we have to take into account the fourth order of interference, resulting in 25 and 20 frequencies for the gaussian and rippled approximations respectively. A comparison of the 15-frequency synthesised beam for the 150 GHz channel with its rippled approximation is shown in figure 5.5.

To check the validity of the approximation of the synthesised beam we compute the
synthesised beam for 50 frequencies at 150 GHz band, supposing that it is a very good

Figure 5.4: Sum of two rippled peaks with FWHM = 1. One has mean 0 (blue line), while the mean of the other one is varied (shown in grey); for the second one the mean is equal to some fraction of the FWHM. The sum of the two peaks is shown in red, the line styles repeating those of the grey lines.

Figure 5.5: Left: interferometry synthesised beam for 15 frequencies at 150 GHz band
for one of the central detectors of QUBIC. Right: its approximation with rippled peaks.
The minor features on the right plot are numerical effects of map-to-alm conversion and
back.

approximation of the real continuous synthesised beam. F. Incardona has modelled the synthesised beam with 400 frequencies and shown that, starting from several tens of frequencies, the model becomes redundant with respect to the number of frequencies. We reconstruct the power spectrum C_ℓ^interf of the 50-frequency interferometric synthesised beam and compare it to the power spectrum C_ℓ^approx of the 15-frequency approximated synthesised beam. The results are shown in figure 5.6. A similar comparison for the 220 GHz band is shown in figure 5.7.

Figure 5.6: Comparison of the realistic synthesised beam to the approximate one for
frequency band 150 GHz.

From this comparison one can see that the approximation mimics the realistic synthesized beam: the main features are the same, and the ratio of the spectrum of the approximated beam to that of the realistic one is overall constant over the range of multipoles ℓ from 1 to about 600, although it is biased. This bias is constant on average but "bumpy", and the bumps are especially strong at low multipoles. They arise from the fact that we neglect the side lobes beyond the first two. The radius of the ripples is proportional to the inverse of the frequency; but since this radius is anyway very small, the ripples from different frequencies sum up, as they all appear at approximately the same place. Thus the ripples, which we could neglect in the monochromatic case, become quite significant in the polychromatic case. We model the first two ripples and neglect the rest, but this approximation is not quite valid

Figure 5.7: Comparison of the realistic synthesised beam to the approximate one for
frequency band 220 GHz.

in the polychromatic case. Modeling the synthesised beam in more detail is a complicated task and makes the acquisition model hardly tractable. The relative deviation of the spectrum of the approximate beam with respect to the realistic one in the range of ℓ from 50 to 100 is 0.06 for the 150 GHz band and 0.12 for the 220 GHz band. Naively, this adds a bias to the estimated C_ℓ that we can correct for, thus completely mitigating the undesired effects of the imperfect estimation of the synthesized beam.

5.0.2 Map-making

In the polychromatic case the TOD is modelled as

y = Σ_{i=0}^{N_f} H_{ν_i} J_{ν_i} x̃_{ν_i} + n,     (5.5)

where H_{ν_i} is the monochromatic acquisition operator for frequency ν_i and x̃_{ν_i} is the CMB map convolved by the beam specific to this frequency. N_f is the total number of frequencies. Defining the polychromatic acquisition operator

H_poly = Σ_{i=0}^{N_f} J_{ν_i} H_{ν_i},     (5.6)

and assuming that the convolved signal is independent of frequency, the map-making equation can be approximated as

H_poly^T N^{−1} H_poly x̃_{ν_1} = H_poly^T N^{−1} y,     (5.7)

where we attempt to reconstruct the CMB sky convolved by the lowest frequency beam (that is, the widest one).

One also can define the QUBIC-Planck fusion acquisition in polychromatic case. The
fusion TOD is just a combination of the QUBIC TOD with Planck map at the closest
frequency, convolved by the lowest frequency beam.

The polychromatic acquisition operator is much less sparse than the monochromatic one: in the monochromatic case the acquisition matrix contained only a few non-zero entries per row, corresponding to the number of synthesized beam peaks seen by a detector at a time sample. With polychromaticity the number of peaks has grown according to equation (5.6). Thus the requirements on the convergence of the PCG method have grown and we need to introduce a preconditioner to successfully solve equation (5.7).

5.0.2.1 Preconditioned conjugate gradient method

The conjugate gradient method is a numerical method to solve systems of linear equations of the form

Ax = b,     (5.8)

where the matrix A is symmetric, positive definite and sparse. The method works with non-sparse matrices too, but then it gives no gain in comparison with other methods. The conjugate gradient method minimizes the quadratic function of x [104]:

f(x) = (1/2) x^T A x − b^T x → min.     (5.9)

The function f(x) is scalar and its gradient is


∇f (x) = Ax − b. (5.10)

Thus, to solve equation (5.8), we can instead find the minimum of its quadratic function.

Very often we need some preconditioning in order to solve equation (5.8). The preconditioned conjugate gradient method is equivalent to the usual conjugate gradient method applied to the equation

E^T A E x̂ = E^T b,     (5.11)

where we apply the linear change of coordinates x = E x̂ and E E^T = M is a symmetric positive-definite matrix called the preconditioner. The preconditioner helps to define the minimum of the quadratic function more clearly and can significantly boost the convergence of the method. The preconditioned conjugate gradient algorithm is the following (a minimal code sketch is given after the list):

• Pick an initial guess x_0. Calculate the residual r_0 = b − A x_0;

• z_0 = M r_0 and the direction of descent is p_0 = z_0;

• Then for each k-th step we

  – calculate the coefficient that defines the next optimal position: α_k = (r_k^T z_k)/(p_k^T A p_k);

  – then the optimal position is x_{k+1} = x_k + α_k p_k;

  – the new residual is r_{k+1} = r_k − α_k A p_k;

  – z_{k+1} = M r_{k+1};

  – we calculate the coefficient that defines the next optimal direction of descent: β_k = (z_{k+1}^T r_{k+1})/(z_k^T r_k);

  – and the optimal direction is p_{k+1} = z_{k+1} + β_k p_k;

  – repeat the iterations while the residual is not sufficiently small.
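The iteration above translates almost line by line into code. The sketch below is a generic, self-contained version with a diagonal preconditioner (a crude stand-in for A^{−1}); it is not the solver used in the QUBIC pipeline.

```python
# Minimal preconditioned conjugate gradient, following the steps listed
# above; M_diag plays the role of an approximate inverse of A applied
# element-wise (a diagonal preconditioner).
import numpy as np

def pcg(A, b, M_diag, x0=None, tol=1e-8, maxiter=500):
    """Solve A x = b for a symmetric positive-definite A."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_diag * r                      # z_0 = M r_0
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)           # step along the descent direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_diag * r
        rz_new = r @ z
        beta = rz_new / rz              # next conjugate direction
        p = z + beta * p
        rz = rz_new
    return x

# Toy check with M = 1/diag(A), a crude approximation of A^-1
rng = np.random.default_rng(4)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)           # symmetric positive definite
b = rng.standard_normal(50)
x = pcg(A, b, M_diag=1.0 / np.diag(A))
print("residual norm:", np.linalg.norm(A @ x - b))
```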

What is the efficient preconditioner M that would point the algorithm directly to the minimum of the quadratic function? We are at the minimum when the vector p is a zero vector. Considering the 0-th step,

p = z = M r = M b − M A x = 0,     (5.12)

hence

M A x = M b,     (5.13)

and so a good choice for the preconditioner is A^{−1}, because then the last equation turns into the solution of (5.8). However, in most applications the direct use of A^{−1} is not possible, so usually some approximation of it is used.

Let us see what we should use as a preconditioner for the QUBIC map-making problem. For this, let us first try to understand the meaning of the left-hand side of equation (5.7). We can neglect the matrix N^{−1} as it is just diagonal and approximately proportional to the identity matrix, so we can concentrate on the product H^T H. Let us imagine that H is

H = ( 1 0 0
      0 0 1
      0 1 0
      0 0 1 ).     (5.14)

This is an oversimplified and naive example that should help us understand the point. Here the "sky" consists of only 3 pixels, observed with, say, two detectors at two time samples. At the first sample the first detector sees pixel 1 and the second detector sees pixel 3. At the second sample it is, respectively, pixels 2 and 3. Now it is easy to calculate that

        ( 1 0 0 0 )   ( 1 0 0 )   ( 1 0 0 )
H^T H = ( 0 0 1 0 ) · ( 0 0 1 ) = ( 0 1 0 ).     (5.15)
        ( 0 1 0 1 )   ( 0 1 0 )   ( 0 0 2 )
                      ( 0 0 1 )

Clearly, the diagonal of the result is the coverage vector (we recall that the coverage COV is defined in (4.35) as a hit map): we observed the first two pixels once and the third one twice. So, according to the conclusions made before, a good choice of preconditioner for the QUBIC map-making is the diagonal matrix whose diagonal is the inverse of the coverage vector. Actually, this conclusion is valid for imagers too. In the case of QUBIC, the use of the coverage as a preconditioner allows the map-making process to converge 5-6 times faster than without a preconditioner.

The conclusion made above is valid for the QUBIC-only acquisition, where N^{−1} ≈ (1/σ²_noise) I. Here σ_noise is the noise standard deviation for QUBIC in the map domain. In fact, if we want M = (H^T N^{−1} H)^{−1}, then we should choose the preconditioner M = σ²_noise COV_diag^{−1}, where COV_diag is the diagonal matrix whose diagonal is the coverage vector COV. The coefficient σ²_noise does not change the position of the minimum of the quadratic function, so we can neglect it. However, for the fusion acquisition N^{−1} cannot be approximated as proportional to the identity matrix. For the fusion acquisition we have:

H^T N^{−1} H = ( H_Q^T  I ) ( N_Q^{−1}      0
                                 0       N_Pl^{−1} ) ( H_Q
                                                        I  )
             = H_Q^T N_Q^{−1} H_Q + N_Pl^{−1}
             ≃ COV_diag/σ_Q² + I/σ_Pl²
             = ( σ_Pl² COV_diag + σ_Q² I ) / ( σ_Q² σ_Pl² ),     (5.16)

where the index Q stands for QUBIC, Pl for Planck, and we take the Planck acquisition equal to the identity matrix. We can forget about the denominator and take the following preconditioner:

M = ( σ_Pl² COV_diag + σ_Q² I )^{−1}.     (5.17)

The meaning of this result is quite clear: we should weight the QUBIC and Planck acquisitions according to their noise levels. However, we do not know the noise level of the QUBIC reconstructed map: we can measure the noise level of the TOD in units of radiation power deposited on the bolometers, but the translation of this TOD noise into map noise is not obvious. It depends on the scanning strategy, on the effective observational period and on the choice of the map-making method. For the moment we suggest using the approximate preconditioner M = (COV_diag + kI)^{−1}. The noise on the QUBIC map is expected to be much lower than on the Planck map, so k must be small. We ran several simulations trying to find the value of k that would allow the PCG to converge faster and finally settled on k = 0.001 for both frequency bands of QUBIC. But we admit that this choice is probably not optimal: it should be revised after choosing the scanning strategy, and k should depend on the observational period.

5.0.3 Simulations

We check the polychromatic map-making with fast simulations of CMB observations with no foregrounds, 15 frequencies in the 150 GHz band and 20 frequencies at 220 GHz; the peaks of the synthesised beam include the ripples. The detector noise is scaled down to match that of 2 years of observations. Figure 5.8 (figure 5.9) shows the simulations in the 150 GHz (220 GHz) band for the QUBIC-only acquisition. Figure 5.10 (figure 5.11) shows the simulations in the 150 GHz (220 GHz) band for the QUBIC-Planck fusion acquisition.

Figure 5.8: Simulations of QUBIC-only map-making at 150 GHz band. Three columns are for I, Q and U Stokes parameters from left to right respectively. From top to bottom there are: input convolved maps, reconstructed maps and residual maps.

The variance of the residual Q and U maps under the threshold COV > 0.2 max(COV) is 0.017 and 0.007 µK²/pixel for the 150 GHz band for the QUBIC-only and QUBIC-Planck acquisition models respectively, and 0.021 and 0.040 µK²/pixel for the 220 GHz band for the same acquisition models. Clearly, the fusion acquisition works fine for the 150 GHz band, but gives a wrong result for the 220 GHz band. This situation is not understood so far and requires further investigation.

The smearing of the synthesized beam due to the bandwidth of the bolometric interferometer leads to an increase of the errorbars on the power spectrum. Following [105], the effect of the bandwidth smearing for a bolometric interferometer is described by the quantity κ1(ℓ), which, in the case of a Gaussian primary beam, reads

Figure 5.9: Simulations of QUBIC-only map-making at 220 GHz band. Three columns are for I, Q and U Stokes parameters from left to right respectively. From top to bottom there are: input convolved maps, reconstructed maps and residual maps.

\kappa_1(\ell) = \sqrt{1 + \frac{(\Delta\nu/\nu)^2}{\sigma_\ell^2}\,\ell^2},   (5.18)

where ∆ν/ν is the relative bandwidth, σ` = π/σprimary is the resolution in the multipole
space. Estimating the smearing at the position of the primordial B-modes peak we get

κ1 (70) = 1.13. (5.19)
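For reference, this numerical estimate can be reproduced with the small function below; the value used for the primary beam width is an assumption (roughly the QUBIC primary beam sigma), not a number taken from the instrument model:

import numpy as np

def kappa1(ell, rel_bandwidth, sigma_primary_deg):
    # Bandwidth smearing factor of equation (5.18).
    sigma_ell = np.pi / np.radians(sigma_primary_deg)   # resolution in multipole space
    return np.sqrt(1.0 + (rel_bandwidth * ell / sigma_ell) ** 2)

print(kappa1(70, 0.25, 5.4))   # ~1.13 for a ~5.4 deg primary beam sigma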

The errorbars of the BB power spectrum for a bolometric interferometer are proportional to the square root of κ1 in the case of noiseless observations. So the polychromatic case should give errorbars √1.13 ≈ 1.07 times larger than the monochromatic case considered in the previous chapter. This statement can be verified with Monte-Carlo simulations: we run similar noiseless simulations for the poly- and monochromatic cases and reconstruct

Figure 5.10: Simulations of QUBIC-Planck fusion map-making at 150 GHz band. Three columns are for I, Q and U Stokes parameters from left to right respectively. From top to bottom there are: input convolved maps, reconstructed maps and residual maps.

the power-spectra of the obtained maps. The measured ratio of the errorbars of the power-spectrum in the polychromatic case to the errorbars of the monochromatic case is equal to 1.07 ± 0.03 and is perfectly consistent with the value of √κ1. Thus we show that the analytic estimation of the bandwidth effect is actually correct.

A broad bandwidth is necessary for CMB instruments as it increases the amount of incoming light and thus increases the sensitivity. For a bolometric interferometer we have an additional complication due to the beam smearing. These two competing factors enter the power-spectrum errorbars as a factor O(κ1^{3/2}/√Δν). The measured value of √κ1 = 1.07 ± 0.03 confirms the conclusions derived earlier and the reasoning behind the chosen bandwidth of QUBIC.

Figure 5.11: Simulations of QUBIC-Planck fusion map-making at 220 GHz band. Three columns are for I, Q and U Stokes parameters from left to right respectively. From top to bottom there are: input convolved maps, reconstructed maps and residual maps.

5.1 Conclusions

In this chapter we analysed the map-making process for QUBIC in the polychromatic case. While in the monochromatic case we considered QUBIC as an "almost"-imager, the only difference from an imager being that it observes the sky with a synthesised beam, the polychromatic case is much more complicated. For a polychromatic beam we have to take into account the fact that the position of the peaks of the synthesised beam depends on the frequency. Thus the polychromatic synthesised beam looks more like a fancy snowflake than like a brush: the non-zero order peaks become elongated.

The acquisition operator is approximated as a sum of monochromatic operators. The frequencies that sample the continuous frequency band should be distributed on a logarithmic scale. The number of frequencies is 15 for the 150 GHz band and 20 for the 220 GHz band. The inaccuracies in the approximation of the synthesised beam lead to a systematic increase of the beam window function, on average by a factor of 1.06 for the 150 GHz band and 1.12 for the 220 GHz band. This deviation seems huge. However, it can probably be mitigated in the future by taking into account more and more details of the synthesised beam. Another issue that must be tested in future simulations is: what reduction of the sensitivity on r can one expect for a given level of inaccuracy in the definition of the synthesised beam? Answering this question would require simulating the TOD with a realistic synthesised beam; then, by reconstructing the map with an approximated beam, we would be able to resolve this issue. But the simulations involved are computationally very heavy.

Another question that arises while studying the polychromatic acquisition model is why the fusion acquisition does not work properly in the polychromatic case for the 220 GHz band. Most likely, the problem lies in the preconditioner, but one can also try to mask or weight the Planck acquisition. In principle, the fusion acquisition should only improve the map resolution, so most likely the problem comes from numerical effects in the convergence of the PCG.
Chapter 6

QUBIC as a spectro-polarimeter

In this chapter we explore the possibility to reconstruct multiple sub-bands within each of the broad bands of QUBIC, which allows us to have an unprecedented frequency resolution. We introduce the multi-band acquisition model and its fusion version. Then we discuss the appropriate number of sub-bands. At the end we introduce the internal linear combination method implemented for the QUBIC pipeline.

6.1 Multifrequency map-making

The fact that the synthesised beam changes with frequency gives an amazing opportunity,
unique to bolometric interferometer, to reconstruct several maps in narrow frequency
ranges within each one of two wide frequency bands, making a bolometric interferometer
act like a spectro-imager.

As we said in the chapter 4, the TOD for QUBIC is:

y = H x̃ + n, (6.1)

where x̃ is the monochromatic sky, n is the noise with covariance matrix N and H is the monochromatic acquisition operator, specific to this frequency. This model can be extended to apply to a polychromatic input signal. In the polychromatic case the continuous frequency spectrum of the CMB can be approximated as a sum of monochromatic bands. Then the TOD is constructed as:

y = \sum_{\nu_i = \nu_1}^{\nu_{N_f}} J_{\nu_i} H_{\nu_i} \tilde{x}_{\nu_i} + n,   (6.2)


where x̃_νi is the sky map at frequency νi and H_νi is the acquisition model for that frequency. The coefficients J_νi weight the frequencies according to the bandwidth. This equation can be approximated as

y = Hpoly x̃ + n, (6.3)

where H_poly is a polychromatic acquisition operator which is a weighted sum of the monochromatic operators for the frequencies νi. This equation is invertible. However, using this approximation we neglect the frequency modulation of the input signal and reconstruct only an average map over each of the two wide frequency bands.

Instead one can use another approximation:

 
y = \begin{bmatrix} H_{poly,1} & H_{poly,2} & \dots & H_{poly,N_b} \end{bmatrix} \begin{pmatrix} \tilde{x}_1 \\ \tilde{x}_2 \\ \vdots \\ \tilde{x}_{N_b} \end{pmatrix} + n,   (6.4)

where the indices 1, 2, etc. up to N_b denote the sub-band number. We say it is an approximation because each x̃_i is an average map over some frequency sub-range [ν_{i,min}; ν_{i,max}], where ν_{i,max} = ν_{i+1,min}, ν_{1,min} is the minimal frequency of the wide band and ν_{Nb,max} is the maximal frequency. This is the same kind of approximation we make when we invert equation (6.3), but here we reconstruct the average maps over narrow frequency sub-bands. Here we exploit the fact that the shape of the synthesised beam depends on the frequency. Exactly this feature of the synthesised beam allows us to reconstruct multiple sub-bands within each of the two wide bands of QUBIC.
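The structure of equation (6.4) can be illustrated with a toy least-squares inversion in which the monochromatic operators are simply stacked side by side; all sizes and operators here are purely illustrative:

import numpy as np

rng = np.random.default_rng(0)
nsamples, npix, nbands = 200, 30, 2

# toy "monochromatic" acquisition operators, one per sub-band
H = [rng.random((nsamples, npix)) for _ in range(nbands)]
H_block = np.hstack(H)                        # [H_poly,1  H_poly,2  ...] of equation (6.4)

x_true = rng.standard_normal(npix * nbands)   # stacked sub-band skies
y = H_block @ x_true + 0.01 * rng.standard_normal(nsamples)

# least-squares solution of y = H_block x + n
x_hat, *_ = np.linalg.lstsq(H_block, y, rcond=None)

The inversion only works because the sub-band operators differ; in the real instrument this difference is provided by the frequency dependence of the synthesised beam.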

Figure 6.1 demonstrates how the inversion of equation (6.2) works in practice: the simulated sky contains two point sources, each monochromatic, with frequencies 140 GHz and 159 GHz (which correspond to two sub-bands within the QUBIC 150 GHz band). Each of the reconstructed maps contains only one of the two sources. To understand how the method works, imagine the synthesized beams from these two sources: the one on the left will be more stretched, while the right one will be more shrunk. One can naively interpret the map-making process as follows: we have templates of the synthesized beam and we try these templates on the TOD. The shrunk template for the higher sub-band does not fit the left source and vice versa.

Figure 6.1: Simulation of QUBIC observation of two monochromatic sources with frequencies 140 and 159 GHz and its reconstruction as two separate maps.

6.2 Component separation

Having maps at numerous frequencies allows the component separation to be done very efficiently. Each component of the microwave emission of the sky c_ν depends both on the direction of observation and on the frequency. We assume that each component can be separated into spatial and spectral parts:

cν = A(ν)c, (6.5)

that is the template c scales with frequency as A(ν) on all the observed sky.

As QUBIC can have very good frequency resolution, we can easily apply the simple internal linear combination method (ILC) [106]. This method relies on a very small number of assumptions about the components of the signal. The observed maps x_ν are a linear combination of CMB, foregrounds and noise:

x ν = s + f ν + nν , (6.6)

where s is the CMB signal, fν is foreground map at frequency ν and nν is the noise
contribution. We suppose that the CMB is the same for all the frequencies. Here,
to make the formulas less cumbersome, we omit the tildes which we used before to
distinguish the maps convolved with the instrument beam. We are looking for the CMB
estimator ŝ as a linear combination of observed maps on different frequencies:

\hat{s} = \sum_\nu w_\nu x_\nu,   (6.7)

where the weights w_ν of the linear combination are single numbers, one for each map x_ν. The problem is to find the weights that maximise a certain criterion about ŝ while keeping Σ_ν w_ν = 1. The simplest choice for the criterion is that the weights have to minimise the variance of ŝ. Let's denote the vector of maps [x1, x2, ...] as y. Then we can define the covariance matrix C = y y^T. It is shown in [106] that the minimum of the variance is obtained with

w_i = \frac{\sum_j C^{-1}_{ij}}{\sum_{ij} C^{-1}_{ij}},   (6.8)

which is the solution of the Lagrange multiplier method. The indexes i, j stand for the
frequency channels. The variance of the ILC estimation is

σ 2 = wT Cw, (6.9)

where w is a vector of weights wν .
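A compact numpy implementation of these weights could look as follows; the input maps array is assumed to have shape (n_freq, n_pix) and to be already smoothed to a common resolution (here the empirical covariance is used for C):

import numpy as np

def ilc(maps):
    # Internal linear combination: minimum-variance CMB estimate.
    # maps: array (n_freq, n_pix) of frequency maps at a common resolution.
    C = np.cov(maps)                    # empirical frequency-frequency covariance
    Cinv = np.linalg.inv(C)
    w = Cinv.sum(axis=1) / Cinv.sum()   # weights of equation (6.8), sum to 1
    return w @ maps, w                  # CMB estimator of equation (6.7) and weights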

We apply the ILC to the QUBIC maps obtained at different frequencies; these maps have different resolutions because the width of the synthesized beam peaks depends on the frequency of light as c/(ν Δx P). The input maps for the ILC must have equal resolution, otherwise the method will attempt to remove the CMB signal difference due to the different resolutions. We therefore apply an additional convolution to the reconstructed maps, with a convolution operator equal to C_{ν1}/C_ν, where C_ν is the synthesized beam peak convolution operator at frequency ν and ν1 is the minimal frequency.

6.2.1 Dust emission

Polarized dust emission is the main source of foregrounds that prevents the observation
of primordial B-modes. And it is the main reason to measure the sky at more fre-
quency channels. As measured by Planck collaboration [11], the dust polarization power

spectrum is well described as a power law of multipole:

C_\ell^{dust} \propto \ell^{\alpha},   (6.10)

where α = −2.42 ± 0.02 for both EE and BB dust spectra. The frequency dependence
of the emission intensity is described by a modified black body spectrum with spectral
index 1.59 and temperature 19.6 K. The effect of dust contamination is illustrated on
the figure 6.2.

Figure 6.2: Microwave radiation in Mollweide projection for the Q component of polarization, simulated according to the theoretical power spectra. Upper left – clear CMB emission. Upper right – clear dust emission (note the different color range). Lower left (right) – total Q signal in the bandwidth of the QUBIC 150 (220) GHz band.

6.3 How many sub-bands?

Let's estimate a reasonable number of sub-bands for each of the QUBIC bands. As we discussed in chapter 5, the distance between the peaks of the synthesised beam is λ/Δx, where λ is the wavelength and Δx is the distance between the horns, while the full width at half maximum of a peak is λ/(PΔx), where P is the number of horns on one side of the horn array in the case of a square-packed array. In the case of QUBIC the horn array is circle-packed, but the parameter P for 400 horns can be estimated at 20.

Thus when we sum up different frequencies, the central peak of the synthesised beam becomes the sum of the central peaks of all the frequencies, while the surrounding peaks do not fall at the same places, forming instead ray-shaped structures around the central peak.

Two synthesised beams at two close frequencies could be resolved if their peaks are
separated enough. That is the difference in peak positions in the first order of interference
∆θ = (λ2 − λ1 )/∆x = ∆λ/∆x is greater than the width of the peaks:

\frac{\Delta\lambda}{\Delta x} \geq \frac{\lambda}{P\,\Delta x} \;\Rightarrow\; \frac{\lambda}{\Delta\lambda} = P \;\Rightarrow\; \frac{\Delta\nu}{\nu} = \frac{1}{P},   (6.11)

where λ and ν are the geometrical means of λ1,2 and ν1,2 respectively.

Thus the spectral resolution Δν/ν of a bolometric interferometer is equal to 1/P. In the case of QUBIC the bandwidth of each of the wide bands is Δ_bw ν/ν = 0.25. Thus the number of sub-bands is (0.25 ν/Δx) / (ν/(P Δx)) = 5. With two wide bands we have 10 sub-bands centred at the frequencies [134.6, 141.6, 148.9, 156.5, 164.6] GHz for the 150 GHz band and [197.5, 207.6, 218.3, 229.6, 241.4] GHz for the 220 GHz band. This gives a unique opportunity for spectro-imaging of the CMB with a good spectral resolution, which allows the foreground contamination to be controlled much more efficiently.
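The quoted central frequencies simply correspond to splitting each 25% band into equal logarithmic steps; a short check (the band edges are assumed here to be ν(1 ± 0.125)):

import numpy as np

def subband_centers(nu_center, rel_bandwidth=0.25, n_sub=5):
    # Centers (geometric means of the edges) of n_sub logarithmically spaced sub-bands.
    nu_min = nu_center * (1 - rel_bandwidth / 2)
    nu_max = nu_center * (1 + rel_bandwidth / 2)
    edges = np.logspace(np.log10(nu_min), np.log10(nu_max), n_sub + 1)
    return np.sqrt(edges[:-1] * edges[1:])

print(subband_centers(150))   # ~[134.6, 141.6, 148.9, 156.5, 164.6] GHz
print(subband_centers(220))   # ~[197.5, 207.6, 218.3, 229.6, 241.4] GHz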

Five sub-bands within each wide band is the ultimate limit for QUBIC; we are unable to make any more sub-bands because of the spectral resolution 1/P. There are several other issues. First, the spectral resolution is limited by the spatial resolution, that is by the angular size of the detectors, which is equal to d/(√π F), where d ≈ 3 mm is the size of the detectors and F is the focal length. The observed sky is then convolved with the top-hat function corresponding to the integration over the detector area. Thus the angular separation of the peaks of two sub-bands should be

\frac{c\,\Delta\nu}{\nu^2\,\Delta x} \geq \sqrt{\left(\frac{c}{P\,\nu\,\Delta x}\right)^2 + \left(\frac{d}{\sqrt{\pi}\,F}\right)^2},   (6.12)

leading to Δν/ν|_{ν=150 GHz} ≥ 0.055 and Δν/ν|_{ν=220 GHz} ≥ 0.060. Thus the estimated number of sub-bands reduces to 0.25/0.055 ≈ 4.5 and 0.25/0.060 ≈ 4.2 for the 150 and 220 GHz bands respectively.

Second, the amount of light in each sub-band decreases when increasing the number of sub-bands, thus increasing the noise on the maps. In the case of a uniform bandwidth the noise scales as 1/√Δν, or in other words as the square root of the number of sub-bands N_b, and hence the errorbars on the C_ℓ^BB grow as N_b in the limit of no B-modes in the CMB.

Assuming that the noise is not correlated between the different frequency sub-bands and that all the sub-bands have equal bandwidth, the noise on the ILC map is proportional to the inverse of the square root of the number of maps. Thus in the end the noise on the map should be the same, independently of the number of sub-bands (although this is a very naive estimation).

Finally the third effect is related to the component separation. Ideally, ILC method
works better with a higher number of sub-bands, but this also depends on the noise on
the maps. All these effects we study with Monte-Carlo simulations.

6.3.1 Noise increase

To study the dependence of the multi-band map noise on the number of sub-bands we run fast Monte-Carlo simulations. The input map is a linear combination of CMB and dust emission, where the CMB maps are simulated from an input spectrum that does not contain B-modes and the dust is modeled according to [11], as described in section 6.2.1. The TOD is modeled according to (6.2) and reconstructed with multiple sub-bands according to (6.4). Figure 6.3 presents the dependence of the noise in the reconstructed maps on the number of sub-bands.

As one can see in figure 6.3, the noise on the 220 GHz band maps follows the predicted dependence ∝ √N_b pretty well up to N_b = 4. This law for the noise comes from the shrinking of the bandwidth, and hence the reduced incoming light per sub-band, and it does not take into account the spectral resolution of QUBIC. Thus it is not surprising that the points for N_b > 4 go higher: it just means that we attempt to resolve sub-bands beyond the capabilities of the instrument. For the 150 GHz band the picture is completely different. One can fit the 150 GHz points and find that they follow a power law with an exponent of about 1.15. This means that for the lower frequencies the multi-band reconstruction does not work well. We suspect that this is because the detectors on the periphery see the beam from the horns under an angle of ∼ 9.5°, while the distance between the zero and first order peaks of the synthesized beam at 131 GHz (the lowest frequency of the 150 GHz band) is about 9.4° (see figure 6.4 for an illustration). That is, the synthesized beam for the low frequency band barely fits onto the focal plane. (Note that this does not invalidate our conclusions about the necessary number of frequencies made in the previous chapter: the third order peaks still play a role.) The multi-band acquisition model exploits the fact that the synthesized beam is different at different frequencies. When we use a focal plane that is too small for the broad low-frequency synthesized beam, we effectively imitate an imager, and the spectral resolution of an imager is obviously null. That is why we have such poor spectral resolution for the 150 GHz band. The synthesized beam for the 220 GHz

Figure 6.3: Amount of noise on the reconstructed map as a function of the number of sub-bands N_b, normalized to the noise in the case of only 1 band. Errorbars present the standard deviation of the noise level between different sub-bands. The smooth line represents the theoretical dependence √N_b.

band is narrower, hence the resolution is better. Whether this reasoning is true or not should be checked with Monte-Carlo simulations: one should either simulate the 150 GHz band of QUBIC with a bigger focal plane or the 220 GHz band with a smaller one, and build a figure similar to 6.3.

6.3.2 Component separation

To define the optimal number of sub-bands for the ILC algorithm we use the pseudo-simulations described in 4.3.4.1. From these simulations we reconstruct the CMB maps and look at the residuals of the ILC output map. Note that we run the component separation over all the sub-bands of both wide bands. The results are shown in figure 6.5. The result is not conclusive: the errorbars grow, but the average value is almost constant. However, we would advise taking as many sub-bands as possible. The first reason for this is that the ILC method, as it is implemented for QUBIC, is a very basic one. It can probably be improved (or one can use a more efficient method of component separation).

Figure 6.4: At 150 GHz the synthesized beam barely fits onto the focal plane: it reaches the very edge of the focal plane at an angle of 9.5°. This explains the poor frequency resolution for this band.

Finally, we propose to split the 150 GHz band into 2 sub-bands and the 220 GHz band into 3. These conclusions are based mainly on the analysis of figure 6.3. Note that the second point is the only one of the 150 GHz points which corresponds to the theoretical √N_b dependence. For 220 GHz we are choosing between 3 and 4 sub-bands; since the 4th point goes slightly above the solid line, we decide to be "on the safe side" and choose 3 sub-bands. Let's write down the central frequencies of the sub-bands: [140.0, 158.8, 200.9, 218.5, 237.6] GHz, where the first 2 numbers are for the 150 GHz band and the rest are for 220 GHz.

6.4 Monte-Carlo simulations

To verify the ability of the map-making algorithm to recover maps for separate frequencies we run fast Monte-Carlo simulations of QUBIC observations. The sky model includes both CMB and dust. We don't use the 1/f noise here. The results are shown in figure 6.6. The central part of the field is quite clean on the Q and U residual maps. There are strong fluctuations with large angular size on the temperature maps. Note that in the case of polychromatic map-making (figure 5.9) one can see a similar effect, but not as strong. It is quite clear that in the case of multi-band data analysis the noise

Figure 6.5: Standard deviation σres of the values on the residual maps of ILC as a
function of number of sub-bands Nb per band (that is the total number of sub-bands
is twice of it). We calculate σres for each of 20 realizations of CMB and noise and then
plot the average between all the σres with dots. Errorbars show the standard deviation
of σres values.

induced by poorly-observed pixels at the edge of the coverage patch is more important, so we introduce the QUBIC-multiband-Planck fusion acquisition model in the next section.

After reconstructing all these sub-bands we perform the component separation with the ILC. The results are shown in figure 6.7. The result is more than satisfactory: the residual map is very flat and we can say that the dust contamination is removed well. However, the residuals remaining on the ILC output map for the Q and U components have a variance of about 6% of the pure CMB variance. This means we can still have dust spoiling the B-modes. However, one can hope that the component separation will be improved later; then the dust contamination will be reduced even more.

6.5 QUBIC multi band plus Planck acquisition model

Just like the fusion model for monochromatic QUBIC-Planck, introduced in the chapter
4, we can write down an analogous acquisition model in case of multiband analysis. It


Figure 6.6: Reconstruction of multiple sub-bands within each of QUBIC wide bands.
Sub-band central frequencies are: [140.0, 158.8, 200.9, 218.5, 237.6] GHz, they are plot-
ted respectively on the sub-plots A, B, C, D and E. Input convolved maps, output maps
and their difference are plotted for each frequency for I, Q and U Stokes parameters.

Figure 6.7: Reconstruction of CMB emission from 5 frequency bands of QUBIC, using the ILC component separation method. Input convolved maps, output maps and their difference are plotted for I, Q and U Stokes parameters.

reads:

" # " # " #


yQU BIC X HQU BIC, ν nQU BIC
= x̃ν + (6.13)
yP lanck ν HP lanck nP lanck

where H_{QUBIC, ν} is the monochromatic acquisition operator at the frequency band ν, H_{Planck} is an identity operator and x̃_ν is the true sky map at frequency ν, convolved with the QUBIC beam at that frequency. We can define the monochromatic fusion acquisition operator:

" #
HQU BIC, ν
Hν = . (6.14)
HP lanck

Then the multiband map-making is the solution of equation (6.13) rewritten as:

    
\begin{pmatrix}
H_{\nu_1}^T N^{-1} H_{\nu_1} & \cdots & H_{\nu_1}^T N^{-1} H_{\nu_n} \\
H_{\nu_2}^T N^{-1} H_{\nu_1} & \cdots & H_{\nu_2}^T N^{-1} H_{\nu_n} \\
\vdots & \ddots & \vdots \\
H_{\nu_n}^T N^{-1} H_{\nu_1} & \cdots & H_{\nu_n}^T N^{-1} H_{\nu_n}
\end{pmatrix}
\begin{pmatrix} \tilde{x}_{\nu_1} \\ \tilde{x}_{\nu_2} \\ \vdots \\ \tilde{x}_{\nu_n} \end{pmatrix}
=
\begin{pmatrix} H_{\nu_1}^T N^{-1} \\ H_{\nu_2}^T N^{-1} \\ \vdots \\ H_{\nu_n}^T N^{-1} \end{pmatrix}
\begin{bmatrix} y_{QUBIC} \\ y_{Planck} \end{bmatrix},   (6.15)

where n is number of frequency sub-bands.

We implement the code for the multiband-QUBIC-Planck acquisition model. The result
of the simulations, using this fusion map-making, is shown on the figure 6.8.

Then we can run the component separation with the ILC. Its results are shown in figure 6.9. The ratio of the variances of the residual map and the true CMB map is 14% (we remind that for the QUBIC-only map-making it was only 6%). Thus, for the moment, we cannot recommend using the fusion acquisition with the multi-band approach. However, future progress in the development of the map-making and component separation can probably fix this problem.

6.6 Possible CMB space-borne instrument

Imagine an instrument similar to QUBIC, but space-borne. The amazing opportunity to resolve the frequency spectrum and map the fluctuations at the same time strongly favors the bolometric interferometry technique among the other possible instruments for CMB observations. In space we don't have the atmospheric emission lines, so we are not limited to narrow atmospheric windows (we called our frequency bands "wide", but in fact they cover only a small region of the broad black-body spectrum of the CMB). It means that we can use only one focal plane. This is a huge advantage: any experiment needs a cryogenic system for the detectors, and making multiple focal planes is particularly difficult for space-borne experiments. A bolometric interferometer would have the same or better spectral resolution, while maintaining the simplest single focal plane configuration.

Let's consider an instrument configuration with a bandwidth from 60 GHz to 600 GHz to cover a large fraction of the CMB spectrum. The horns of QUBIC with Δx = 1.4 cm, designed for the 150 GHz frequency, should be rescaled for the 60 GHz frequency, thus giving Δx ≈ 4 cm. If we assume the diameter of the horn array to be 1.2 m, we can have about 700 horns. The optics of this instrument could be either reflective like in QUBIC or refractive. Assuming a focal length of 2.5 m, we obtain the dependence of Δν/ν on the frequency shown in figure 6.10. One can see that the resolution strongly depends on


Figure 6.8: Reconstruction of multiple sub-bands within each of QUBIC wide bands, using the fusion map-making. Sub-band central frequencies are: [140.0, 158.8, 200.9, 218.5, 237.6] GHz, they are plotted respectively on the sub-plots A, B, C, D and E. Input convolved maps, output maps and their difference are plotted for each frequency for I, Q and U Stokes parameters.

Figure 6.9: Reconstruction of CMB emission from 5 frequency bands of QUBIC, using
ILC component separation method. We use the fusion maps for the input to the ILC,
as shown on the figure 6.8. Input convolved maps, output maps and their difference
are plotted for I, Q and U Stokes parameters.

the frequency, and at high frequencies it is limited by the detector angular resolution. But the spectral resolution is still very high. Such an instrument has the strong advantage of taking data at multiple frequencies, allowing the recovery of the pure CMB emission, free from dust and other foreground contamination. At the same time it does not require multiple focal planes, which reduces its cost significantly.

6.7 Conclusions

In this chapter we introduced the multi-band map-making process, which allows us to reconstruct multiple sub-bands within each of the bands of QUBIC. This is possible thanks to the fact that the shape of the synthesized beam depends on the frequency of light. The complex synthesized beam, which was introduced only because of our wish to use the self-calibration, gives another amazing advantage to QUBIC. Observation of the sky at multiple

Figure 6.10: Frequency resolution Δν/ν of a space-borne QUBIC-like instrument as a function of frequency. At high frequencies the resolution is limited by the finite detector size.

frequencies allows the component separation to be performed very efficiently. For this we use the ILC method, implemented for QUBIC.

We conclude that the number of sub-bands should be 2 (3) for the 150 (220) GHz band. This conclusion is derived from the dependence of the noise in the reconstructed maps on the number of sub-bands (and hence on the bandwidth per sub-band). It is proven with simulations that the multi-band map-making for QUBIC works: it allows the components to be separated very well and a clean CMB sky to be obtained. We remind that in the framework of this thesis we decided to assume a top-hat bandpass. With a realistic bandpass the optimal number of sub-bands can change. However, we'd like to stress that the most important result of this chapter is the demonstration that with the multi-band acquisition we do not lose anything in terms of signal-to-noise ratio (see again figure 6.3). As long as this is satisfied, the exact number of sub-bands (and their bandwidths) is an issue for further discussion.

We also introduce the multiband QUBIC-Planck fusion acquisition, which might help to reconstruct the peripheral pixels better. However, at the time of writing this thesis the development of the multiband QUBIC-Planck map-making is not completed and its results are not satisfactory.
Chapter 7

Spectra reconstruction

This chapter is dedicated to the discussion about the power spectra reconstruction. We
introduce Xpol, Xpure and Spice methods and compare their performance.

7.1 Spectra reconstruction problems

The ultimate goal of cosmological studies is to reconstruct the cosmological parameters. These parameters define the statistics of the CMB temperature and polarisation fluctuations. Thus, to recover the cosmological parameters from the measured fluctuations, one has to study the statistical properties of those fluctuations. A handy instrument to describe these statistics is the decomposition of the CMB anisotropies in the basis of spherical harmonics, which is analogous to the usual Fourier transform, but on a spherical surface.

For the true CMB temperature anisotropies we have (here n is the direction in the sky):

∞ X
X `
T (n) = aT`m Y`m . (7.1)
`=0 m=−`

where a^T_{ℓm} are the coefficients of the decomposition. If the CMB temperature fluctuations T(n) are assumed to be gaussian, then the coefficients a^T_{ℓm} are gaussian variables with mean zero (⟨a^T_{ℓm}⟩ = 0) and covariance


\left\langle a^T_{\ell m} a^{T*}_{\ell' m'} \right\rangle = \delta_{\ell\ell'}\,\delta_{m m'}\, C_\ell^{TT\,cosmo},   (7.2)


where C_ℓ^{TT cosmo} is called the temperature power spectrum and the angle brackets stand for the ensemble average over all the possible realizations of the Universe. This power spectrum is defined by the true cosmological parameters. We try to estimate it from the only realization of the Universe at our disposal:

C_\ell^{TT} = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} |a_{\ell m}|^2,   (7.3)

which is distributed according to a χ² distribution with 2ℓ+1 degrees of freedom. It deviates from C_ℓ^{TT cosmo} with standard deviation C_ℓ^{TT cosmo} √(2/(2ℓ+1)) (compare with equation 2.5).

7.1.1 Noisy sky with realistic resolution

Now the measured map contains noise and the resolution of the map is limited by the
beam resolution and the map pixelization, so the measured map is equal to T̃ (n) + N (n),
where T̃ (n) is the CMB true sky, convolved by the instrumental beam and pixelized (it
is what we called x̃ in the 4th chapter) and N (n) is the noise map. Here we consider an
experiment that observes the full-sky with uniform coverage. The map T̃ (n) could be
expressed in the spherical harmonics as

\tilde{T}(n) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} a^T_{\ell m}\, p_\ell\, B_\ell\, Y_{\ell m},   (7.4)

where p` and B` are the pixel and beam window functions respectively (this approxima-
tion is correct only if the beam is axisymmetric and the pixels are much smaller than
the beam resolution). Assuming that the noise is not correlated with the signal, the
covariance matrix of the measured map is

\left\langle \left(\tilde{T}(n_1) + N(n_1)\right)\left(\tilde{T}(n_2) + N(n_2)\right)^T \right\rangle = \left\langle \tilde{T}(n_1)\tilde{T}(n_2)^T \right\rangle + \left\langle N(n_1)N(n_2)^T \right\rangle = \sum_\ell \frac{2\ell+1}{4\pi}\, C_\ell\, (p_\ell B_\ell)^2\, P_\ell(n_1, n_2) + N,   (7.5)

where N is the noise covariance matrix and P` (n1 , n2 ) are the Legendre polynomials.
Thus the observed power spectrum is (p` B` )2 C` . Note, that the functions p` and B` are
known, so we can correct for them.

7.1.1.1 Pixel and beam window functions

Unlike other instruments, QUBIC observes the sky with a complex synthesized beam, which is clearly not axisymmetric. We already mentioned in the previous section that the beam window function B_ℓ can be used only in the case of an axisymmetric beam. Otherwise the convolution of a map with a non-axisymmetric beam requires rotating the convolution kernel of the beam in a_ℓm space. In other words, convolution with an asymmetric beam is not equivalent to a multiplication in ℓ-space, and this is the case for QUBIC. But we perform the deconvolution from the non-axisymmetric features of the synthesized beam at the map-making stage (the map-making can be considered as an effective deconvolution from the multi-peaked features of the synthesized beam). After that, the reconstructed sky remains convolved only with an axisymmetric peak function.

When observing the sky with a finite resolution beam, the sky map is effectively convolved with the beam; in harmonic space this corresponds to a multiplication of the input spectra by the beam window function. To recover the true spectra from the reconstructed ones we therefore have to divide them by the spectrum of the instrument beam. In the case of a gaussian beam the beam window function is approximated as

B_\ell(\sigma) = \exp\left[-\frac{1}{2}\,\ell(\ell+1)\,\sigma^2\right],   (7.6)
where σ is the width of the beam [107].

In the case of QUBIC we use the approximation of the peaks of the synthesized beam described in chapter 4. The QUBIC beam is not axisymmetric and hence its window function is very non-trivial. But what we call the QUBIC beam window function B_ℓ is not the spherical harmonic representation of the synthesized beam, but the window function of only one peak of the synthesized beam. And it is precisely this window function from which we deconvolve our spectra.

The pixelization of the sky acts in a similar way to the beam. It smoothes the CMB: we have no access to the angular scales below the pixel resolution [95]. By definition
p_\ell^2 = \frac{C_\ell^{pix}}{C_\ell^{unpix}},   (7.7)

where C_ℓ^{pix} is the spectrum measured from the pixelized sky map and C_ℓ^{unpix} is the ideal unpixelized spectrum. The pixel window function p_ℓ is approximated as an average of the window functions of all the pixels on the map. Pixel window functions for several nside parameters are shown in figure 7.1.

Figure 7.1: Pixel window functions for nside equal 64, 128 and 256.

7.1.2 Pseudo-spectrum

For any real experiment, even for those measuring the full sky, like WMAP or Planck, the coverage is not uniform over the celestial sphere. In order to correctly estimate the power spectrum over a non-uniformly covered sky we should somehow weight the sky pixels according to the number of hits in each pixel. This is described by the window function, which multiplies the map T(n) and hence convolves the measured C_ℓ [108].

In the equation (7.1) we introduced the spherical harmonics decomposition of the CMB
fluctuations map. The coefficients aT`m of spherical harmonics decomposition of the tem-
perature fluctuations are defined as:

a^T_{\ell m} = \int T(n)\, Y_{\ell m}(n)\, dn.   (7.8)

The fact that a real instrument observes the noisy sky with incomplete coverage distorts
the measured values of aT`m . The coefficients of spherical harmonics decomposition of the
measured sky are called pseudo-a`m [109]:

\tilde{a}^T_{\ell m} = \int \tilde{T}(n)\, Y_{\ell m}(n)\, dn = \int T(n)\, W(n)\, Y_{\ell m}(n)\, dn,   (7.9)

where W (n) is the instrument window function. The pseudo-a`m relate to the real ones
as

\tilde{a}^T_{\ell m} = \sum_{\ell' m'} K_{\ell m,\,\ell' m'}\, a^T_{\ell' m'},   (7.10)

where K_{ℓm,ℓ'm'} = ∫ W(n) Y_{ℓ'm'} Y*_{ℓm} dn is the convolution kernel due to the window function. It induces coupling between different angular scales.

Now it is possible to construct the so-called pseudo-spectrum as a direct decomposition of an experimental map into spherical harmonics:

\tilde{C}_\ell^{TT} = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \tilde{a}^T_{\ell m}\, \tilde{a}^{T*}_{\ell m}.   (7.11)

Using equation (7.10) this turns to

\tilde{C}_\ell^{TT} = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \sum_{\ell'' m''} \sum_{\ell' m'} K_{\ell m,\,\ell' m'}\, K^*_{\ell m,\,\ell'' m''}\, a^T_{\ell' m'}\, a^{T*}_{\ell'' m''}.   (7.12)

To get the estimator of a true power-spectrum from here we use the frequentist approach:
let’s calculate the ensemble average of the pseudo-spectrum:

\left\langle \tilde{C}_\ell^{TT} \right\rangle = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \left\langle \tilde{a}^T_{\ell m}\, \tilde{a}^{T*}_{\ell m} \right\rangle
= \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \sum_{\ell' m'} \sum_{\ell'' m''} K_{\ell m,\,\ell' m'}\, K^*_{\ell m,\,\ell'' m''}\, \left\langle a^T_{\ell' m'}\, a^{T*}_{\ell'' m''} \right\rangle
= \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \sum_{\ell'} \sum_{m'=-\ell'}^{\ell'} C_{\ell'}^{TT}\, |K_{\ell m,\,\ell' m'}|^2
= \sum_{\ell'} C_{\ell'}^{TT}\, K_{\ell\ell'},   (7.13)

where

K_{\ell\ell'} = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \sum_{m'=-\ell'}^{\ell'} |K_{\ell m,\,\ell' m'}|^2   (7.14)

is the convolution kernel that describes the effect of partial sky coverage. K``0 mixes
spectra at different multipoles. This mixing is explained by the fact that the spherical
harmonics Y`m are not orthogonal on a cut sky.

It is evident that for a real experiment we are not able to take an ensemble average since
we observe only one sky. Instead we define an estimator Ĉ`T T

\tilde{C}_\ell^{TT} = \sum_{\ell'} K_{\ell\ell'}\, \hat{C}_{\ell'}^{TT}.   (7.15)

This estimator is unbiased, that is ⟨Ĉ_ℓ^{TT}⟩ is equal to the true underlying spectrum C_ℓ^{TT}. Taking into account the conclusions made for the full noisy sky in (7.5), we can finally write down the equation that defines the estimator of the CMB power-spectrum:

\tilde{C}_\ell^{TT} = \sum_{\ell'} K_{\ell\ell'}\, (p_{\ell'} B_{\ell'})^2\, \hat{C}_{\ell'}^{TT} + N_\ell,   (7.16)

where N` is the noise power spectrum. The CMB estimator Ĉ`T T can be obtained by
inverting this equation. This estimator is debiased from the noise and from the beam and
pixel window functions. To avoid the multipole mixing one has to invert the kernel K``0 .
Below we will consider the power-spectrum reconstruction methods Xpol and Xpure
which are based on this approach.

The first step towards estimating the power spectrum is the pseudo-spectrum. From equation (7.16) it is evident that the pseudo-spectrum is biased by the noise. If the noise is gaussian with constant amplitude over the map, then this bias is negligible on the large angular scales, but becomes significant on the smaller scales (higher multipoles). This is illustrated in figure 7.2.
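The effect is straightforward to reproduce with healpy; the input spectrum below is only a placeholder standing in for the theoretical C_ℓ:

import numpy as np
import healpy as hp

nside = 128
ell = np.arange(3 * nside)
cl_theory = 1e3 / (ell + 10.0) ** 2               # placeholder for the theoretical spectrum

cmb = hp.synfast(cl_theory, nside)                # one realization of the full sky
sigma = 0.5 * np.std(cmb)                         # white noise at half the map rms
noisy = cmb + sigma * np.random.randn(hp.nside2npix(nside))

cl_pseudo = hp.anafast(noisy)                     # biased by N_l = 4*pi*sigma^2 / npix
n_l = 4 * np.pi * sigma ** 2 / hp.nside2npix(nside)
cl_debiased = cl_pseudo - n_l                     # noise-debiased estimate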

7.1.3 Leakage problem

When we measure the polarization fluctuations of the CMB, we describe them in terms of the Q and U Stokes parameters. However, the Q and U parameters do not carry much direct cosmological meaning and it is more convenient to represent the sky in terms of the E and B modes of polarization, because those are directly linked to the primordial cosmological perturbations, see chapter 2. To obtain a spherical harmonics decomposition of the E and B fields we first introduce the spin-(±2) fields [110] as

Figure 7.2: Illustration of the noise bias for the pseudo-spectrum. Blue line is for the theoretical spectrum. Green line with errorbars is obtained with the pseudo-spectra of 100 simulations of the full sky, according to the theoretical spectrum, plus gaussian white noise with standard deviation σ_noise = σ_map/2, where σ_map is the standard deviation of the sky temperature fluctuations. Strong bias on the high multipoles is evident.

P±2 ≡ Q ± iU. (7.17)

These spin fields can be expressed in the harmonic space in the spin-weighted basis
±2 Y`m :

P_{\pm 2} = \sum_{\ell, m} {}_{\pm 2}a_{\ell m}\; {}_{\pm 2}Y_{\ell m}.   (7.18)

It turns out that the a`m representations for the E and B fields are

a^E_{\ell m} = -\frac{1}{2}\left({}_{2}a_{\ell m} + {}_{-2}a_{\ell m}\right),   (7.19)

a^B_{\ell m} = \frac{i}{2}\left({}_{2}a_{\ell m} - {}_{-2}a_{\ell m}\right).   (7.20)

From the measured sky Q and U polarization we have direct access only to the ±2 a`m .

In the similar way as we did for the temperature fluctuations, we can define spin-2
pseudo-a`m :

{}_{\pm 2}\tilde{a}_{\ell m} = \int \tilde{P}_{\pm 2}(n)\; {}_{\pm 2}Y_{\ell m}(n)\, dn = \int W(n)\, P_{\pm 2}(n)\; {}_{\pm 2}Y_{\ell m}(n)\, dn.   (7.21)

Where W (n) is the observation window function defined in (7.9). Now we can define the
E and B pseudo-a`m :

\tilde{a}^E_{\ell m} = -\frac{1}{2}\left({}_{2}\tilde{a}_{\ell m} + {}_{-2}\tilde{a}_{\ell m}\right)
= -\frac{1}{2}\int \left[\tilde{P}_{2}(n)\; {}_{2}Y_{\ell m}(n) + \tilde{P}_{-2}(n)\; {}_{-2}Y_{\ell m}(n)\right] dn
= -\frac{1}{2}\int W(n)\left[P_{2}(n)\; {}_{2}Y_{\ell m}(n) + P_{-2}(n)\; {}_{-2}Y_{\ell m}(n)\right] dn,   (7.22)
2

\tilde{a}^B_{\ell m} = \frac{i}{2}\left({}_{2}\tilde{a}_{\ell m} - {}_{-2}\tilde{a}_{\ell m}\right)
= \frac{i}{2}\int \left[\tilde{P}_{2}(n)\; {}_{2}Y_{\ell m}(n) - \tilde{P}_{-2}(n)\; {}_{-2}Y_{\ell m}(n)\right] dn
= \frac{i}{2}\int W(n)\left[P_{2}(n)\; {}_{2}Y_{\ell m}(n) - P_{-2}(n)\; {}_{-2}Y_{\ell m}(n)\right] dn.   (7.23)
2

Using decomposition of P±2 into spherical harmonics (7.18) one can finally write

\tilde{a}^E_{\ell m} = \sum_{\ell' m'} \left[K^{+}_{\ell m,\,\ell' m'}\, a^E_{\ell' m'} + iK^{-}_{\ell m,\,\ell' m'}\, a^B_{\ell' m'}\right],   (7.24)

\tilde{a}^B_{\ell m} = \sum_{\ell' m'} \left[-iK^{-}_{\ell m,\,\ell' m'}\, a^E_{\ell' m'} + K^{+}_{\ell m,\,\ell' m'}\, a^B_{\ell' m'}\right],   (7.25)

where

K^{\pm}_{\ell m,\,\ell' m'} = -\frac{1}{2}\int W(n)\left[{}_{2}Y^*_{\ell' m'}\; {}_{2}Y_{\ell m} \pm {}_{-2}Y^*_{\ell' m'}\; {}_{-2}Y_{\ell m}\right] dn.   (7.26)

Thus for the pseudo-a`m for E and B modes we have a mixture of the real E and B a`m .
Defining the convolution kernels

K^{\pm}_{\ell\ell'} = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \sum_{m'=-\ell'}^{\ell'} |K^{\pm}_{\ell m,\,\ell' m'}|^2,   (7.27)

we can relate the pseudo-spectra to the real ones:

\begin{pmatrix} \langle \tilde{C}_\ell^{EE} \rangle \\ \langle \tilde{C}_\ell^{BB} \rangle \end{pmatrix} = \sum_{\ell'} \begin{pmatrix} K^{+}_{\ell\ell'} & iK^{-}_{\ell\ell'} \\ -iK^{-}_{\ell\ell'} & K^{+}_{\ell\ell'} \end{pmatrix} \begin{pmatrix} C_{\ell'}^{EE} \\ C_{\ell'}^{BB} \end{pmatrix}.   (7.28)

Finally, for the case of realistic noisy observations

" # " #" # " #


+ −
C̃`EE X K``0 iK`` 0 Ĉ`EE
0 N`EE
= (p`0 B`0 )2 − +
+ . (7.29)
C̃`BB `0 −iK`` 0 K`` 0 Ĉ`BB
0 N`BB

This is called the leakage problem: the pseudo-spectra for polarization are a mixture of the real EE and BB spectra. Since the B signal is much lower than E, it is often called "E-to-B leakage", because the leakage of B modes into E is negligible. In principle it is possible to invert the last equation and obtain an unbiased estimator Ĉ_ℓ^{EE,BB}. But even then the variance of the E modes leaks into the variance of B [111].

7.1.4 Errorbars on the reconstructed spectra

Let’s estimate the idealistic errorbars which one would obtain using the most optimal
estimator from a noisy sky. And let’s do it by introducing the likelihood function [112]:

"   #
X `(` + 1) C̃`
− 2 ln P (x̃|C` ) = (2` + 1) log (C` p2` B`2 + N` ) + , (7.30)
2π C` p2` B`2 + N`
`

that defines the probability to measure the map x̃ having the underlying CMB power
spectrum C` . You may recognize here the beam and pixel window functions p` and B` ,
the noise spectrum N` and the power-spectrum C̃` . The maximum of the likelihood is
at Ĉ` = (C̃` − N` )/(p` B` )2 and the errors of this solution are defined by the inverse of
square root of the likelihood second derivative, which is

\frac{\partial^2 \log P(\tilde{x}|C_\ell)}{\partial C_\ell\, \partial C_{\ell'}} = \frac{2\ell+1}{2}\left(C_\ell + \frac{N_\ell}{(p_\ell B_\ell)^2}\right)^{-2} \delta_{\ell\ell'},   (7.31)

so the spectra errorbars are



\Delta C_\ell = \sqrt{\frac{2}{2\ell+1}}\left(C_\ell + \frac{N_\ell}{(p_\ell B_\ell)^2}\right).   (7.32)

Note that the square root that multiplies the formula is similar to what we have for the
cosmic variance. So it is quite natural to introduce here the effect of incomplete sky
coverage and finally get

\Delta C_\ell = \sqrt{\frac{2}{(2\ell+1)\, f_{sky}}}\left(C_\ell + \frac{N_\ell}{(p_\ell B_\ell)^2}\right),   (7.33)

where fsky is the fraction of observed sky. This formula is the ultimate limit of sensitivity
of any method of spectra reconstruction since it ignores the effect of leakage.

7.2 Xpol

The Xpol method is mainly dedicated to estimating the power spectra using cross-power spectra between different input maps of the same experiment or from different experiments [111]. Assuming that the noise is uncorrelated between maps, which is a fair assumption, the estimator built with Xpol is not biased by noise. The cross-power spectra are combined using a Gaussian approximation for the likelihood function.

If a^A_{ℓm} and a^B_{ℓm} are the temperature a_ℓm's of two independent maps, one can build their cross-spectrum:

C_\ell^{AB} = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} a^A_{\ell m}\, a^{B*}_{\ell m}.   (7.34)

The pseudo-cross-spectrum takes into account the noise and window function biases of the measured cross-spectrum:

\tilde{C}_\ell^{AB} = \sum_{\ell'} K^{AB}_{\ell\ell'}\, p_{\ell'}^2\, B^A_{\ell'} B^B_{\ell'}\, \left\langle \hat{C}_{\ell'}^{AB} \right\rangle   (7.35)

The noise cross-spectrum N`AB does not appear here because of the reasons explained
in the beginning of this section. From this one can derive the estimator for the cross-
spectrum Ĉ`AB . If we have N maps and hence N (N − 1)/2 different cross-spectra, we
can combine them making use of the likelihood approach. Approximating the likelihood
function as a gaussian we have

-2 \log L = \sum_{ij} \left[(\hat{C}^i_\ell - \hat{C}_\ell)\, |\Xi^{-1}_{\ell\ell'}|_{ij}\, (\hat{C}^j_{\ell'} - \hat{C}_{\ell'})\right],   (7.36)

where the indices i, j run over the different pairs of maps and Ξ^{ij}_{ℓℓ'} is the cross-correlation matrix, which is computed analytically. Maximizing this likelihood function we obtain an estimator Ĉ_ℓ of the power-spectrum of the sky.

Currently we use the Xpol method to estimate the power spectra from QUBIC simulations for only one band, so we do not exploit the main advantage of the method and apply it to only one map. Thus the obtained result is biased by noise. But Xpol can be useful to compute the cross-spectra of the two bands. The problem is that we would then have to do the component separation separately for each band. The component separation works well when the input maps are measured at sufficiently different frequencies, so the advantage of using Xpol would be mitigated by the worse component separation.

Xpol can also be useful when we have data from the RA12 PolarBear field. Then we will be able to correlate the maps of the two experiments and improve the power-spectra resolution. However, combining two data sets with two different scanning strategies and different TOD filtering is a highly non-trivial task. Moreover, the PolarBear experiment mainly concentrates on the region of high multipoles (we remind that PolarBear aims to measure the lensing effect on the BB spectrum). So it is hard to know now whether this analysis will be useful or not.

7.3 Xpure

Both in the pseudo-spectrum approach and in Xpol we attempted to define the coefficients of the spherical harmonics decomposition on a cut sky. The spherical harmonics are orthogonal only on the full sky, so the definition of the a_ℓm's on a cut sky necessarily leads to the leakage of E to B. The Xpure method uses another approach – to weight the spherical harmonics themselves by the window function, thus defining a pure basis for the E and B modes [110]. If ð is the spin-raising and ð̂ the spin-lowering operator, then we can define the pure pseudo-a_ℓm's as

\tilde{a}^E_{\ell m} = -\frac{1}{2}\sqrt{\frac{\ell-2}{\ell+2}} \int \left[P_{2}(n)\left(\eth\eth\, W(n) Y_{\ell m}(n)\right)^* + P_{-2}(n)\left(\hat{\eth}\hat{\eth}\, W(n) Y_{\ell m}(n)\right)^*\right] dn,   (7.37)

\tilde{a}^B_{\ell m} = \frac{i}{2}\sqrt{\frac{\ell-2}{\ell+2}} \int \left[P_{2}(n)\left(\eth\eth\, W(n) Y_{\ell m}(n)\right)^* - P_{-2}(n)\left(\hat{\eth}\hat{\eth}\, W(n) Y_{\ell m}(n)\right)^*\right] dn.   (7.38)

With such a definition the pure pseudo-a^{E,B}_{ℓm}'s contain only E and B modes respectively. Thus the power spectrum estimator is completely free of the leakage. To allow the decomposition on the pure basis, the window function must satisfy the sufficient condition W = 0 and ðW = 0 on the edge of the field. Therefore the Xpure method requires the apodization of the binary mask on the sky under which we measure the CMB anisotropies. The apodization length is the angular distance over which the mask changes from 0 to 1. The shorter the apodization length, the worse the method works (meaning more leakage remains in the reconstructed spectrum). But increasing the apodization length means reducing the effective sky coverage and thus increasing the sample variance. When dealing with the Xpure method we have to adjust the apodization length carefully to achieve a balanced solution between these two counteracting effects. The choice of the apodization length for the QUBIC analysis is shown in figure 7.3. It is evident that the effect of the apodization is most important at the low multipoles. The increase of the errorbars due to the sample variance is included in the figure, though it is relatively small: it is two orders of magnitude lower than the Xpure proper errorbars. At multipoles from around 60 and higher the difference is negligible. Remember that the peak of the primordial B-modes is expected at ℓ ∼ 70. So we are free to pick any value of the apodization length. We choose 900 since it gives a slightly better result at ℓ = 50 than the sharper apodizations.

7.4 Spice

The Spice method [13] differs from Xpol and Xpure by introducing the angular correlation
function of the signal at distance θ:

\xi(\theta) = \sum_\ell \frac{2\ell+1}{4\pi}\, C_\ell\, P_\ell(\theta),   (7.39)

where P_ℓ(θ) is the ℓ-th Legendre polynomial. Thus the recipe to extract the power spectra from a map is the following: first, we measure the two-point correlation function; next, we smooth it with Gaussian kernels centered on the roots of the Legendre polynomials and integrate to obtain the C_ℓ. The full sky C_ℓ is given by

Figure 7.3: Xpure errorbars of BB power spectrum for different apodization lengths
from 0 to 3◦ . Sample variance for r = 0.02 included.

C_\ell = 2\pi \sum_k w_k\, \xi(\theta_k)\, P_\ell(\theta_k),   (7.40)

where w_k are the weights of the Gauss-Legendre quadrature.
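The consistency of equations (7.39) and (7.40) can be checked numerically with a Gauss-Legendre quadrature; the snippet below is a self-contained toy check, not the Spice implementation itself:

import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

lmax = 64
cl_in = np.zeros(lmax + 1)
cl_in[20] = 1.0                                    # a single non-zero input multipole

x, w = leggauss(lmax + 1)                          # nodes x = cos(theta) and weights
ells = np.arange(lmax + 1)

# xi(theta_k) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta_k), equation (7.39)
xi = np.array([np.sum((2 * ells + 1) / (4 * np.pi) * cl_in * eval_legendre(ells, xk))
               for xk in x])

# C_l = 2 pi sum_k w_k xi(theta_k) P_l(cos theta_k), equation (7.40)
cl_out = 2 * np.pi * np.array([np.sum(w * xi * eval_legendre(l, x)) for l in ells])
# cl_out recovers cl_in to numerical precision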

For CMB maps measured in an experiment, the signal in pixel i contains contributions both from the CMB and from noise. The noise is assumed to be gaussian with correlation matrix N_ij. Thus the full pixel-to-pixel correlation matrix is

Cij = ξij + Nij . (7.41)

To estimate ξij (θ) we use:

\tilde{\xi}(\theta) = \sum_{ij} f_{ij}\left(T_i T_j - N_{ij}\right),   (7.42)

where the coefficients f_ij are zero unless pixels i and j belong to the particular bin in θ, and Σ_ij f_ij = 1. This estimator is unbiased: ⟨ξ̃(θ)⟩ = ξ(θ), where the angle brackets stand for the ensemble averaging.

Figure 7.4: C` calculated with Spice in 1298 BOOMERanG-like simulations and then
rebinned into flat C` bands with a width of 50. The small points show the individual
measurements, with the error bars representing the standard deviations in each band.
The theoretical error bars of equation 7.33 are displayed and shifted to the right for
clarity. The arrows point to the effective beam and pixel scales [13].

The noise contribution and sample variance contribute to the errors of the method.
Figure 7.4 demonstrates good efficiency of the method. This figure shows the errorbars
of the method itself together with the theoretical errors, obtained by the formula (7.33).
It is clear, that the result is unbiased and has nearly optimal errorbars.

The PolSpice method is an extension of Spice to reconstruct the polarization spectra of


the CMB. Here we call it just Spice.

7.5 Choosing coverage threshold

An important step for the spectra reconstruction is the choice of the map mask under
which we attempt to reconstruct the spectra. The mask should exclude the unseen

pixels as well as the noisy ones. In the framework of this thesis we use only these two criteria, but in principle the bright point sources on the sky should also be masked. The uncertainty on the power spectrum is defined by the formula (7.33). Omitting the sample variance, it is proportional to NET²/√f_sky, where NET is the noise equivalent temperature of the reconstructed map.

The noise in a pixel is roughly inversely proportional to the square root of the number of hits in this pixel. It is not an exact dependence since for a bolometric interferometer we observe a mixture of signals from different directions: some of these directions correspond to well observed pixels and some to poorly observed ones. Moreover, the noise depends on the filtering, that is on the scanning strategy. However, we can apply a mask based only on the coverage map, since the noise variance in pixels with equal coverage is almost the same. The NET estimated on the reconstructed map under the mask is smaller if we take a tighter mask, because we then keep only the well observed pixels and reject the poorly observed ones. On the other hand, a tight mask means a high sample variance. In our experience, one should use the coverage threshold 0.2 to reconstruct the power spectra from individual sub-bands and 0.05 for the ILC map. The coverage threshold t defines the mask as COV > t · max(COV).
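In code the mask is a one-liner; a minimal sketch:

import numpy as np

def coverage_mask(cov, t):
    # Boolean mask keeping the pixels with COV > t * max(COV).
    return cov > t * cov.max()

# e.g. coverage_mask(cov, 0.2) for a single sub-band, coverage_mask(cov, 0.05) for the ILC map;
# f_sky = coverage_mask(cov, t).mean() for a full-sky HEALPix coverage map.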

7.6 Choosing method

To choose the proper method we run Monte-Carlo simulations of 24 hours of QUBIC operation at the Dome C site, as this site allows the sky field of interest to be observed all day long. The noise level is normalised to simulate 2 years of data taking. The knee frequency of the 1/f noise is set to 1 Hz, which is a realistic value, see the value used for the BICEP-2 experiment in [113]. We simulate a foreground-free CMB sky with r = 0. Observations are simulated in the 150 GHz band in the monochromatic approximation. We use the gaussian approximation for the peaks of the synthesized beam (we recall that we model the peaks of the synthesized beam either as gaussians or with the more accurate "rippled" shape; the latter is more precise, but it does not change the results of the spectra reconstruction in these simulations). The number of realizations is 8. After reconstruction, the spectra are corrected for the QUBIC beam and pixel window functions. The reconstructed spectra for all the listed methods are shown in figure 7.5. It is clear that on average all the methods give good results. We estimate the errorbars of the reconstructed spectra as the standard deviation between the different realizations. One should take these results with care, since the number of realizations is quite small. The errorbars are shown in figure 7.6.

It is clear that Xpol gives the worst errorbars for the BB spectrum (however, don't forget about the small number of simulations used to produce this result). The Xpure and Spice errorbars

Figure 7.5: Reconstructed QUBIC power spectra. Green line – spectrum used as an
input for simulations, blue – Xpure reconstruction, red – Xpol, cyan – Spice. (The BB
spectrum is on the middle left plot).

Figure 7.6: Errorbars of reconstructed QUBIC power spectra. Line colors are the
same as on the figure 7.5. The BB spectrum errorbars are on the middle left plot.

are much tighter, although Xpure has increased errors at low ℓ for the BB spectrum. It is also remarkable that the Spice errorbars are much lower for the EB spectrum, which is of crucial importance for the B-modes detection: since E and B modes arise from different physical effects they should be uncorrelated and the EB spectrum must be zero. So, we certainly won't use Xpol. Actually, Xpol is not designed to measure auto-spectra (spectra from a single map), so it is not surprising that it gives a bad result.

The choice between Xpure and Spice is more difficult since they give almost the same results. The errorbars of Spice are better than those of Xpure for almost all the spectra. However, Spice gives a strange bias for the TT and BB spectra. One technical difference between the methods is that Xpure is implemented and deployed on the NERSC supercomputer, while Spice can be run on a personal computer, which is much more convenient. This does not mean that the Xpure code is heavier and Spice lighter: both codes have a complexity of O(N^{3/2}), where N is the number of pixels. The Xpure code we use was originally implemented for the PolarBear data analysis. Since PolarBear works with CMB maps of much higher resolution, they need to run the code on NERSC. For us it is not crucial.

This study does not allow us to choose between Xpure and Spice, so we keep using both methods. However, anticipating, we can say that only Xpure turns out to be good for realistic simulations of multi-band observations of QUBIC.

7.7 Conclusions

In this chapter we considered the problem of spectra reconstruction as a whole and examined three spectra reconstruction methods: Xpol, Xpure and Spice. All three methods give excellent results, with a preference for Spice and Xpure. We pick these two methods for later use.
Chapter 8

Scanning strategy

In this chapter we discuss the issues concerning the scanning strategy. We run a scan over the scanning strategy parameter space and find an optimal set of parameters that allows us to mitigate the 1/f noise and the loss of observational efficiency due to dead time.

The scanning strategy is the way the instrument is oriented as a function of time. By optimising the scanning strategy one can hope to achieve a better sensitivity of the experiment. It may help to mitigate the 1/f noise, reduce the sample variance, reduce the noise variance, improve the systematics, avoid noisy parts of the sky and reduce the E-to-B leakage by adjusting the shape of the observed patch. In this chapter we consider ways to optimise the scanning strategy, what results we can obtain and what the limitations of the scanning strategy are, and finally we propose a baseline scanning strategy for the QUBIC experiment.

8.1 Sensitivity of an imager

The sensitivity of an experiment depends on the parameters of the scanning strategy. We start the examination of this dependence with the simpler case of an imager. This question is considered in the works [114] and [115].

We already introduced the formula (7.33) for the errorbars on the estimated power spectrum. Let's repeat it:

\Delta C_\ell = \sqrt{\frac{2}{(2\ell+1)\, f_{\rm sky}}} \left( C_\ell + \frac{N_\ell}{(p_\ell B_\ell)^2} \right). \qquad (8.1)

Here f_sky is the observed fraction of the sky, C_ℓ is the true underlying power spectrum, N_ℓ is the noise power spectrum and p_ℓ and B_ℓ are the pixel and beam window functions,

as described in 7.1.1.1. Here we already see one parameter that depends on the scanning strategy: the sky fraction f_sky. From (8.1), the errors on C_ℓ are proportional to the inverse square root of f_sky. From this point of view it is better to observe a large fraction of the sky, as this minimizes the sample variance.

However, by increasing the fraction of observed sky we also increase the noise variance. Broadly speaking, the number of time samples is fixed by the observation time, and we can distribute these samples as we want: we can either observe a tiny patch of the sky very deeply, so that the noise on this patch is strongly reduced, or spread our samples over a large patch, in which case each pixel is measured only a few times and the noise is increased. The noise variance arises from the instrumental, atmospheric and foreground noise. While the first one can only be reduced by instrumental means, the foreground and atmospheric contamination depend on the sky coverage. Naively, the noise rms in each pixel is inversely proportional to the square root of the number of hits in that pixel, or, in other words, the pixel noise variance is proportional to the sky fraction f_sky. Thus by adjusting the sky coverage one can reach the minimum that balances the sample and noise variances.

The noise power spectrum N_ℓ is equal to [105]:

N_\ell = \frac{2\,\eta\,\mathrm{NET}^2\,\Omega}{t\,\epsilon_{\rm im}}, \qquad (8.2)

where NET is the noise equivalent temperature of the detectors, that is, the signal temperature needed to match the noise level; t is the observation time; Ω is the solid angle of the observed field, equal to ∫ c_n(n) dn, where c_n is the hit map normalized to 1 at its maximum, and Ω is equal to 4π f_sky; η is called the apodization factor and is equal to ∫ c_n(n) dn / ∫ c_n²(n) dn; and ε_im is the optical efficiency of the imager. Thus, finally, the errorbars on the reconstructed spectrum for an imager are defined as

\Delta C_\ell^{\rm im} = \sqrt{\frac{2}{(2\ell+1)\, f_{\rm sky}\, \Delta\ell}} \left( C_\ell + \frac{8\pi\,\eta\,\mathrm{NET}^2 f_{\rm sky}}{p_\ell^2 B_\ell^2\, t\, \epsilon_{\rm im}} \right), \qquad (8.3)

where Δℓ is the bin width of the binned reconstructed spectrum. The optimal choice for the bin width is the multipole range that corresponds to the angular scale of the observed field of the instrument; for QUBIC we use Δℓ = 20. In equation (8.3) one can clearly see that the sensitivity of a CMB imager is the sum of the sample variance (first term in the brackets) and the noise variance (second term). The sample variance term behaves as O(f_sky^{-1/2}) and the noise variance as O(f_sky^{1/2}). Thus there may be a minimum between these two factors. With a strong signal this minimum tends towards high coverage.

With increasing noise and decreasing observational efficiency, the minimum moves towards low f_sky. Since f_sky depends on the scanning strategy, this minimum can be reached by adjusting the scanning strategy parameters.

In practice we are looking for the minimum between the noise variance, which is known, and the sample variance, which is unknown. The sample variance is unknown because it depends on the true power spectrum, which, in turn, depends on the value of the tensor-to-scalar ratio r, which is unknown. We look for the optimal scanning strategy in two cases: one with zero sample variance, that is, r = 0 (here we do not account for the lensing signal), and another with r = 0.02, which is twice the target sensitivity of QUBIC. In the end we will be able to propose a scanning strategy depending on the value of r. Anticipating the results, we can say that the optimal scanning strategy is almost insensitive to the exact value of r.

Another parameter in the formula (8.3) that depends on the scanning strategy is η. It describes the shape of the coverage field. In the ideal case of uniform coverage with a top-hat profile, η = 1; the worst possible value of η is 2. For a realistic experiment the apodization factor η takes a value between 1 and 2.

8.2 1/f noise

The formula (8.3) is obtained assuming no correlation of the noise between the pixels of the sky. But we know that the electronic and, especially, the atmospheric noise is characterised as pink, or 1/f, noise, whose low frequencies are dominant. To get rid of the low-frequency component of the noise we apply a high-pass filter to the data, but if the low frequencies of the noise are not completely removed, they manifest themselves as stripe-like features on the reconstructed map. Moreover, the filtering not only reduces the noise but also removes some signal.

1/f noise is characterised by the noise-equivalent power NEP of the background white noise, the slope of the low-frequency part of the spectrum and the knee frequency, where the 1/f noise turns into white noise. The knee frequency f_knee has a typical value of order 1-2 Hz; for reference see [116] for the Atacama desert atmospheric conditions (Atacama is the Chilean name for the same desert as Puna), [117] for the South Pole and [118] for general information on the subject of atmospheric contamination. Because the 1/f noise is not accounted for in the formula (8.3), we might expect that the minimum of the spectrum variance will move towards even smaller f_sky.

8.3 General approach to the scanning strategy and instrumental constraints

Taking into account all the factors listed above, we derive the general approach to the QUBIC scanning strategy. The way we construct it is quite common for ground-based CMB experiments. The scanning strategy can be written as a set of four functions of time: (az(t), el(t), ψ(t), φ(t)), where az and el are the azimuth and elevation of the instrument, ψ is the angle of rotation around the optical axis and φ is the angle of rotation of the half-wave plate.

8.3.1 Azimuthal and elevation rotations

The atmospheric noise strongly depends on the thickness of the air through which we observe the sky, so the noise level depends on the elevation el(t). Since the 1/f noise is only partly removed by high-pass filtering of the data, we do not want the noise characteristics to change quickly, that is, we want to keep the elevation constant during a long period of time. Thus we come to the general approach for az(t) and el(t): we scan back and forth at constant elevation within a range of ±½ Δaz around the centre of the field, which is fixed in Galactic coordinates. One back-and-forth scan will be called a sweep. In horizontal coordinates the centre of the field slowly moves due to the daily rotation of the Earth; the azimuth of the centre of the field is always kept at the centre of each sweep. After a number N of sweeps at constant elevation, the field of interest moves away from the field of view of the instrument, so we change the elevation to match the centre of the field and start sweeping again. This approach allows us to filter out the 1/f noise: during each sweep we observe each point in the field of view twice, with a time interval of the order of the azimuth sweeping period. If the noise is strong at low frequencies, the noise component of these two measurements will be correlated. Let us return for a moment to the map-making. We recall that the CMB map is reconstructed using the estimator

\tilde{x} = (H^T N^{-1} H)^{-1} H^T N^{-1} y. \qquad (8.4)

We are now interested in the noise covariance matrix N. In the approximation of perfectly stationary noise it is a Toeplitz matrix in which each row describes the covariance in time of the noise of each detector. For the reason explained above, the TOD y also carries some time correlation. The multiplication N^{-1} y effectively prewhitens the noise, i.e. it makes the noise on the TOD white. Thus on average we project out the low-frequency component of the noise. The trick works more efficiently if

the period of sweeping is short. Keeping the elevation constant also helps to control the atmospheric noise, which changes with elevation: the noise intensity is proportional to the thickness of the atmosphere, which, in the flat-Earth approximation, is proportional to the inverse sine of the elevation and changes by a factor 1.7 when the elevation goes from 60° to 30°.

This general approach to the azimuth and elevation rotation for QUBIC, in the case of observations from the Concordia site, is shown in figure 8.1. For the Puna desert site one could plot a similar figure, but due to the large daily dead time it would be less illustrative.

Figure 8.1: Elevation (top panel) and azimuth (bottom panel) for the QUBIC scanning strategy with a period of 2 hours at constant elevation, a constant angular speed of 1°/s and a dead time of 5 s. The dead time is shown as sections of constant azimuth at both edges of each sweep.
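The back-and-forth pattern of figure 8.1 can be sketched with a few lines of code. The snippet below is a simplified illustration, not the QUBIC pointing code: the drift of the field centre in horizontal coordinates is replaced by a crude linear drift, and the elevation update is a fixed step every t_el_min minutes.

import numpy as np

# Schematic az(t)/el(t) generator for constant-elevation back-and-forth sweeps.
# Field-centre drift and elevation update are crude placeholders; a real
# implementation would convert the Galactic field centre to horizontal coordinates.
def make_pointing(duration_h=6.0, t_el_min=120.0, delta_az=20.0, az_speed=1.0,
                  dead_time=5.0, f_samp=1.0, el0=50.0, el_step=2.0, drift_deg_h=3.0):
    t = np.arange(0.0, duration_h * 3600.0, 1.0 / f_samp)   # time samples [s]
    half = delta_az / az_speed + dead_time                  # one half-sweep [s]
    tau = t % (2.0 * half)                                  # phase within a sweep
    # forward half: hold at the left edge during the dead time, then scan right
    off_fwd = np.clip((tau - dead_time) * az_speed, 0.0, delta_az)
    # backward half: hold at the right edge, then scan back to the left
    off_bwd = delta_az - np.clip((tau - half - dead_time) * az_speed, 0.0, delta_az)
    az_off = np.where(tau < half, off_fwd, off_bwd) - delta_az / 2.0
    az_centre = drift_deg_h * t / 3600.0                    # crude field-centre drift
    el = el0 + el_step * np.floor(t / (t_el_min * 60.0))    # elevation re-pointings
    return az_centre + az_off, el

az, el = make_pointing()
print(az[:3], el[:3], az.size)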

The main instrumental constraint on the azimuthal rotation is the acceleration: for any part of the instrument it must be less than g/2 (this requirement is satisfied with a good margin of safety). For the elevation, the instrumental constraint arises from the design of the pulse tubes used in the QUBIC cryogenic system. The maximum inclination allowed for the pulse tubes is 20°, and QUBIC is designed to allow elevations from 30° to 70°.

8.3.2 Rotations around the optical axis of the instrument

As we said in chapter 3, the QUBIC mount system allows rotation of the instrument in azimuth, in elevation and around the optical axis (see figure 3.4). But there is no point in rotating the instrument in ψ (around the optical axis) while scanning the sky. The reasoning is the following: let us call ψ_{1,2} the angles of the instrument around the optical axis on two subsequent half-sweeps. If ψ_1 ≠ ψ_2, the same directions on the sky are observed with different detectors and the data filtering does not work. It only works if ψ_1 = ψ_2, that is, while sweeping back the instrument should repeat its own path in ψ. But then there is no point in rotating the instrument in ψ, because it does not improve anything. Moreover, due to mechanical vibrations the angles ψ_{1,2} on the back and forth sweeps will always be slightly different. So instead of rotating in ψ we should keep the angle ψ constant during a long period of observations. Another reason to keep ψ constant is that otherwise the airmass of each detector varies during a sweep, which is exactly what we try to avoid by keeping the instrument at constant elevation for a long time.

On the other hand, it is recommended to rotate the instrument in ψ from time to time in order to observe the same sky with different detectors, as this allows us to reduce systematic effects. One of the important characteristics of the scanning strategy quality is the overlap of the detectors, in per cent:

\lambda = n_d \, \frac{\int \left( \prod_{i=1}^{n_d} c_n^i(n) \right)^{1/n_d} dn}{\int \sum_{i=1}^{n_d} c_n^i(n)\, dn} \times 100\% \qquad (8.5)

where n_d is the number of detectors, the numerator contains the geometric mean of the coverages c_n^i of the different detectors, each normalized to one, and the denominator contains their arithmetic mean. If the overlap is zero, each detector looks at its own path on the sky. A low overlap of the detectors leads to additional systematics due to the construction differences between detectors. Figure 8.2 presents the overlap for several scanning strategies that differ from one another by the value of Δaz. One can clearly see that the overlap for QUBIC is always larger than 99%. Moreover, rotating the instrument in ψ from time to time allows us to increase the overlap even further. Since the overlap for QUBIC is almost perfect anyway, we do not consider it a crucial criterion for the scanning strategy.
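For illustration, here is a minimal sketch of how the overlap statistic (8.5) can be evaluated on plain pixel arrays (a real implementation would use the HEALPix coverage maps of the individual detectors); the toy coverages below are random placeholders.

import numpy as np

# Sketch of the detector-overlap statistic of eq. (8.5) on plain pixel arrays.
def overlap(coverages):
    """coverages: array (n_det, n_pix), each detector coverage normalized to max 1."""
    geo = np.exp(np.mean(np.log(np.clip(coverages, 1e-30, None)), axis=0))
    geo[coverages.min(axis=0) <= 0] = 0.0        # pixels unseen by some detector
    ari = np.mean(coverages, axis=0)             # arithmetic mean per pixel
    return 100.0 * geo.sum() / ari.sum()         # integrals -> sums over pixels

# toy example: three detectors with nearly identical random coverages
rng = np.random.default_rng(0)
base = rng.random(1000)
covs = np.array([base * rng.uniform(0.95, 1.0, 1000) for _ in range(3)])
covs /= covs.max(axis=1, keepdims=True)
print("overlap = %.2f %%" % overlap(covs))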

We are also planning to use the rotation around the optical axis for self-calibration. As a reminder, self-calibration implies the observation of a point source with different horn baselines. During self-calibration we have background noise from the ground; to mitigate it we can rotate the instrument in ψ and thus modulate the self-calibration observations, after which the noise from the ground can be efficiently removed.


Figure 8.2: QUBIC overlap, calculated for 100 randomly picked detectors of a focal
plane, considering observations from the Dome-C site.

Within the framework of this thesis we consider the angle ψ to be constant and equal to zero. The instrumental constraint on ψ is the same as for the elevation: max(ψ) = ±20°.

8.3.3 HWP rotation and dead time

The half-wave plate (HWP) rotation allows us to modulate the polarized signal. For the same reasons as described in the paragraphs above, we do not want the HWP to rotate during a sweep; instead, the HWP should be rotated after each back-and-forth sweep (or after some number of sweeps). We already discussed the HWP rotation in chapter 3, so let us quickly recall the conclusions. The signal on the focal plane is a linear combination of the Stokes parameters of the incoming radiation with coefficients sin(4φ) and cos(4φ). Thus the reasonable range for φ is [0°, 90°] and the rotation step is π/16 = 11.25°.

Instrumentally, the rotation of a cold HWP is challenging and takes time. During the HWP rotation we cannot know the real value of φ because of unavoidable vibrations. Thus the data taken by the instrument during the HWP rotation are bad, so this time is counted as dead time for QUBIC. In fact, we should have dead time at both

edges of a sweep for the same reason: when the instrument changes the direction of its azimuthal rotation, that is, when it moves with angular acceleration, the whole instrument vibrates. These vibrations are not very strong and do not affect the QUBIC cryogenics; however, they spoil the pointing, so we should preferably not take data at both ends of the sweep. The dead time for QUBIC is about 1 second per sweep edge and probably has to be increased when we rotate the HWP. For our simulations we use a dead time equal to 1 s. A long dead time in combination with fast azimuthal rotations and a narrow azimuthal range reduces the observational efficiency.
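The resulting loss of observational efficiency is simple bookkeeping; the sketch below assumes only the quantities defined above (azimuth range, angular speed, dead time per sweep edge).

# Fraction of time spent taking useful data for back-and-forth sweeps of amplitude
# delta_az at angular speed omega, with t_dead seconds lost at each sweep edge.
def sweep_efficiency(delta_az_deg, omega_deg_s, t_dead_s=1.0):
    t_scan = 2.0 * delta_az_deg / omega_deg_s      # scanning time per full sweep
    return t_scan / (t_scan + 2.0 * t_dead_s)      # two edges per full sweep

for daz, w in [(20.0, 1.0), (30.0, 3.0), (50.0, 0.5)]:
    print(daz, w, "->", round(sweep_efficiency(daz, w), 3))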

8.4 Sensitivity of a bolometric interferometer

In section 8.1 we considered the sensitivity of an imager and already derived from it several important conclusions for the scanning strategy. Now let us consider the sensitivity of a bolometric interferometer. The fact that we observe the sky with a synthesized beam, formed as an interference pattern between the beams of the individual pupils (horns), changes the noise variance (in K) as [105]:

\sigma^2_{\rm noise} = \frac{N_h}{N_{\rm eq}(\ell)} \, \frac{4\pi\, \mathrm{NET}\, f_{\rm sky}}{\sqrt{t}}, \qquad (8.6)

where NET, f_sky and t are, as defined in section 8.1, respectively the noise equivalent temperature (in units of µK·s^{1/2}), the sky coverage fraction and the observation time. N_h is the number of horns and N_eq(ℓ) is the number of equivalent baselines. Let us discuss these two last factors more precisely.

The synthesized beam was defined in equation (4.15) as the interference of the signals from the N_h horns. We said in section 3.1.3 that the interference pattern from all the equivalent baselines should be the same, since all the equivalent baselines produce the same phase shift (we recall that the term baseline denotes a pair of horns; equivalent baselines are baselines with the same relative positions of the horns). This is precisely the idea on which the self-calibration technique is based. When the signals from all the equivalent baselines are summed up, the sinusoidal fringe pattern of each baseline is multiplied in amplitude by N_eq. This is what is called a coherent summation. It is shown in [119] that in this case the noise variance scales as N_h/N_eq². Each baseline has its own narrow range in ℓ, which corresponds to the spatial period of the sinusoidal pattern of this baseline, so N_eq is a function of ℓ.

Another effect, already briefly discussed in chapter 5, is the bandwidth smearing, described by the factor κ_1(ℓ) = \sqrt{1 + \frac{(\Delta\nu/\nu)^2\, \ell^2}{\sigma_\ell^2}}. Finally, the sensitivity of a bolometric interferometer is [98]:

\Delta C_\ell^{\rm bi} = \sqrt{\frac{2\,\kappa_1(\ell)}{(2\ell+1)\, f_{\rm sky}\, \Delta\ell}} \left( C_\ell + \frac{8\pi\,\eta\, N_h\, \mathrm{NET}^2 f_{\rm sky}}{p_\ell^2 B_\ell^2\, N_{\rm eq}^2(\ell)\, t\, \epsilon_{\rm bi}}\, \kappa_1(\ell) \right). \qquad (8.7)

where ε_bi is the optical efficiency of the bolometric interferometer. Just as in the case of an imager, the formula can be divided into a sample variance part and a noise variance part. One can identify several parameters in formula (8.7) that depend on the scanning strategy: the total sky coverage fraction f_sky and the apodization factor η depend on it directly, while the noise-equivalent temperature on the reconstructed map increases with the coverage (and therefore depends on the scanning strategy indirectly).
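A small sketch of formula (8.7) is given below. The instrument numbers (NET, optical efficiency, observation time, number of equivalent baselines, the σ_ℓ of the bandwidth-smearing factor) are placeholders, and N_eq is taken as a constant although it is really a function of ℓ.

import numpy as np

# Sketch of formula (8.7); all instrument numbers are placeholders.
def kappa1(ell, dnu_over_nu=0.25, sigma_ell=30.0):
    """Bandwidth-smearing factor."""
    return np.sqrt(1.0 + (dnu_over_nu * ell / sigma_ell) ** 2)

def delta_C_bi(ell, C_ell, f_sky, N_h=400, N_eq=50.0, NET=300e-6, t_obs=3e7,
               eff=0.3, eta=1.6, delta_ell=20, p_ell=1.0, B_ell=1.0):
    noise = (8 * np.pi * eta * N_h * NET**2 * f_sky * kappa1(ell)
             / (p_ell**2 * B_ell**2 * N_eq**2 * t_obs * eff))
    return np.sqrt(2 * kappa1(ell) / ((2 * ell + 1) * f_sky * delta_ell)) * (C_ell + noise)

print(delta_C_bi(ell=100, C_ell=0.0, f_sky=0.01))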

We want to test the validity of formula (8.7) in order to use it for optimizing the scanning strategy parameters and avoid too heavy a Monte-Carlo. We run 100 noise-only simulations (with zero CMB signal) for 4 sets of scanning strategy parameters: the azimuth angular speed is 2.6°/s and the azimuth range Δaz takes the values 15, 25, 35 and 45°. This value of the angular speed was chosen because, as will be shown in the next section, the minimum of ΔC_ℓ^BB is located close to it. Note that each simulation requires approximately 10 CPU hours and more than 100 GB of memory, so it can only be done on a supercomputer (NERSC or CURIE). The QUBIC simulations are really heavy, and this is precisely why we insist on using the formula instead of a Monte-Carlo.

For practical use, the noise term of (8.7) is replaced with the noise deviation measured from the reconstructed Q and U maps, weighted by the coverage. This is necessary because, for a realistic scanning strategy, the direct application of the formula is not trivial: the NET is the noise equivalent temperature on the reconstructed map and we do not really know how it depends on the scanning strategy and the map-making. It is easier to estimate it with Monte-Carlo simulations that include all those factors. We estimate the noise term of formula (8.7) as

\frac{8\pi\,\eta\, N_h\, \mathrm{NET}^2 f_{\rm sky}}{N_{\rm eq}^2(\ell)\, t\, \epsilon_{\rm bi}}\, \kappa_1(\ell)\, w_{\rm pix}(\ell)^{-1} = \sigma^2_{\rm noise}\, S_{\rm pix}, \qquad (8.8)

where S_pix is the pixel area and σ_noise is the noise deviation weighted by the normalized coverage c_n(n). The noise map is the residual map (which is equal to the reconstructed map when the input sky is just zero).
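A minimal sketch of the coverage-weighted noise deviation entering (8.8) is shown below; the residual map and the coverage are random placeholders standing in for the reconstructed Q or U residuals.

import numpy as np

# Coverage-weighted standard deviation of a residual map, as used for sigma_noise.
def weighted_noise_std(residual, coverage):
    w = coverage / coverage.max()                       # normalized coverage c_n
    mean = np.sum(w * residual) / np.sum(w)
    return np.sqrt(np.sum(w * (residual - mean) ** 2) / np.sum(w))

rng = np.random.default_rng(1)
cov = rng.random(10000)
res = rng.normal(0.0, 1.0, 10000) / np.sqrt(np.clip(cov, 0.05, None))  # noisier at low coverage
print("sigma_noise ~", weighted_noise_std(res, cov))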

The terms of formula (8.7) that depend on the scanning strategy are shown in figure 8.3. Here we vary Δaz, keeping all the other parameters constant. We see that, as expected, the sky fraction grows as the sweeps get wider. With a broader field the noise increases, just as predicted by (8.2). The behaviour of the apodization factor η is also well understood: with a broader field of view its shape becomes more top-hat-like, so the apodization factor decreases; this reduction is, however, quite small.

Figure 8.3: Sky fraction (top), apodization factor (middle) and σ_noise (bottom) as a function of the scanning strategy, namely of the azimuth range Δaz.

We compare the formula (8.7) with the results of the spectra reconstruction in figure 8.4. From this plot we can conclude the following:

• Both the formula and the spectra reconstruction give errorbars of the same order, which means that formula (8.7) is correct. Although the exact behaviour of the blue (analytic) line is not completely reproduced by the Monte-Carlo (green and red lines) due to the large errorbars, we can follow the analytic formula (8.7) when looking for the optimal set of scanning strategy parameters.

• Here we also see again that Xpure and Spice give results of similar quality. This is another confirmation of the previous conclusion that both the Xpure and Spice methods are good enough for the QUBIC needs.

• The relative changes of ΔC_ℓ are about 10%, so the effect gained by choosing an optimal scanning strategy is fairly strong. To define its optimal parameters one needs to perform a scan over the parameter space.

• According to the blue line, the smallest errorbars are expected when the coverage is small, that is, we need deep observations of a small field. The calculations presented here do not take into account the increase of the sample variance when the covered sky fraction f_sky is reduced. With the sample variance one can expect the minimum to move towards larger coverages, but only slightly. The sample variance depends on the value of r, which we do not know (the current upper limit is r < 0.07 at 95% confidence level [11]).

Figure 8.4: Study of the dependence of ΔC_ℓ^bi on the scanning strategy, namely on the azimuth range, with all other scanning strategy parameters unchanged. The Xpure and Spice lines are the standard deviations of the reconstructed spectra in the wide ℓ band from 50 to 150 for the corresponding spectra reconstruction methods. The ΔC_ℓ^bi line is calculated with formula (8.7).

8.5 Scan of scanning strategy parameters

To define the optimal scanning strategy parameters we perform a scan: we vary the parameters and compare the results. For the simulations we use a noise-only sky (the CMB signal is zero), observations from the Puna desert, an observation time of 1 year, white noise and monochromatic observations at 150 GHz. We run one simulation for each set of scanning strategy parameters, which means we have many simulations to run. We need to economize the computing resources, and one way to do so is to reduce the frequency at which we sample the sky (we call it the sampling frequency). The dependence

of ΔC_ℓ on the sampling frequency, obtained with realistic Monte-Carlo simulations, is shown in figure 8.5. One can use the following reasoning when choosing the sampling frequency: the instrument sweeps across sky pixels of typical angular size θ_res (set by the resolution) with angular speed ω, so the sampling frequency should be at least ω/θ_res to take one sample per pixel. Another factor that allows us to reduce the sampling frequency is the width of the synthesized beam peak, which is ∼24′ at 150 GHz; for comparison, the angular size of the sky pixels of a HEALPix map with nside 256 is 14′. From the results of the study of this issue with simulations, shown in figure 8.5, one can conclude that the choice of a sample time of 0.5 s is relatively safe for simulations. It makes the Monte-Carlo 20 times lighter with respect to realistic simulations with a sampling frequency of 100 Hz.
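The arithmetic behind this choice can be made explicit; the numbers below are the ones quoted in the text (an angular speed of 2.6°/s, a 14′ pixel and a 24′ synthesized-beam peak).

# Rough sampling-rate requirements for the pointing simulations.
omega      = 2.6          # azimuth angular speed [deg/s]
theta_pix  = 14.0 / 60.0  # HEALPix nside=256 pixel size [deg]
theta_peak = 24.0 / 60.0  # synthesized-beam peak width at 150 GHz [deg]

print("one sample per map pixel needs  >= %.1f Hz" % (omega / theta_pix))
print("one sample per beam peak needs  >= %.1f Hz" % (omega / theta_peak))
print("sample time 0.5 s corresponds to   2.0 Hz")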

Figure 8.5: Normalized power spectrum D_ℓ^{BB} ≡ ℓ(ℓ+1) C_ℓ^{BB} / (2π) and its errorbars in the region of the BB spectrum peak for different sample times (inverse of the sampling frequency).

Now let’s return back to the scan of the scanning strategy parameters. We vary three
parameters: azimuthal angular speed in ranges from 0.5 to 3 degree per second with step
0.5, delta azimuth in ranges from 20 to 50 degree with step 10 and time during which
the instrument observes on constant elevation: 30, 60, 90 and 120 minutes. Dead time
on each edge of each sweep is 1 second. The results of this scan are presented on the
pictures 8.6 (apodization factor η), 8.7 (fraction of covered sky fsky ), 8.8 (noise variance),
8.9 (BB-spectrum errorbars in the bin ` ∈ [50, 150] due to the (8.7) formula) and 8.10
(formula errorbars with sample variance).
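The scan itself is just a loop over the parameter grid. The skeleton below only builds the grid; run_one_simulation() is a hypothetical wrapper around the pointing generation, the map-making and formula (8.7), not an existing function.

import itertools

# Skeleton of the scan over the scanning-strategy parameter grid described above.
speeds    = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # azimuth angular speed [deg/s]
delta_azs = [20, 30, 40, 50]                 # azimuth range [deg]
t_els     = [30, 60, 90, 120]                # time on constant elevation [min]

grid = list(itertools.product(speeds, delta_azs, t_els))
print(len(grid), "parameter sets to simulate")   # 6 x 4 x 4 = 96

# for speed, daz, t_el in grid:
#     dCl = run_one_simulation(speed, daz, t_el)   # hypothetical wrapper around the
#     ...                                          # pointing, map-making and eq. (8.7)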

Figure 8.6: Apodization factor η for the studied scanning strategies.

Figure 8.7: Sky fraction of the coverage field, f_sky, for the studied scanning strategies.

Figure 8.8: Noise variance σ²_noise for the studied scanning strategies.

Figure 8.9: BB power spectrum errorbars for the studied scanning strategies according to formula (8.7) with C_ℓ^BB = 0.

Figure 8.10: BB power spectrum errorbars for the studied scanning strategies according to formula (8.7) with C_ℓ^BB corresponding to r = 0.02.

Let’s analyse the obtained results.

• The apodization factor is almost constant, with a value of ∼1.6 for all the scanning strategies, although the variations of η roughly follow those of the sky fraction f_sky: the larger the coverage field, the better the apodization. This behaviour is quite logical: when the coverage field is large, its shape becomes more top-hat-like, hence the apodization factor decreases.

• The noise variance is proportional to the sky fraction and does not depend on the angular speed. This is what is expected for white noise. We should expect that in simulations that include the 1/f noise the noise variance will depend strongly on the angular speed, because the noise filtering works better at high angular speeds.

• The minimum of ΔC_ℓ is obtained with a time at constant elevation of 1 hour. There are two minima: one at an angular speed of 1 degree per second and Δaz = 20°, and another at 2.5°/s and 30°. We suppose that the absence of a full column of minima (as in figure 8.7) is explained by the fact that sometimes, just by chance, the parameters fall on some exceptionally good combination. We will see that this effect is much less important when we consider observations with 1/f noise.

• When we add some sample variance (see figure 8.10), the minima remain in the same places. This is because the sample variance does not dominate for small r and high noise; otherwise, as we said, the minima would move towards the right (that is, towards high f_sky).

The next step is to add 1/f noise to the simulations and run the scan again. We show that 1/f noise with f_knee = 1 Hz changes the results dramatically; see figure 8.11 for ΔC_ℓ without sample variance and figure 8.12 with the sample variance corresponding to r = 0.02.

Figure 8.11: BB power spectrum errorbars for the studied scanning strategies according to formula (8.7) with C_ℓ^BB = 0; 1/f noise with f_knee = 1 Hz included.

Let’s analyze this result. Clearly, the preferred scanning strategy is: fast rotation on
azimuth and low ∆az. The scanning frequency is inverse of the period of sweeping,
which is equal to 2∆az divided by the azimuth angular speed plus the dead time. In
the best case of fast and short sweeps it is about 0.1Hz. And the filter works more
efficiently if this frequency approaches the fknee . Of course, it is impossible to sweep
so fast to achieve 1Hz scanning frequency, because then the acceleration to the parts of
the instrument is too high. But we can do our best by choosing the fast sweeping. The
minimum (which is quite hard to see by eye) is at the time on constant elevation 60
minutes, angular speed 3◦ per second and ∆az = 30◦ .

Figure 8.12: BB power spectrum errorbars for the studied scanning strategies according to formula (8.7) with C_ℓ^BB corresponding to r = 0.02; 1/f noise with f_knee = 1 Hz included.

Note also that, since the minimum is quite broad, these parameters can be changed slightly without ruining the optimality. Let us discuss the allowed ranges for the parameters. To make the difference between the scanning strategies more evident we plot the value

p = \frac{\Delta C_\ell - \min(\Delta C_\ell)}{\max(\Delta C_\ell) - \min(\Delta C_\ell)} \cdot 100\%, \qquad (8.9)

which characterizes the deviation of a scanning strategy from the optimal one as a percentage of the difference max(ΔC_ℓ) − min(ΔC_ℓ); see figure 8.13. Figure 8.4 shows that the sensitivity (ΔC_ℓ from the reconstructed spectra, green and red lines on that figure) is very tolerant to the scanning strategy, even though the change of the sensitivity according to the formula (blue line) can be quite strong. One can say that all the strategies marked with blue and light blue colours in figure 8.13 are fine (p < 3-4%), that is, all the strategies with an azimuth angular speed of 3°/s and those with an angular speed of 2.5°/s and Δaz < 40°. We summarize the optimal scanning strategy parameters and their allowed ranges in table 8.1.

Figure 8.13: Quality of the scanning strategies. Smaller values (bluer bins) correspond to better scanning strategies. The plotted value is defined by (8.9), where the ΔC_ℓ values are taken from figure 8.12.

Table 8.1: Optimal scanning strategy parameters with the allowed ranges within which they can be changed without spoiling the sensitivity too much.

Parameter                        | Optimal value               | Allowed ranges
Time on constant elevation       | 60 min                      | 30 - 120 min
Azimuth angular speed            | 3° per s                    | 2.5 - 3° per s
Azimuthal range Δaz              | 30°                         | 20-50° with azimuth angular speed 3° per s, and 20-30° with 2.5° per s
Rotation around the optical axis | No rotation during scanning | From time to time
Half-wave plate rotation range   | 0 - 90°                     |
Half-wave plate rotation step    | 11.25°                      | ≤ 11.25°

8.6 Pointing accuracy

It is crucially important to know the pointing correctly. However, due to the mechanical
imperfections the planned pointing could be spoiled. In practice in means the angles of
azimuth and elevation rotations az(t) and el(t) are known with some errors. If the error
for az(t) and el(t) is larger than the resolution of the map, then the acquisition operator
H does not correspond to reality and the correct reconstruction of the map becomes
impossible.

We study the pointing accuracy problem with fast simulations in the monochromatic case for the 150 GHz band. We simulate noiseless observations with a given error on az(t) and el(t); the error is Gaussian with a given standard deviation. The results of this study are shown in figure 8.14. We plot the standard deviation of the residual Q and U maps (input convolved CMB map minus the reconstructed one) under the coverage mask COV > 0.2 max(COV) as a function of the pointing error. The planned pointing accuracy of the mount system is 3 arcminutes, but it can be improved to 20 arcseconds using a stellar sensor; this will be done offline while analyzing the data. Both values are shown in figure 8.14 with vertical dashed lines. One can see that a pointing inaccuracy at the planned level of 20 arcseconds does not spoil the reconstruction of the maps. This is easily understood from the fact that the angular resolution of QUBIC is about 20 arcminutes, 60 times larger than the pointing error, so the pointing error does not change the signal much.
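The jitter injection used in this test can be sketched as follows; add_gaussian_jitter() is the only ingredient shown here, the map-making itself being the machinery of chapter 6.

import numpy as np

# Add Gaussian jitter to the planned pointing angles (both in degrees).
def add_gaussian_jitter(az, el, sigma_arcsec, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    s = sigma_arcsec / 3600.0                       # arcseconds -> degrees
    return az + rng.normal(0.0, s, az.shape), el + rng.normal(0.0, s, el.shape)

az = np.linspace(-10.0, 10.0, 1000)
el = np.full(1000, 50.0)
az_j, el_j = add_gaussian_jitter(az, el, sigma_arcsec=20.0)
print("rms azimuth error [arcsec]:", np.std(az_j - az) * 3600.0)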

Even if the residuals on the reconstructed maps are not large, the pointing inaccuracy can induce additional E-to-B leakage. We study it by building the power spectra of the reconstructed maps for the simulations described above. The results are shown in figure 8.15. Again, a pointing accuracy of 20 arcseconds gives a satisfactory result: the increase of the leakage is much smaller than the spectrum errorbars. It means that the systematic effect from the pointing inaccuracy is very low; later in this thesis we do not account for it and assume perfect pointing.

8.7 Conclusions

In this chapter we discussed how the sensitivity of a bolometric interferometer in general, and of QUBIC in particular, changes with the scanning strategy parameters. We adopt the common approach of constant-elevation scans and adjust the angular speed, the azimuthal range and the other parameters to reach the choice that best mitigates the sample and noise variances. The 1/f noise puts very strong limitations on the scanning strategy. To effectively filter out the low-frequency component of the noise with

Figure 8.14: The standard deviation of the residual map as a function of the error on the pointing angles az(t) and el(t). The errorbars come from different realizations of the CMB and of the pointing errors (we use 10 realizations per point). The vertical dashed lines show the planned level of pointing accuracy for QUBIC: the right one, at 2 arcminutes, shows the mount system pointing accuracy, and the left one, at 20 arcseconds, shows the accuracy of the stellar sensor. The horizontal dashed lines highlight the relative increase of the residuals at 20 arcseconds and 2 arcminutes.

the map-making, we need fairly fast sweeping. Possibly, we would also apply a high-pass filter to the TOD; this is an issue for future studies.

We check the validity of formula (8.7) from [98] for the ΔC_ℓ of a bolometric interferometer with Monte-Carlo simulations and then use this formula to adjust the scanning strategy parameters. For this we perform a scan over the parameter space and look for the minimum of ΔC_ℓ, thus defining the optimal scanning strategy; its parameters are shown in table 8.1.

The scanning strategy will probably need to be revised when more systematic effects can be included, such as a more precise model of the atmospheric contamination, the noise from the ground, etc.

We also study the issue of the pointing accuracy and show that the target accuracy of 20 arcseconds allows us to reconstruct the CMB maps and power spectra without a strong increase of the systematics.

Figure 8.15: The BB spectrum mean value (line) and standard deviation over 10 realizations (errorbars) in the ℓ bin from 50 to 150, as a function of the error on the pointing angles az(t) and el(t). Values relative to the first point are plotted. The vertical dashed lines show the planned level of pointing accuracy for QUBIC: the right one, at 2 arcminutes, shows the mount system pointing accuracy, and the left one, at 20 arcseconds, shows the accuracy of the stellar sensor. The horizontal dashed lines highlight the relative increase of the residuals at 20 arcseconds and 2 arcminutes.
Chapter 9

Sensitivity of QUBIC

This chapter is dedicated to the discussion of the cosmological parameter reconstruction from the power spectra. We run realistic simulations for QUBIC and predict the sensitivity to r.

9.1 Cosmological parameter estimation

In the previous chapters we described how to analyze the raw TOD of QUBIC and reconstruct maps of the sky at various frequencies, how to distinguish the CMB signal from the other components of the microwave emission and how to reconstruct the power spectra from the measured CMB maps. At each step we reduced the amount of data significantly. Now the last step, which constitutes the goal of any physical experiment, is the estimation of the parameters of physical laws. In the QUBIC experiment we are interested in the tensor-to-scalar ratio r. Let us consider the techniques used to estimate the cosmological parameters from the power spectra.

9.1.1 Likelihood approach to the parameter estimation problem

9.1.1.1 From CMB map to the cosmological parameters

The cosmological models predict the statistical properties of the temperature and polarization fluctuations of the CMB. Thus it seems straightforward to estimate the cosmological parameters directly from the measured map.

We can construct a likelihood function which describes the probability of measuring the CMB temperature and polarization maps, written as a data vector d, given a vector of cosmological parameters Θ. In the case of Gaussian fluctuations it reads [120]
\mathcal{L}(\Theta) \equiv P(d|\Theta) \propto \frac{1}{\sqrt{|C|}} \exp\left( -\frac{1}{2}\, d^T C^{-1} d \right), \qquad (9.1)

where C is the pixel covariance matrix with elements

C_{ij} = \langle d(n_i)\, d(n_j) \rangle. \qquad (9.2)

As described in (7.5), this covariance matrix depends on the power spectrum of the fluctuations. The power spectrum, in its turn, depends on the fundamental cosmological constants: Ω, H_0, etc. Thus, in principle, it is possible to reconstruct the cosmological parameters directly from the measured maps. However, the direct application of the likelihood is extremely cumbersome, as one has to handle very heavy matrices of size n_pix × n_pix. In fact, the estimation of cosmological parameters from maps has only been done for a few experiments; see, for example, [121]. Instead, one usually uses the power spectra, which are much more illustrative and easier to handle than the plain maps of the CMB fluctuations.

9.1.1.2 From C` to the cosmological parameters

Another approach divides the data analysis into two steps: first we reconstruct the power spectra from the maps, then we estimate the cosmological parameters from the power spectra. This is possible because, for Gaussian fluctuations, the power spectrum contains the complete information about the statistical properties of the fluctuation field; thus the power spectrum does not lose information in comparison with the maps.

An often-used approach to estimating the parameters Θ uses Monte-Carlo sampling of the parameter space with Markov chains (MCMC, Monte-Carlo Markov chain) and the Metropolis-Hastings algorithm [122, 123]. The algorithm of the search for Θ is the following: suppose we have a starting point Θ_1 in the Θ space. It can be obtained from previous experiments, from theoretical expectations or from other available prior information. The method is based on creating a sequence of parameter estimators Θ_n, called a chain, with probability density function

p(\Theta | C_\ell, \mathcal{M}) = \mathcal{L}(\Theta)\, \frac{p(\Theta | \mathcal{M})}{p(C_\ell | \mathcal{M})}, \qquad (9.3)

where M is the cosmological model within whose framework we estimate the cosmological parameters. One can show that the Markov chain converges to a stationary state, thus giving the set of parameters Θ according to the data d.
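For illustration, a minimal Metropolis-Hastings sampler for a one-dimensional parameter is sketched below, with an arbitrary Gaussian toy log-likelihood standing in for (9.3) and an implicit flat prior; it is not the CosmoMC machinery discussed next.

import numpy as np

# Minimal Metropolis-Hastings sampler for a single parameter, with a Gaussian toy
# log-likelihood (arbitrary numbers) and an implicit flat prior.
def log_like(theta):
    return -0.5 * ((theta - 0.02) / 0.01) ** 2      # toy target, not a QUBIC result

def metropolis_hastings(theta0, n_steps=20000, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    chain, theta, ll = [], theta0, log_like(theta0)
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step)        # symmetric Gaussian proposal
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:     # accept with prob min(1, L'/L)
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

chain = metropolis_hastings(0.0)
print("posterior mean and std:", chain.mean(), chain.std())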

The code CosmoMC is a standard tool for cosmological parameter inference from given spectra [124]. The CosmoMC package uses the CAMB library [69] to compute the power spectra for a given set of parameters.

9.2 Realistic Monte-Carlo of QUBIC

We run realistic simulations for QUBIC with the following conditions: 2 years of observations from the Puna desert site, with the observational efficiency reduced by a factor 3/4 due to the summertime, which is too wet for CMB observations (see figure 3.10). The daily observation time is about 7 hours 30 minutes, which is the time during which the QUBIC field (see figure 3.12) is within the allowed QUBIC elevation range (see figure 3.13), minus the dead time of 1 second at each edge of each sweep. The scanning strategy is the one defined in the previous chapter and summarized in table 8.1.

The relative bandwidth is 0.25 and the centres of the two QUBIC frequency bands are at 150 GHz and 220 GHz. The observations in each band are modeled as a sum of monochromatic observations, as described in chapter 5; the number of frequencies used to sample the polychromatic wide band is 15 for the 150 GHz band and 20 for the 220 GHz band. We then reconstruct the simulated TOD as described in chapter 6, building 2 sub-band maps for the 150 GHz band and 3 for the 220 GHz band. Finally, we disentangle the CMB signal from the 5 reconstructed maps using the ILC method introduced in chapter 6.

The observed sky is modeled according to the theoretical CMB power spectra with the latest measured values of the cosmological parameters [6]. The value of r is set to zero and the lensing is zero too. We also model the dust foreground as described in 6.2.1. The noise is modeled according to the values of the atmospheric noise for the Puna desert and the intrinsic noise of the QUBIC detectors. The knee frequency of the 1/f noise is set to 1 Hz.

The true full simulations of QUBIC imply taking data from the detectors at a frequency of 100 Hz (we call it the sampling frequency). Unfortunately, at the time of writing this thesis it was impossible to run such simulations because of hardware problems at NERSC. We managed to run lighter simulations with a sampling frequency of 10 Hz and then scaled down the reconstructed residuals by a factor \sqrt{100\,\mathrm{Hz}/10\,\mathrm{Hz}}. Let us consider the TOD of one detector and only the central peak of the synthesized beam. We take data from the sky in directions separated by an angle ω/f, where ω is the azimuth angular speed and f is the sampling frequency. In our case this ratio is equal to 18 arcminutes, while the resolution of the map, that is, the size of the map pixels, is ∼14 arcminutes. That is, we do not hit

every pixel of the sky, and hence these simulations are slightly suboptimal. One can hope to get a better result with full simulations of QUBIC at f = 100 Hz.

9.2.1 Map-making

We run 4 realizations of the sky and noise and reconstruct 5 maps at different frequencies. The results are shown in figure 9.1. One can see that all the sub-bands are nicely reconstructed with flat residuals. Again, we see badly reconstructed features on the I residual maps; this strange behaviour of the map-making algorithm is not understood yet. However, the Q and U maps are nicely flat. The Q input, output and residual maps are shown in figure 9.2 for a clearer view. The noise on the reconstructed Q and U maps corresponds to 1.4 µK for the 140 GHz sub-band, 1.6 µK for the 159 GHz sub-band,

9.2.2 Component separation

As described in chapter 6, we use the internal linear combination method (ILC) to disentangle the CMB signal from the foregrounds. The result of ILC applied to the maps from figure 9.1 is shown in figure 9.3. We would like to mention again that, though the results are very satisfactory, one can still hope to get a better result using other methods of component separation. The ILC code implemented in the framework of this thesis is very sketchy and might be quite suboptimal.
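For reference, the core of a textbook ILC is only a few lines: minimize the variance of a linear combination of the sub-band maps under the constraint of unit response to the CMB, which is flat across the sub-bands in thermodynamic temperature units. The sketch below is generic and is not the exact code used for this thesis; the toy foreground scaling is arbitrary.

import numpy as np

# Generic ILC: variance-minimizing weights with unit response to the CMB.
def ilc_weights(maps):
    """maps: array (n_freq, n_pix) in CMB thermodynamic temperature units."""
    a = np.ones(maps.shape[0])          # CMB mixing vector (flat across sub-bands)
    C = np.cov(maps)                    # empirical frequency-frequency covariance
    Cinv_a = np.linalg.solve(C, a)
    return Cinv_a / (a @ Cinv_a)

def ilc_clean(maps):
    return ilc_weights(maps) @ maps     # CMB estimate, foregrounds suppressed

# toy example: CMB + one dust-like foreground + white noise in 5 sub-bands
rng = np.random.default_rng(2)
cmb = rng.normal(0.0, 1.0, 5000)
scaling = np.array([0.5, 0.8, 1.5, 2.5, 4.0])          # arbitrary foreground scaling
maps = cmb + np.outer(scaling, rng.normal(0.0, 1.0, 5000)) \
           + rng.normal(0.0, 0.2, (5, 5000))
print("residual rms:", np.std(ilc_clean(maps) - cmb))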

9.2.3 Spectra reconstruction

Now, having a pure CMB map, we can run the spectra reconstruction as described in chapter 7. The results of the spectra reconstruction with the Xpure method are shown in figure 9.4. The BB spectrum is biased due to the noise and the residuals left after component separation. In chapter 7 we said that both Xpure and Spice are good; however, when we apply Spice to the full simulations, we get very bad results, much worse than with Xpure. We do not show Spice here, but its bias is approximately 5 times higher and its errorbars jump strangely from bin to bin.

9.2.4 Parameter estimation

In section 9.1.1.2 we discussed the general approach to parameter estimation from the power spectra. However, in the case of primordial B-mode observations we care about only one single parameter, the tensor-to-scalar ratio r. This fact simplifies the whole procedure


Figure 9.1: Reconstruction of multiple sub-bands within each of the QUBIC wide bands in the realistic Monte-Carlo, using the optimal scanning strategy. The sub-band central frequencies are [140.0, 158.8, 200.9, 218.5, 237.6] GHz, plotted respectively on the sub-plots A, B, C, D and E. Input convolved maps, output maps and their difference are plotted for each frequency and for the I, Q and U Stokes parameters. Note that the difference maps are shown with a smaller colour scale.

Figure 9.2: Repetition of Q maps from the figure 9.1



Figure 9.3: Reconstruction of the CMB signal from the maps presented in figure 9.1 by the ILC method of component separation. Input convolved maps, output maps and their difference are plotted for the I, Q and U Stokes parameters.

significantly: instead of exploring a multidimensional parameter space, we calculate the likelihood (9.1) as a function of r only:

\mathcal{L}(r) = \exp\left[ -\frac{1}{2} \sum_b \frac{(C_\ell(r) - \hat{C}_{\ell,b})^2}{\Delta \hat{C}_{\ell,b}^2} \right], \qquad (9.4)

where C_ℓ(r) is the theoretical spectrum with standard cosmological parameters and a tensor-to-scalar ratio given by r, Ĉ_{ℓ,b} is the measured binned spectrum with 1σ errorbars ΔĈ_{ℓ,b}, and the summation over b runs over all the bins of the reconstructed spectrum.
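In practice the likelihood (9.4) is simply evaluated on a grid of r values. The sketch below uses a toy linear model C_ℓ(r) and toy "measured" bins so that it runs standalone; in the real analysis C_ℓ(r) comes from CAMB and Ĉ_{ℓ,b}, ΔĈ_{ℓ,b} come from the reconstruction of figure 9.4.

import numpy as np

# Evaluate the likelihood (9.4) on a grid of r values (toy ingredients throughout).
def C_ell_model(r, ell_bins):
    return r * np.ones_like(ell_bins, dtype=float)   # toy: amplitude proportional to r

ell_bins = np.arange(50, 200, 20)
C_hat    = np.full(ell_bins.size, 0.03)              # toy "measured" binned spectrum
dC_hat   = np.full(ell_bins.size, 0.02)              # toy 1-sigma errorbars

r_grid = np.linspace(0.0, 0.2, 400)
chi2 = np.array([np.sum(((C_ell_model(r, ell_bins) - C_hat) / dC_hat) ** 2)
                 for r in r_grid])
like = np.exp(-0.5 * (chi2 - chi2.min()))            # likelihood up to normalization
like /= np.trapz(like, r_grid)
r_mean  = np.trapz(r_grid * like, r_grid)
r_sigma = np.sqrt(np.trapz((r_grid - r_mean) ** 2 * like, r_grid))
print("r = %.3f +/- %.3f" % (r_mean, r_sigma))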
The likelihood for the spectra from figure 9.4 is shown in figure 9.5. The likelihood peak has a Gaussian shape with a mean at r = 0.035 and σ = 0.012. The result is biased (the peak is not centred at zero); this bias comes from the inaccurate component separation. Note that the ILC method of component separation implemented in the framework of this thesis is quite rudimentary: it was used mostly to test the QUBIC sensitivity to r. The bias was already seen in the power spectrum (see figure 9.4), so it

Figure 9.4: Reconstruction of the power spectra. The errorbars are the standard deviation computed from the different realizations. The solid green curves are the theoretical spectra.

is expected that the estimation of r is biased. However, the σ of 0.012 is already a very promising result: after debiasing, it corresponds to the sensitivity of QUBIC.

Table 9.1 gives a summary of the main ground-based and balloon-borne projects aiming at measuring the primordial B modes. We include in this table the obtained QUBIC sensitivity to r of 0.012. One can see that with this sensitivity QUBIC is a competitive project in the field.

Figure 9.5: Likelihood function for the single parameter r for the reconstructed spectrum of figure 9.4. The maximum of the likelihood is at r = 0.035 and the σ of the peak is 0.012.

Table 9.1: Summary of the main ground and balloon projects aiming at measuring B modes [14].

Project name | Country and type | Location   | Status   | Freq. (GHz)             | ℓ range | Reference | σ(r) no fg. | σ(r) with fg.
QUBIC        | France, ground   | Puna       | 2018     | 140, 159, 201, 219, 238 | 30-200  | [14]      | 0.01        | 0.012
BICEP3/Keck  | U.S.A., ground   | South Pole | Running  | 95, 150, 220            | 50-250  | [125]     | 0.0025      | 0.013
CLASS        | U.S.A., ground   | Atacama    | ≥ 2016   | 38, 93, 148, 217        | 2-100   | [126]     | 0.0014      | 0.003
SPT3G        | U.S.A., ground   | South Pole | 2017     | 95, 148, 223            | 50-3000 | [127]     | 0.0017      | 0.005
AdvACT       | U.S.A., ground   | Atacama    | Starting | 90, 150, 230            | 60-3000 | [128]     | 0.0013      | 0.004
Simons Array | U.S.A., ground   | Atacama    | ≥ 2017   | 90, 150, 220            | 30-3000 | [129]     | 0.0016      | 0.005
LSPE         | Italy, balloon   | Arctic     | 2017     | 43, 90, 140, 220, 245   | 3-150   | [130]     | 0.03        |
EBEX10K      | U.S.A., balloon  | Antarctica | ≥ 2017   | 150, 220, 280, 350      | 20-2000 | [131]     | 0.0027      | 0.007
SPIDER       | U.S.A., balloon  | Antarctica | Running  | 90, 150                 | 20-500  | [132]     | 0.0031      | 0.012
PIPER        | U.S.A., balloon  | Multiple   | ≥ 2016   | 200, 270, 350, 600      | 2-300   | [133]     | 0.0038      | 0.008
General conclusions

9.3 Physical problematics and the QUBIC experiment

The standard model of modern cosmology is based on the concept of the Big Bang, which implies the expansion of the Universe from a hot and dense state. The earliest known epoch of the expansion took place approximately 13.8 billion years ago. This model explains very well the properties of the large-scale structures of the Universe, of the relic background radiation and of the light element abundances. However, many questions remain unanswered: what is the nature of dark matter, what is the nature of dark energy, what is the physics of neutrinos, and others. One of the main issues of the Big Bang model is that it does not explain the flatness of the Universe and the fact that it is so homogeneous (the horizon problem). One possible solution to these problems is cosmic inflation, a period in the very early Universe when it expanded with acceleration. The accelerated expansion necessarily generates gravitational waves which, propagating through the expanding Universe, leave a particular imprint on the polarization of the CMB: the B modes. It is often said that the B modes are a "smoking gun" of inflation.

QUBIC is a very promising project for measuring the primordial B modes. It explores the novel concept of bolometric interferometry: it images the sky using a complex synthesized beam formed by the interference of the beams from multiple pupils. It inherits the advantages of both imagers and interferometers, the two kinds of instruments used for CMB observations: from imagers it takes the high sensitivity allowed by bolometric detectors, and from interferometers it takes the self-calibration technique, which allows us to decrease the systematics significantly. Moreover, thanks to the fact that the shape of the synthesized beam depends on the frequency of the light, we are able to reconstruct multiple sub-bands within each wide band of QUBIC, thus significantly improving the ability of QUBIC to perform component separation.

The QUBIC project is currently in its construction phase; the first light is expected in 2018. QUBIC is a ground-based experiment located in the Puna desert in Argentina. The frequency bands are centred at 150 and 220 GHz, each with a flat bandpass and a 0.25 relative

bandwidth. The sky is observed through an array of 400 corrugated horns. The polarization of the incoming light is modulated by a rotating half-wave plate and then passes through a polarizing grid. Thus the total power signal that passes towards the inner part of the instrument is defined only by the rotation angle of the half-wave plate and does not change even if the inner parts affect the polarization. After passing through the horns, the light is focused by two mirrors onto the focal planes, which are tiled with highly sensitive, photon-noise-limited bolometric detectors.

In the framework of this thesis we worked on the definition of the scanning strategy parameters that allow us to increase the sensitivity of QUBIC. The parameters of the optimal scanning strategy are shown in table 8.1. We have also shown that the target pointing accuracy for QUBIC is good enough to measure the B modes.

9.4 Overview of QUBIC data analysis pipeline

The QUBIC data analysis pipeline starts from the time-ordered data (TOD). The TOD of QUBIC will probably be filtered with a high-pass filter; in the framework of this thesis we do not analyse this possibility. We model the synthesized beam of QUBIC as a sum of peaks distributed on a 2-D Dirac comb and modulated by the primary Gaussian beam. With this model for the synthesized beam, the acquisition operator becomes sparse and thus the map-making problem becomes computationally tractable.

The multi-band map-making exploits the fact that the synthesized beam is different at different frequencies. The total acquisition operator for each focal plane can thus be written as a sum of narrow-frequency-band acquisition operators, and therefore we are able to reconstruct multiple maps for these narrow bands. The idea of a fusion acquisition helps to solve the problem of the poor constraint on the pixels at the edge of the field of view.

The component separation is done with the internal linear combination method (ILC). The refinement of the synthesized beam model, the development of the map-making for the polychromatic case and the implementation of the ILC were done in the framework of this thesis.

The next step in the pipeline is the reconstruction of the power spectra. We analyze the efficiency of the Xpol, Xpure and Spice methods and conclude that Xpure is suitable for the QUBIC needs.

The last step of the data analysis is the estimation of the cosmological parameters, among which the most interesting one for us is the tensor-to-scalar ratio r. We run realistic

simulations and conclude that QUBIC is capable of detecting the primordial B modes with a sensitivity to r of 0.012. QUBIC is a competitive experiment within the third generation of CMB observations and will certainly play an important role in constraining the primordial B-mode amplitude and unravelling the mystery of inflation. As the first bolometric interferometer, it will demonstrate the exceptional abilities of this novel technique and will likely inspire other experiments, like the one proposed in section 6.6: a satellite-borne bolometric interferometer with a wide frequency band from 60 to 600 GHz and an unprecedented frequency resolution.

9.5 For future studies and development

Some known questions remain unsolved for QUBIC. The synthesized beam approximation should be revised: the approach we currently use does not allow us to model any more features of the beam than we already take into account. In particular, it does not allow us to simulate the TOD with a complete synthesized beam and analyse it using an approximated one in order to check the validity of our approximation; in other words, we do not know whether the approximation we use is valid or not. The only way we can propose to simulate the TOD with the realistic synthesized beam, at least for a very short observation period, is to use the synthesized beam as it is, without building any model of it. Computationally, however, this problem is extremely heavy.

The synthesized beam for the polychromatic case differs from the realistic one (see figures 5.6 and 5.7). The reason is probably the same: we neglect the minor features of the synthesized beam in between the peaks. This issue requires further study.

The problem of the poor frequency resolution in the 150 GHz band is not completely understood (see figure 6.3). We propose a possible explanation, namely that the angular size of the focal plane is too small for the lower frequencies, but we have not checked it with a Monte-Carlo.

We are trying to develop a QUBIC-Planck fusion acquisition that should help to reconstruct the map at the edge of the QUBIC field of view. But the systematics that could be present in the Planck map may induce E-to-B leakage in the QUBIC results; this needs to be explored. Another open question concerning the fusion acquisition is the fact that it does not work well for the multi-band acquisition. Hopefully, if the induced systematic effect is negligible and the code is developed, the fusion map-making will increase the QUBIC sensitivity even more.

The problem of the poor reconstruction of the temperature maps remains unsolved (see figure 9.1). One can probably mitigate this effect by applying the QUBIC-Planck acquisition model. If it turns out that Planck induces a significant amount of systematics in the QUBIC Q and U maps but solves the problem for I, one can try to use the fusion acquisition only for the temperature maps.

In the framework of this thesis we did not discuss an important step in the data analysis pipeline, namely the combination of the daily maps into a single one. Certainly, this should be studied attentively.

The component separation, as it is implemented for QUBIC now, seems to be suboptimal. At the very least, one should check the performance of the ILC more carefully and try out other methods of component separation. Further work on this subject may give better results in disentangling the CMB signal from the total microwave sky emission. With a proper component separation, the bias seen on the reconstructed BB spectrum, and thus on the estimation of r, should vanish: we believe that the source of this bias is our rudimentary implementation of the ILC.

The scanning strategy may be adjusted when the systematics are known more precisely: the noise from the atmosphere and from the ground, etc. We give the ranges within which one can change the scanning strategy parameters, so it is now easy to pick another set of parameters without ruining the sensitivity.

The noise power spectrum should be estimated and subtracted from the QUBIC power spectra. Together with a better component separation, this should remove the bias from the BB power spectrum. Only then will the detection of the B modes be possible.
Résumé général

9.6 Problématique physique et l’expérience QUBIC

Le modèle standard de la cosmologie moderne est basé sur le concept du Big Bang, qui im-
plique l’expansion de l’Univers à partir d’un état chaud et dense. L’époque d’expansion
la plus ancienne connue a eu lieu il y a environ 13,8 milliards d’années. Ce modèle
explique très bien les propriétés des structures à grande échelle de l’Univers, du ray-
onnement de fond et des abondances des éléments lumineux. Cependant, il existe de
nombreuses questions sans réponse: quelle est la nature de la matière noire, quelle est la
nature de l’énergie noire, quelle est la physique des neutrinos et d’autres? L’un des prin-
cipaux problèmes du modèle du Big Bang est qu’il n’explique pas la planéité de l’Univers
et le fait qu’elle soit si homogène (problème d’horizon). Une solution possible pour ces
problèmes est l’inflation cosmique - une période dans l’univers très précoce, lorsqu’elle
s’est développée avec une accélération. L’expansion accélérée génère nécessairement les
ondes gravitationnelles qui, en se propagant à travers l’Univers en expansion, laissent
une empreinte particulière sur la polarisation de fond diffus cosmologique - modes B.
On dit souvent, les modes B sont un “smoking gun" de l’inflation.

QUBIC est un projet très prometteur pour mesurer les modes B primordiaux. Il explore un nouveau concept d'interférométrie bolométrique. Il fait des images du ciel en utilisant un faisceau synthétique complexe formé par l'interférence des faisceaux provenant de plusieurs pupilles. Il hérite des avantages des imageurs et des interféromètres - deux types
d’instruments pour les observations CMB. De l’imageur, il possède une grande sensibilité
grâce à l’utilisation de détecteurs bolométriques, et des interféromètres QUBIC prend
la technique d’auto-étalonnage qui permet de réduire considérablement la systématique.
De plus, grâce au fait que la forme du faisceau synthétisé dépend de la fréquence de la
lumière, nous sommes en mesure de reconstruire de multiples sous-bandes dans chaque
large bande de QUBIC, améliorant ainsi significativement la capacité de QUBIC pour la
séparation des composants.


The QUBIC project is currently in its construction phase. First light should be seen in
2018. QUBIC is a ground-based experiment located in the Puna desert in Argentina. The
frequency bands are 150 and 220 GHz, each with a flat bandpass and a relative bandwidth
of 0.25. The sky is observed through an array of 400 corrugated horns. The polarization
of the incoming light is modulated by a rotating half-wave plate and then passes through
a polarizing grid. Thus, the total power signal transmitted to the internal part of the
instrument is defined only by the rotation angle of the half-wave plate and does not
change even if the internal parts affect the polarization. After passing through the
horns, the light is focused by two mirrors onto the focal planes, tiled with very
sensitive photon-noise-limited bolometric detectors.
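Schematically, and ignoring the synthesized-beam convolution and the overall optical efficiency, the power measured behind the polarizing grid for a half-wave-plate angle \(\theta_{\rm HWP}\) takes the standard form
\[
S(t) \;\propto\; \frac{1}{2}\Big[\, I + Q\,\cos\!\big(4\,\theta_{\rm HWP}(t)\big) + U\,\sin\!\big(4\,\theta_{\rm HWP}(t)\big) \Big],
\]
so the polarization is modulated at four times the plate angle and the measurement is insensitive to any polarization effect occurring after the grid.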

In the framework of this thesis, we worked on the definition of the scanning-strategy
parameters that would increase the sensitivity of QUBIC. The parameters of the optimal
scanning strategy are shown in table 8.1. We have also shown that the targeted pointing
accuracy for QUBIC is sufficient to measure the B-modes.

9.7 Overview of the QUBIC data-analysis pipeline

The QUBIC data-analysis pipeline starts from the time-ordered data (TOD). The QUBIC TOD
will probably be filtered with a high-pass filter; in the framework of this thesis we do
not analyse this possibility. We model the QUBIC synthesized beam as a sum of peaks,
distributed on a 2-D Dirac comb and modulated by the primary Gaussian beam. With this
model for the synthesized beam, the acquisition operator becomes discrete and,
consequently, the map-making problem becomes analytic.
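As a reminder, for a linear acquisition model the map-making reduces to the usual generalized least-squares solution (generic notation, not specific to QUBIC):
\[
d \;=\; H\,x + n,
\qquad
\hat{x} \;=\; \big(H^{T} N^{-1} H\big)^{-1} H^{T} N^{-1} d,
\]
where \(d\) is the TOD, \(H\) the acquisition operator, \(x\) the sky map and \(N\) the noise covariance; in practice the inversion is performed iteratively with a preconditioned conjugate-gradient solver.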

The multi-band map-making exploits the fact that the synthesized beam is different at
different frequencies. The total acquisition operator for each focal plane can thus be
written as a sum of narrow-frequency-band acquisition operators and, consequently, we are
able to reconstruct several maps for these narrow bands. The idea of the fusion
acquisition helps to solve the problem of the poor constraint on the pixels at the edge
of the field of view.
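Schematically, splitting the wide band into narrow sub-bands indexed by \(i\), the data model becomes (same generic notation as above)
\[
d \;=\; \sum_i H_{\nu_i}\, x_{\nu_i} \;+\; n,
\]
and the sub-band maps \(x_{\nu_i}\) are reconstructed jointly from the same TOD.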

The component separation is done with the internal linear combination (ILC) method. The
refinement of the synthesized-beam model, the development of the map-making for the
polychromatic case and the implementation of the ILC were carried out in the framework
of this thesis.

The next step in the pipeline is the reconstruction of the power spectra. We analyse the
performance of the Xpole, Xpure and Spice methods and conclude that Xpure is adequate
for the needs of QUBIC.

The last step of the data analysis is the estimation of the cosmological parameters,
among which the most interesting for us is the tensor-to-scalar ratio r. We run realistic
simulations and conclude that QUBIC is able to detect the primordial B-modes with a
sensitivity on r of about 0.012. QUBIC is a competitive experiment within the third
generation of CMB observations and will certainly play an important role in constraining
the amplitude of the primordial B-modes and unveiling the mystery of inflation. As the
first bolometric interferometer, it will demonstrate the exceptional capabilities of this
new technique and will probably inspire other experiments, such as the one proposed in
section 6.6: a satellite bolometric interferometer with a wide frequency coverage from
60 to 600 GHz and an unprecedented frequency resolution.
Appendix A

QUBIC data analysis package documentation

The QUBIC pipeline was developed to quite an advanced point by Pierre Chanial
[134]. It is a package written in the Python programming language, based on the
Pyoperators [135] and Pysimulators [136] packages, also developed by Pierre. The
Pyoperators package provides tools to use operator notation in a simple way in Python,
and Pysimulators is a set of basic tools to simulate a CMB instrument and its data
acquisition.
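To give a flavour of this operator notation, here is a minimal sketch relying only on basic Pyoperators features (DiagonalOperator, composition with *, transposition with .T) together with numpy; refer to the Pyoperators documentation for the exact API:

import numpy as np
from pyoperators import DiagonalOperator

# Two simple linear operators: a per-sample gain and a noise weighting.
gain = DiagonalOperator(np.array([1.0, 2.0, 3.0]))
weight = DiagonalOperator(np.array([0.5, 0.5, 1.0]))

model = weight * gain      # composition behaves like a matrix product
x = np.ones(3)
y = model(x)               # apply the composed operator to a vector
yt = model.T(y)            # the transpose operator comes for free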

The basic structure of the qubic package [137] is shown in figure A.1. The main
class of the package is the QUBIC acquisition class, called QubicAcquisition. In order
to create an instance of QubicAcquisition, one needs to provide it with instances of the
QubicScene, QubicSampling and QubicInstrument classes, see figure A.1. The QubicScene
class gathers the general information about the observations, such as the sky
pixelization used and whether or not a polarized signal is assumed. The QubicSampling
class keeps the information about the pointing. And the QubicInstrument class contains
the complete information about the instrument: its optical geometry, its noise
characteristics, the parameters of the synthesized-beam model, etc.

Exhaustive information about the installation of the qubic package can be found in
[137]. The documentation of each class and function of the package is easily
accessed via the interactive Python shell IPython [138].
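For instance, using the standard IPython help syntax (any class or function of the package can be inspected in the same way):

from qubic import QubicAcquisition
QubicAcquisition?            # display the class documentation
QubicAcquisition.__init__?   # display the constructor arguments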

Now let us consider an example as a tutorial. Here is a simple code to simulate the
time-ordered data (TOD) and to reconstruct the CMB maps from it:
1 from __future__ import division
2 from qubic import (
3     create_sweeping_pointings, equ2gal, QubicAcquisition, QubicScene,
4     tod2map_all)
5 import healpy as hp
6 import numpy as np
7 from qubic.data import PATH
8 from qubic.io import read_map
9
10 x0 = read_map(PATH + 'syn256_pol.fits')
11
12 # parameters
13 nside = 256
14
15 # get the sampling model
16 np.random.seed(0)
17 sampling = create_sweeping_pointings()
18 scene = QubicScene(nside)
19
20 # get the acquisition model
21 acquisition = QubicAcquisition(150, sampling, scene,
22                                synthbeam_fraction=0.99,
23                                detector_tau=0.01,
24                                detector_nep=1.e-17,
25                                detector_fknee=1.,
26                                detector_fslope=1)
27
28 # simulate the timeline
29 tod, x0_convolved = acquisition.get_observation(
30     x0, convolution=True, noiseless=False)
31
32 # reconstruction
33 map, cov = tod2map_all(acquisition, tod, tol=1e-2)

[Figure A.1: Sketch of the qubic package structure. QubicScene (Healpix map parameters; polarized signal or not), QubicSampling (complete pointing information) and QubicInstrument (each component of the instrument implemented as an operator; bandwidth; primary and secondary beams for each horn; synthesized beam model) are combined into QubicAcquisition, which is used for the simulation of observations and for the map-making.]

The first line simply prevents errors due to the confusion between float and integer
numbers. Then we import the necessary components from the qubic library, as well as
the numpy [139] and healpy [140] packages. Note that the qubic library ships some useful
data, for example a map of one CMB realization drawn from the theoretical spectrum
(in agreement with the Planck results [6]) and with r = 0. Note also that the notion of a CMB
map always implies a 3-component map: I, Q and U. There are two useful functions
for input and output: qubic.io.read_map and qubic.io.write_map. These functions
allow one to easily read (write) 3-component maps from (to) FITS files. Moreover, the files
written in this way take less disk space than standard HEALPix FITS files. The input CMB
sky is read from the file in line 10.

The parameter nside is set to match the nside of the map. It is needed below, in
line 18, when we create the QubicScene object. The random seed is fixed (line 16) in
order to have the same noise realization each time we run the script. The pointing object is
created with the function create_sweeping_pointings. Its default parameters are set to
the best scanning strategy, as discussed in chapter 8. The pointing is usually defined
for one day of observations only. We assume that the noise is uncorrelated between two daily
observations, so there is no point in simulating the time-line for an observational duration
longer than one day.

Then we create the acquisition model. Note that we omitted the step of creating the
instrument model. It is not needed here, since we take all the parameters by default. If
a custom configuration is needed, one can either create the instrument model with the
QubicInstrument class and pass it as an input to the QubicAcquisition initialization,
or change the parameters of the acquisition.instrument object afterwards (a sketch of
the first option is given after the list below). In this example the QubicAcquisition
object is provided with the following parameters:

• 150 – the QUBIC band. Note that it is not a frequency but rather a label; the
allowed values are 150 and 220. This number could be replaced by an instance
of the QubicInstrument class.

• sampling and scene – instances of the QubicSampling and QubicScene classes.

• The noise parameters.
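The custom-instrument route could look like the following sketch; the detector_nep keyword is taken from the example script, and we assume here that QubicInstrument accepts the same detector keywords as QubicAcquisition, which should be checked against its docstring:

# Hypothetical sketch: build a custom instrument, then pass it in place of
# the band number (detector_nep is assumed to be accepted by QubicInstrument
# as it is by QubicAcquisition).
instrument = QubicInstrument(detector_nep=2.e-17)
acquisition = QubicAcquisition(instrument, sampling, scene)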

A useful keyword parameter of QubicAcquisition is effective_duration. With it, one
can specify the observation time in years (a 100% observational efficiency is assumed).
The noise is then scaled down to match the effective noise level for the specified duration
of observations.
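For instance (the two-year value is only an illustration):

# Same acquisition as in the example, with the noise scaled to an
# effective observation time of two years.
acquisition = QubicAcquisition(150, sampling, scene,
                               effective_duration=2.)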

The simulation of the noisy TOD is done in lines 29-30 and the map-making in line 33. As
an output we obtain the reconstructed map map and the coverage map cov. The reconstructed
map should be compared with the convolved input map x0_convolved. Note that, unlike
the healpy convention, in qubic we use the shape (npix, 3) for the maps, where npix is the
number of pixels of the map.
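As an illustration of such a comparison (a sketch only; the coverage threshold of 10% of the maximum is an arbitrary choice):

import numpy as np
import healpy as hp

# Restrict the comparison to the well-covered pixels.
good = cov > 0.1 * cov.max()
residuals = map - x0_convolved
print('rms residuals (I, Q, U):', np.std(residuals[good], axis=0))

# Display, e.g., the Q residuals.
m = residuals[:, 1].copy()
m[~good] = hp.UNSEEN
hp.mollview(m, title='Q residuals')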

Note also that this script is not feasible on a personal computer, because the required
amount of memory and CPU time is huge. Instead one should use a supercomputer: either
CURIE or NERSC. The qubic package has very good documentation for every function
and class and ships several example scripts, so it will not be difficult for any user to
grasp the package.
Bibliography

[1] Edwin Hubble. A relation between distance and radial velocity among extra-
galactic nebulae. Proceedings of the National Academy of Sciences, 15(3):168–173,
1929.

[2] Scott Burles, Kenneth M Nollett, James W Truran, and Michael S Turner. Sharp-
ening the predictions of big-bang nucleosynthesis. Physical Review Letters, 82(21):
4176, 1999.

[3] АГ Дорошкевич and ИД Новиков. Средняя плотность излучения в


Метагалактике и некоторые вопросы релятивистской космологии. ДАН СССР,
154(4):809–811, 1964.

[4] DJ Fixsen, ES Cheng, JM Gales, John C Mather, RA Shafer, and EL Wright. The
cosmic microwave background spectrum from the full cobe firas data set. The
Astrophysical Journal, 473(2):576, 1996.

[5] CB Netterfield, Peter AR Ade, James J Bock, JR Bond, J Borrill, A Boscaleri,


K Coble, CR Contaldi, BP Crill, P De Bernardis, et al. A measurement by
boomerang of multiple peaks in the angular power spectrum of the cosmic mi-
crowave background. The Astrophysical Journal, 571(2):604, 2002.

[6] PAR Ade, N Aghanim, C Armitage-Caplan, M Arnaud, M Ashdown, F Atrio-


Barandela, J Aumont, C Baccigalupi, Anthony J Banday, RB Barreiro, et al.
Planck 2013 results. xvi. cosmological parameters. Astronomy & Astrophysics,
571:A16, 2014.

[7] R Adam, PAR Ade, N Aghanim, Y Akrami, MIR Alves, M Arnaud, F Arroja,
J Aumont, C Baccigalupi, M Ballardini, et al. Planck 2015 results. i. overview of
products and scientific results. arXiv preprint arXiv:1502.01582, 2015.


[8] Clarence Chang. A stage-iv cmb experiment, cmb-s4. 2013.

[9] PAR Ade, RW Aikin, D Barkats, SJ Benton, CA Bischoff, JJ Bock, JA Brevik,


I Buder, E Bullock, CD Dowell, et al. Detection of b-mode polarization at degree
angular scales by bicep2. Physical Review Letters, 112(24):241101, 2014.

[10] R Keisler, S Hoover, N Harrington, JW Henning, PAR Ade, KA Aird, JE Auster-


mann, JA Beall, AN Bender, BA Benson, et al. Measurements of sub-degree
b-mode polarization in the cosmic microwave background from 100 square degrees
of sptpol data. The Astrophysical Journal, 807(2):151, 2015.

[11] R Adam, PAR Ade, N Aghanim, M Arnaud, J Aumont, C Baccigalupi, AJ Ban-


day, RB Barreiro, JG Bartlett, N Bartolo, et al. Planck intermediate results-xxx.
the angular power spectrum of polarized dust emission at intermediate and high
galactic latitudes. Astronomy & Astrophysics, 586:A133, 2016.

[12] Nabila Aghanim, Subhabrata Majumdar, and Joseph Silk. Secondary anisotropies
of the cmb. Reports on Progress in Physics, 71(6):066902, 2008.

[13] Istvan Szapudi, Simon Prunet, Dmitry Pogosyan, Alexander S Szalay, and
J Richard Bond. Fast cosmic microwave background analyses via correlation func-
tions. The Astrophysical Journal Letters, 548(2):L115, 2001.

[14] J Aumont, S Banfi, P Battaglia, ES Battistelli, A Baù, B Bélier, D Bennett,


L Bergé, J Ph Bernard, M Bersanelli, et al. Qubic technological design report.
arXiv preprint arXiv:1609.04372, 2016.

[15] Bartel L van der Waerden and Peter Huber. Science awakening. vol. 2: The birth
of astronomy. Leyden: Noordhoff International Publication, and New York: Oxford
University Press, 1974, 1, 1974.

[16] ТМ Потемкина and ВН Обридко. Астрономия древних обществ. Наука М,


2002.

[17] Борис Михайлович Владимирский and Лев Дмитриевич Кисловский.


Археоастрономия и история культуры. М.: Знание, 1989.

[18] АИ Еремеева and ФА Цицин. История астрономии (основные этапы развития


астрономической картины мира). М.: Изд-во Моск. ун-та, 1989.

[19] Анатолий Михайлович Черепащук. История истории Вселенной. Успехи


физических наук, 183(5):535–556, 2013.

[20] Albert Einstein. Die grundlage der allgemeinen relativitätstheorie. Annalen der
Physik, 354(7):769–822, 1916.

[21] Albert Einstein. Kosmologische und relativitatstheorie. SPA der Wissenschaften,


142, 1917.

[22] Scott Dodelson. Modern cosmology. Academic press, 2003.

[23] Alexander Friedman. Über die krümmung des raumes. Zeitschrift für Physik A
Hadrons and Nuclei, 10(1):377–386, 1922.

[24] Alexander Friedmann. Über die möglichkeit einer welt mit konstanter negativer
krümmung des raumes. Zeitschrift für Physik A Hadrons and Nuclei, 21(1):326–
332, 1924.

[25] Александр Александрович Фридман. Мир как пространство и время. Наука,


1965.

[26] Wendy L Freedman, Barry F Madore, Brad K Gibson, Laura Ferrarese, Daniel D
Kelson, Shoko Sakai, Jeremy R Mould, Robert C Kennicutt Jr, Holland C Ford,
John A Graham, et al. Final results from the hubble space telescope key project to
measure the hubble constant. The Astrophysical Journal, 553(1):47, 2001.

[27] Vesto M Slipher. Nebulae. Proceedings of the American Philosophical Society, 56


(5):403–409, 1917.

[28] VM Slipher. Radial velocity observations of spiral nebulae. The Observatory, 40:
304–306, 1917.

[29] Georges Lemaı̂tre. Un univers homogène de masse constante et de rayon croissant


rendant compte de la vitesse radiale des nébuleuses extra-galactiques. In Annales
de la Société scientifique de Bruxelles, volume 47, pages 49–59, 1927.

[30] Donald H Perkins. Introduction to high energy physics. Cambridge University


Press, 2000.

[31] Andrej Dmitrievich Sakharov. Violation of cp invariance, c asymmetry, and baryon


asymmetry of the universe. JETP lett., 5:24–27, 1967.

[32] J McDonough, VL Highland, WK McFarlane, RD Bolton, MD Cooper, JS Frank,
AL Hallin, P Heusi, CM Hoffman, GE Hogan, et al. New searches for the c-
noninvariant decay π0 → 3γ and the rare decay π0 → 4γ. Physical Review D, 38(7):
2121, 1988.

[33] JR Batley, GE Kalmus, C Lazzeroni, DJ Munday, M Patel, MW Slater, SA Wotton,
R Arcidiacono, G Bocquet, A Ceccucci, et al. Precision measurement of the ratio
br(ks → π+π−e+e−)/br(kl → π+π−π0D). Physics Letters B, 694(4):301–309, 2011.

[34] Bernard Aubert, A Bazan, A Boucham, D Boutigny, I De Bonis, J Favier, J-


M Gaillard, A Jeremie, Y Karyotakis, T Le Flour, et al. The babar detector.
Nuclear Instruments and Methods in Physics Research Section A: Accelerators,
Spectrometers, Detectors and Associated Equipment, 479(1):1–116, 2002.

[35] Lyndon Evans and Philip Bryant. Lhc machine. Journal of Instrumentation, 3
(08):S08001, 2008.

[36] Gerard Hooft. Magnetic monopoles in unified gauge theories. Nuclear Physics: B,
79(2):276–284, 1974.

[37] ДС Горбунов and ВА Рубаков. Введение в теорию ранней Вселенной:


Космологические возмущения. Инфляционная теория, 2010.

[38] Viatcheslav F Mukhanov, Hume A Feldman, and Robert Hans Brandenberger.


Theory of cosmological perturbations. Physics Reports, 215(5-6):203–333, 1992.

[39] Shaun Hotchkiss, Anupam Mazumdar, and Seshadri Nadathur. Observable grav-
itational waves from inflation with small field excursions. Journal of Cosmology
and Astroparticle Physics, 2012(02):008, 2012.

[40] Alan H Guth. Inflationary universe: A possible solution to the horizon and flatness
problems. Physical Review D, 23(2):347, 1981.

[41] Andrew R Liddle and David H Lyth. Cobe, gravitational waves, inflation and
extended inflation. Physics Letters B, 291(4):391–398, 1992.

[42] Александр Дмитриевич Долгов. Космология ранней вселенной. Рипол


Классик, 1988.

[43] Chung-Pei Ma and Edmund Bertschinger. Cosmological perturbation theory in the


synchronous and conformal newtonian gauges. arXiv preprint astro-ph/9506072,
1995.

[44] Arno A Penzias. The origin of the elements. Reviews of Modern Physics, 51(3):
425, 1979.

[45] Fritz Zwicky. Die rotverschiebung von extragalaktischen nebeln. Helvetica Physica
Acta, 6:110–127, 1933.

[46] Daniel J Eisenstein. Dark energy and cosmic sound. New Astronomy Reviews, 49
(7):360–365, 2005.

[47] Marek Kowalski, David Rubin, Greg Aldering, RJ Agostinho, A Amadon, R Aman-
ullah, C Balland, K Barbary, G Blanc, PJ Challis, et al. Improved cosmological
constraints from new, old, and combined supernova data sets. The Astrophysical
Journal, 686(2):749, 2008.

[48] Blake D Sherwin, Joanna Dunkley, Sudeep Das, John W Appel, J Richard Bond,
C Sofia Carvalho, Mark J Devlin, Rolando Dünner, Thomas Essinger-Hileman,
Joseph W Fowler, et al. Evidence for dark energy from the cosmic microwave
background alone using the atacama cosmology telescope lensing measurements.
Physical Review Letters, 107(2):021302, 2011.

[49] Georges Aad, T Abajyan, B Abbott, J Abdallah, S Abdel Khalek, AA Abdelalim,


O Abdinov, R Aben, B Abi, M Abolins, et al. Observation of a new particle in
the search for the standard model higgs boson with the atlas detector at the lhc.
Physics Letters B, 716(1):1–29, 2012.

[50] Fedor Bezrukov and Mikhail Shaposhnikov. The standard model higgs boson as
the inflaton. Physics Letters B, 659(3):703–706, 2008.

[51] BP Abbott, Richard Abbott, TD Abbott, MR Abernathy, Fausto Acernese,


Kendall Ackley, Carl Adams, Thomas Adams, Paolo Addesso, RX Adhikari, et al.
Observation of gravitational waves from a binary black hole merger. Physical review
letters, 116(6):061102, 2016.

[52] Simeon Bird, Ilias Cholis, Julian B Muñoz, Yacine Ali-Haı̈moud, Marc
Kamionkowski, Ely D Kovetz, Alvise Raccanelli, and Adam G Riess. Did ligo
detect dark matter? arXiv preprint arXiv:1603.00464, 2016.

[53] EF Keane, S Johnston, S Bhandari, E Barr, NDR Bhat, M Burgay, M Caleb,


C Flynn, A Jameson, M Kramer, et al. The host galaxy of a fast radio burst.
Nature, 530(7591):453–456, 2016.

[54] Stephan Schlamminger, K-Y Choi, TA Wagner, JH Gundlach, and EG Adelberger.


Test of the equivalence principle using a rotating torsion balance. Physical Review
Letters, 100(4):041101, 2008.

[55] Walter S Adams. Some results with the coudé spectrograph of the mount wilson
observatory. The Astrophysical Journal, 93:11, 1941.

[56] Andrew McKellar. Molecular lines from the lowest states of diatomic molecules
composed of atoms probably present in interstellar space. Publications of the Do-
minion Astrophysical Observatory Victoria, 7:251, 1941.

[57] ИД Новиков and ЯБ Зельдович. Строение и эволюция Вселенной, 1975.



[58] ТА Шмаонов. Методика абсолютных измерений эффективной температуры


радиоизлучения с низкой эквивалентной температурой. Приборы и техника
эксперимента, 1:83, 1957.

[59] Игорь Дмитриевич Новиков. Чёрные дыры и Вселенная. М.: Молодая гвардия,
1985.

[60] Arno A Penzias and Robert Woodrow Wilson. A measurement of excess antenna
temperature at 4080 mc/s. The Astrophysical Journal, 142:419–421, 1965.

[61] Phillip James Edwin Peebles. Principles of physical cosmology. Princeton Univer-
sity Press, 1993.

[62] Robert H Dicke, P James E Peebles, Peter G Roll, and David T Wilkinson. Cosmic
black-body radiation. The Astrophysical Journal, 142:414–419, 1965.

[63] John C Mather, ES Cheng, DA Cottingham, RE Eplee Jr, DJ Fixsen, T Hewagama,


RB Isaacman, KA Jensen, SS Meyer, PD Noerdlinger, et al. Measurement of
the cosmic microwave background spectrum by the cobe firas instrument. The
Astrophysical Journal, 420:439–444, 1994.

[64] NW Boggess, JC Mather, R Weiss, CL Bennett, ES Cheng, E Dwek, S Gulkis,


MG Hauser, MA Janssen, T Kelsall, et al. The cobe mission-its design and perfor-
mance two years after launch. The Astrophysical Journal, 397:420–429, 1992.

[65] Д. И. Нагирнер. Реликтовый фон и его искажения. CПбГУ, 2002.

[66] George F Smoot, Mark V Gorenstein, and Richard A Muller. Detection of


anisotropy in the cosmic blackbody radiation. Physical Review Letters, 39(14):
898, 1977.

[67] Michael J Way, Jeffrey D Scargle, Kamal M Ali, and Ashok N Srivastava. Advances
in machine learning and data mining for astronomy. CRC Press, 2012.

[68] Wayne Hu and Naoshi Sugiyama. Toward understanding cmb anisotropies and
their implications. Physical Review D, 51(6):2599, 1995.

[69] Antony Lewis, Anthony Challinor, and Anthony Lasenby. Efficient computation
of cosmic microwave background anisotropies in closed friedmann-robertson-walker
models. The Astrophysical Journal, 538(2):473, 2000.

[70] Wayne Hu and Scott Dodelson. Cosmic microwave background anisotropies. arXiv
preprint astro-ph/0110414, 2001.

[71] PAR Ade, N Aghanim, C Armitage-Caplan, M Arnaud, M Ashdown, F Atrio-


Barandela, J Aumont, C Baccigalupi, Anthony J Banday, RB Barreiro, et al.
Planck 2013 results. xxiv. constraints on primordial non-gaussianity. Astronomy &
Astrophysics, 571:A24, 2014.

[72] PAR Ade, N Aghanim, C Armitage-Caplan, M Arnaud, M Ashdown, F Atrio-


Barandela, J Aumont, C Baccigalupi, AJ Banday, RB Barreiro, et al. Planck 2013
results. xxiii. isotropy and statistics of the cmb. Astronomy & Astrophysics, 571:
A23, 2014.

[73] IA Strukov, AA Brukhanov, DP Skulachev, and MV Sazhin. Anisotropy of the


microwave background radiation. Soviet Astronomy Letters, 18:153, 1992.

[74] CL Bennett, A Banday, KM Gorski, G Hinshaw, P Jackson, P Keegstra, A Kogut,


George F Smoot, DT Wilkinson, and EL Wright. 4-year cobe dmr cosmic microwave
background observations: maps and basic results. arXiv preprint astro-ph/9601067,
1996.

[75] Eiichiro Komatsu, KM Smith, J Dunkley, CL Bennett, B Gold, G Hinshaw,
N Jarosik, D Larson, MR Nolta, L Page, et al. Seven-year wilkinson microwave
anisotropy probe (wmap) observations: Cosmological interpretation. The Astrophysical
Journal Supplement Series, 192(2):18, 2011.

[76] JW Fowler, Viviana Acquaviva, Peter AR Ade, P Aguirre, M Amiri, JW Appel,


LF Barrientos, ES Battistelli, JR Bond, B Brown, et al. The atacama cosmology
telescope: a measurement of the 600 < ℓ < 8000 cosmic microwave background
power spectrum at 148 ghz. The Astrophysical Journal, 722(2):1148, 2010.

[77] John Ruhl, Peter AR Ade, John E Carlstrom, Hsiao-Mei Cho, Thomas Crawford,
Matt Dobbs, Chris H Greer, William L Holzapfel, Trevor M Lanting, Adrian T
Lee, et al. The south pole telescope. In SPIE Astronomical Telescopes+ Instru-
mentation, pages 11–29. International Society for Optics and Photonics, 2004.

[78] Kevork N Abazajian, K Arnold, J Austermann, BA Benson, C Bischoff, J Bock,


JR Bond, J Borrill, E Calabrese, JE Carlstrom, et al. Neutrino physics from the
cosmic microwave background and large scale structure. Astroparticle Physics, 63:
66–80, 2015.

[79] Jean Kaplan, Jacques Delabrouille, Pablo Fosalba, and Cyrille Rosset. Cmb polar-
ization as complementary information to anisotropies. Comptes Rendus Physique,
4(8):917–924, 2003.

[80] Frank S Crawford. Waves, berkeley physics course. 1968.

[81] JN Goldberg, AJ Macfarlane, Ezra T Newman, F Rohrlich, and ECG Sudarshan.


Spin-s spherical harmonics and ð. Journal of Mathematical Physics, 8(11):2155–
2161, 1967.

[82] Matias Zaldarriaga. Nature of the e- b decomposition of cmb polarization. Physical


Review D, 64(10):103001, 2001.

[83] Juan R Pardo, José Cernicharo, and Eugene Serabyn. Atmospheric transmission at
microwaves (atm): an improved model for millimeter/submillimeter applications.
IEEE Transactions on Antennas and Propagation, 49(12):1683–1694, 2001.

[84] RA Sunyaev and Ia B Zeldovich. Microwave background radiation as a probe of the


contemporary structure and history of the universe. Annual review of astronomy
and astrophysics, 18:537–560, 1980.

[85] JP Ostriker and El T Vishniac. Generation of microwave background fluctuations


from nonlinear perturbations at the era of galaxy formation. The Astrophysical
Journal, 306:L51–L54, 1986.

[86] PAR Ade, Y Akiba, AE Anthony, K Arnold, M Atlas, D Barron, D Boettger,


J Borrill, S Chapman, Y Chinone, et al. A measurement of the cosmic microwave
background b-mode polarization power spectrum at sub-degree scales with polar-
bear. The Astrophysical Journal, 794(2):171, 2014.

[87] H Cynthia Chiang, Peter AR Ade, Denis Barkats, John O Battle, Evan M Bierman,
JJ Bock, C Darren Dowell, Lionel Duband, Eric F Hivon, William L Holzapfel,
et al. Measurement of cosmic microwave background polarization power spectra
from two years of bicep data. The Astrophysical Journal, 711(2):1123, 2010.

[88] M Calvo, A Benoit, A Catalano, J Goupy, A Monfardini, N Ponthieu, E Barria,


G Bres, M Grollier, G Garde, et al. The nika2 instrument, a dual-band kilopixel
kid array for millimetric astronomy. Journal of Low Temperature Physics, pages
1–8, 2016.

[89] Paul F Scott, Pedro Carreira, Kieran Cleary, Rod D Davies, Richard J Davis, Clive
Dickinson, Keith Grainge, Carlos M Gutierrez, Michael P Hobson, Michael E Jones,
et al. First results from the very small array—iii. the cosmic microwave background
power spectrum. Monthly Notices of the Royal Astronomical Society, 341(4):1076–
1083, 2003.

[90] ACS Readhead, ST Myers, TJ Pearson, JL Sievers, BS Mason, CR Contaldi,


JR Bond, R Bustos, P Altamirano, C Achermann, et al. Polarization observa-
tions with the cosmic background imager. Science, 306(5697):836–844, 2004.

[91] John Michael Kovac, EM Leitch, C Pryke, JE Carlstrom, NW Halverson, and


WL Holzapfel. Detection of polarization in the cosmic microwave background
using dasi. Nature, 420(6917):772–787, 2002.

[92] M-A Bigot-Sazy, R Charlassier, J-Ch Hamilton, J Kaplan, and G Zahariade. Self-
calibration: an efficient method to control systematic effects in bolometric inter-
ferometry. Astronomy & Astrophysics, 550:A59, 2013.

[93] TJ Pearson and ACS Readhead. Image formation by self-calibration in radio as-
tronomy. Annual review of astronomy and astrophysics, 22:97–130, 1984.

[94] Peter John Bell Clarricoats and A David Olver. Corrugated horns for microwave
antennas. Number 18. Iet, 1984.

[95] Eric Hivon. Pixel window functions on the nasa healpix web-site. http://healpix.jpl.nasa.gov/html/intronode14.htm, 2010.

[96] Jean-Christophe Hamilton. Cmb map-making and power spectrum estimation.


Comptes Rendus Physique, 4(8):871–879, 2003.

[97] Kaare Brandt Petersen, Michael Syskind Pedersen, et al. The matrix cookbook.
Technical University of Denmark, 7:15, 2008.

[98] E Battistelli, A Baú, D Bennett, L Bergé, J-Ph Bernard, P De Bernardis, G Bor-


dier, A Bounab, É Bréelle, EF Bunn, et al. Qubic: The qu bolometric interferom-
eter for cosmology. Astroparticle Physics, 34(9):705–716, 2011.

[99] Jonathan Richard Shewchuk. An introduction to the conjugate gradient method


without the agonizing pain, 1994.

[100] Nersc web-site. http://www.nersc.gov.

[101] Curie manual. http://www.vi-hps.org/upload/material/tw09/vi-hps-tw09-Curie_Info.pdf.

[102] R Adam, PAR Ade, N Aghanim, M Arnaud, M Ashdown, J Aumont, C Bac-


cigalupi, AJ Banday, RB Barreiro, N Bartolo, et al. Planck 2015 results. viii.
high frequency instrument data processing: Calibration and maps. arXiv preprint
arXiv:1502.01587, 2015.

[103] Federico Incardona. Impact of instrumental effects on bolometric interferometry data analysis of the cmb polarization with qubic. Master’s thesis, 2016.

[104] Henk A Van der Vorst. Iterative Krylov methods for large linear systems, volume 13.
Cambridge University Press, 2003.

[105] R Charlassier, Emory F Bunn, J-Ch Hamilton, J Kaplan, and S Malu. Bandwidth
in bolometric interferometry. Astronomy & Astrophysics, 514:A37, 2010.

[106] J Delabrouille and J-F Cardoso. Diffuse source separation in cmb observations. In
Data Analysis in Cosmology, pages 159–205. Springer, 2008.

[107] Joseph Silk and Michael L Wilson. Residual fluctuations in the matter and radia-
tion distribution after the decoupling epoch. Physica Scripta, 21(5):708, 1980.

[108] Martin White and Mark Srednicki. Window functions for cmb experiments. arXiv
preprint astro-ph/9402037, 1994.

[109] Eric Hivon, Krzysztof M Górski, C Barth Netterfield, Brendan P Crill, Simon
Prunet, and Frode Hansen. Master of the cosmic microwave background anisotropy
power spectrum: a fast method for statistical analysis of large and complex cosmic
microwave background data sets. The Astrophysical Journal, 567(1):2, 2002.

[110] J Grain, M Tristram, and Radek Stompor. Polarized cmb power spectrum estima-
tion using the pure pseudo-cross-spectrum approach. Physical Review D, 79(12):
123515, 2009.

[111] M Tristram, JF Macias-Perez, C Renault, and D Santos. Xspect, estimation of the


angular power spectrum by computing cross-power spectra with analytical error
bars. Monthly Notices of the Royal Astronomical Society, 358(3):833–842, 2005.

[112] JR Bond, Andrew H Jaffe, and L Knox. Radical compression of cosmic microwave
background data. The Astrophysical Journal, 533(1):19, 2000.

[113] PAR Ade, Z Ahmed, RW Aikin, KD Alexander, D Barkats, SJ Benton, CA Bischoff,


JJ Bock, JA Brevik, I Buder, et al. Bicep2/keck array v: Measurements of b-
mode polarization at degree angular scales and 150 ghz by the keck array. The
Astrophysical Journal, 811(2):126, 2015.

[114] JR Bond, Andrew H Jaffe, and L Knox. Estimating the power spectrum of the
cosmic microwave background. Physical Review D, 57(4):2117, 1998.

[115] Lloyd Knox. Cosmic microwave background anisotropy observing strategy assess-
ment. The Astrophysical Journal, 480(1):72, 1997.

[116] Zigmund D Kermish, Peter Ade, Aubra Anthony, Kam Arnold, Darcy Barron,
David Boettger, Julian Borrill, Scott Chapman, Yuji Chinone, Matt A Dobbs, et al.
The polarbear experiment. In SPIE Astronomical Telescopes+ Instrumentation,
pages 84521C–84521C. International Society for Optics and Photonics, 2012.

[117] Peter AR Ade, RW Aikin, M Amiri, D Barkats, SJ Benton, CA Bischoff, JJ Bock,


JA Brevik, I Buder, E Bullock, et al. Bicep2. ii. experiment and three-year data
set. The Astrophysical Journal, 792(1):62, 2014.

[118] J Errard, PAR Ade, Y Akiba, K Arnold, M Atlas, C Baccigalupi, D Barron,


D Boettger, J Borrill, S Chapman, et al. Modeling atmospheric emission for cmb
ground-based observations. The Astrophysical Journal, 809(1):63, 2015.

[119] R Charlassier, J-Ch Hamilton, É Bréelle, A Ghribi, Y Giraud-Héraud, J Kaplan,


M Piat, and D Prêle. An efficient phase-shifting scheme for bolometric additive
interferometry. Astronomy & Astrophysics, 497(3):963–971, 2009.

[120] Marian Douspis. Cosmological parameter estimation: methods. Comptes Rendus


Physique, 4(8):881–890, 2003.

[121] Bharat Ratra, Radoslaw Stompor, Ken Ganga, Graça Rocha, Naoshi Sugiyama,
and Krzysztof M Górski. Cosmic microwave background anisotropy constraints on
open and flat-λ cold dark matter cosmogonies from ucsb south pole, argo, max,
white dish, and suzie data. The Astrophysical Journal, 517(2):549, 1999.

[122] Roberto Trotta. Bayes in the sky: Bayesian inference and model selection in
cosmology. Contemporary Physics, 49(2):71–104, 2008.

[123] Dani Gamerman and Hedibert F Lopes. Markov chain Monte Carlo: stochastic
simulation for Bayesian inference. CRC Press, 2006.

[124] Antony Lewis and Sarah Bridle. Cosmological parameters from cmb and other
data: A monte carlo approach. Physical Review D, 66(10):103511, 2002.

[125] KS Karkare, PAR Ade, Z Ahmed, RW Aikin, KD Alexander, M Amiri, D Barkats,


SJ Benton, CA Bischoff, JJ Bock, et al. Keck array and bicep3: Spectral charac-
terization of 5000+ detectors. In SPIE Astronomical Telescopes+ Instrumentation,
pages 91533B–91533B. International Society for Optics and Photonics, 2014.

[126] Thomas Essinger-Hileman, Aamir Ali, Mandana Amiri, John W Appel, Derek
Araujo, Charles L Bennett, Fletcher Boone, Manwei Chan, Hsiao-Mei Cho,
David T Chuss, et al. Class: the cosmology large angular scale surveyor. In
SPIE Astronomical Telescopes+ Instrumentation, pages 91531I–91531I. Interna-
tional Society for Optics and Photonics, 2014.

[127] South pole telescope, 3rd generation. http://moriond.in2p3.fr/J16/transparencies/3_tuesday/1_morning/1_bender.pdf.

[128] Advanced act. http://moriond.in2p3.fr/J16/transparencies/3_tuesday/1_morning/5_niemack.pdf.

[129] K Arnold, N Stebor, PAR Ade, Y Akiba, AE Anthony, M Atlas, D Barron, A Ben-
der, D Boettger, J Borrill, et al. The simons array: expanding polarbear to three
multi-chroic telescopes. In SPIE Astronomical Telescopes+ Instrumentation, pages
91531F–91531F. International Society for Optics and Photonics, 2014.

[130] S Aiola, G Amico, P Battaglia, E Battistelli, A Baù, P De Bernardis, M Bersanelli,


A Boscaleri, F Cavaliere, A Coppolecchia, et al. The large-scale polarization ex-
plorer (lspe). In SPIE Astronomical Telescopes+ Instrumentation, pages 84467A–
84467A. International Society for Optics and Photonics, 2012.

[131] Britt Reichborn-Kjennerud, Asad M Aboobaker, Peter Ade, François Aubin, Carlo
Baccigalupi, Chaoyun Bao, Julian Borrill, Christopher Cantalupo, Daniel Chap-
man, Joy Didier, et al. Ebex: a balloon-borne cmb polarization experiment. In
SPIE Astronomical Telescopes+ Instrumentation, pages 77411C–77411C. Interna-
tional Society for Optics and Photonics, 2010.

[132] BP Crill, Peter AR Ade, Elia Stefano Battistelli, S Benton, R Bihary, JJ Bock,
JR Bond, J Brevik, S Bryan, CR Contaldi, et al. Spider: a balloon-borne large-
scale cmb polarimeter. In SPIE Astronomical Telescopes+ Instrumentation, pages
70102P–70102P. International Society for Optics and Photonics, 2008.

[133] Justin Lazear, Peter AR Ade, Dominic Benford, Charles L Bennett, David T Chuss,
Jessie L Dotson, Joseph R Eimer, Dale J Fixsen, Mark Halpern, Gene Hilton, et al.
The primordial inflation polarization explorer (piper). In SPIE Astronomical Tele-
scopes+ Instrumentation, pages 91531L–91531L. International Society for Optics
and Photonics, 2014.

[134] Pierre Chanial et al. Qubic map-making, in preparation.

[135] Pierre Chanial. Operators and solvers for high-performance computing. http://pchanial.github.io/pyoperators/.

[136] Pierre Chanial. Pysimulators: tools to build an instrument model. https://pypi.python.org/pypi/pysimulators.

[137] Qubic wiki web page. http://www.apc.univ-paris-diderot.fr/qubic/pmwiki.php/SimulationsWorkingGroup/PythonSoftwareForQUBIC.

[138] ipython web page. https://ipython.org.

[139] Numpy package web page. http://www.numpy.org/.

[140] Healpy package web page. https://healpy.readthedocs.io/en/latest/.
