
Data quality up to the third observing run of advanced LIGO: Gravity Spy glitch classifications


Published 20 February 2023 © 2023 The Author(s). Published by IOP Publishing Ltd
Citation: J Glanzer et al 2023 Class. Quantum Grav. 40 065004 DOI 10.1088/1361-6382/acb633


Abstract

Understanding the noise in gravitational-wave detectors is central to detecting and interpreting gravitational-wave signals. Glitches are transient, non-Gaussian noise features that can have a range of environmental and instrumental origins. The Gravity Spy project uses a machine-learning algorithm to classify glitches based upon their time–frequency morphology. The resulting set of classified glitches can be used as input to detector-characterisation investigations of how to mitigate glitches, or data-analysis studies of how to ameliorate the impact of glitches. Here we present the results of the Gravity Spy analysis of data up to the end of the third observing run of advanced laser interferometric gravitational-wave observatory (LIGO). We classify 233981 glitches from LIGO Hanford and 379805 glitches from LIGO Livingston into morphological classes. We find that the distribution of glitches differs between the two LIGO sites. This highlights the potential need for studies of data quality to be individually tailored to each gravitational-wave observatory.


1. Introduction

Gravitational-wave astronomy provides unique information about our Universe. To date, the advanced laser interferometric gravitational-wave observatory (LIGO) [1] and Advanced Virgo [2] detectors have observed signals from coalescing binaries of neutron stars and black holes [3–7], with the rate of discovery increasing dramatically as the sensitivity of the detector network improves. Analysis by the LIGO Scientific, Virgo and KAGRA (LVK) Collaboration identified 3 candidates with a probability of astrophysical origin greater than $50\%$ in the first observing run (O1) of the advanced-detector network [8], 8 in the second observing run (O2) [4], and 79 in the third observing run (O3) [6, 7]. Such observations require measurements equivalent to fractional changes in distance of ${\lesssim}10^{-21}$ [9], and hence the detector must be carefully isolated from instrumental and environmental sources of noise. However, noise cannot be fully eliminated, and to identify and analyse gravitational-wave signals it is necessary to understand the properties of noise in the gravitational-wave detectors [10].

Transient, non-Gaussian bursts of noise (typically less than a few seconds in duration) in the gravitational-wave data stream are known as glitches. Glitches are particularly detrimental to the identification and analysis of gravitational-wave signals [10–16]. There are many different glitch types, some with known environmental or instrumental origins, and others with uncertain or unknown sources [17–21]. Identifying the causes of glitches is key to improving gravitational-wave data quality.

A wide range of tools are used to monitor data quality and characterise the behaviour of the detectors [20–27]. In recent years, machine-learning methods have been developed for a range of analyses connected to various aspects of detector characterisation [e.g. 28–38]. The Gravity Spy project [39–42] aims to classify glitches by combining human and machine-learning classification schemes: volunteers on the Zooniverse citizen-science platform (as well as LVK detector-characterisation experts) inspect and classify individual glitches, which can then be used as input to a machine-learning algorithm that can classify large sets of data 14. Since its launch in October 2016, the Gravity Spy project has analysed almost two million individual glitches and has accumulated over 5.7 million classifications by more than 27 000 registered Zooniverse users 15. Results of machine-learning and volunteer classifications have been made available both internally within the LVK, and to the wider public [44–47].

Compiling a catalogue of classified glitches is useful for both identifying the physical causes of glitches (such that commissioning work could be done to remove them), and evaluating the impact of glitches on data analysis (creating new analyses to mitigate their effect where necessary). For example, Gravity Spy classifications have been used for: selecting example glitches to evaluate their impact on data analysis [48–51]; studying glitch morphology [52–55]; cross-referencing glitches with environmental-noise or auxiliary-channel measurements [20, 56–58]; and as a component of training for gravitational-wave detection algorithms [59–65] or glitch-classification algorithms [32, 66–69]. Additionally, identification of new classes can indicate new sources of noise and suggest areas for further commissioning [42].

In this paper we describe the glitch classifications from Gravity Spy's machine-learning analysis of data from the first three observing runs of Advanced LIGO; this analysis uses the Gravity Spy convolutional neural network (CNN) models previously developed for O1–O2 [39, 40] and O3 [42]. In section 2 we describe the gravitational-wave strain data, the machine-learning algorithm and the glitch classes; further details of the different classes used for analysis of each observing run are given in the appendix. In section 3 we illustrate how results of classifications from across the observing runs can be used for detector characterisation, summarising the rates of different glitches, and highlighting results from times near potential gravitational-wave candidates; we also give an overview of the data release. In section 4 we review the implications of our results, before summarising in section 5. The data release is available from Zenodo [46], and the volunteer classifications [47] will be discussed in a companion paper.

2. Methods

2.1. Detector data & detector characterisation

The two LIGO detectors in the USA (Hanford and Livingston) [1], the Virgo detector in Italy [2] and the KAGRA detector in Japan [70], are highly sensitive instruments designed and operated for the direct detection of gravitational waves. The primary data output of these observatories is the strain measured by the interferometers [71], which will contain gravitational-wave signals as well as various sources of noise; however, there are additionally many auxiliary channels of data that record the internal state of the detectors and monitor their environments [17, 72, 73]. Since the beginning of O1 in September 2015, three observing runs have been completed [74]. These are preceded and interleaved with engineering runs that are used to test the performance of the detectors, and potentially diagnose data-quality issues. Each successive observing run is characterised by detector improvements that lead to higher sensitivity [75–78] and, consequently, more detections [7], as well as revealing new sources of noise.

The data quality of these ground-based gravitational-wave detectors is impacted by multiple sources of noise. Broadly, noise in the detectors consists of stationary Gaussian noise sources (which include quantum noise, seismic noise and thermal noise), and non-Gaussian noise sources [10, 72, 76, 78]. Non-Gaussian noise includes long-lived spectral lines [79] and shorter-duration transient glitches [20, 22, 23]. Monitoring the status of data quality, identification and mitigation of transient noise are some of the tasks referred to as detector characterisation [17, 21]. Understanding and improving data quality is central to extracting astrophysical information from detector data.

Potential glitches (as well as gravitational-wave signals) are identified by searching for excess power in the data stream. All the noise transients analyzed in this paper were detected by the Omicron algorithm [26, 27] analysing the gravitational-wave strain channel (and not using auxiliary channels). Omicron identifies potential noise transients by triggering on excess power in the data stream. The Omicron algorithm annotates each identified transient with characteristics such as event time, peak frequency, central frequency and signal-to-noise ratio (SNR). The glitch morphology of the trigger can be visualized in a time–frequency spectrogram commonly known as an Omega scan [25, 80]. These Omega scans are used frequently in data-quality studies to establish potential noise correlations between different parts of the detector [81], and the time–frequency morphology can be used to categorise glitches [20, 40]. The morphology may contain clues to the cause of the glitch [21], e.g. arches are characteristic of light scattering, with the frequency encoding information about the relative motion of the scattering source, and multiple stacked arches suggesting repeated reflections of stray light from the scattering source [56, 82, 83]. Example Omega scans for common glitch classes are shown in figure 1. These time–frequency spectrograms are used as the input to Gravity Spy.
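
As an illustration of this input, the sketch below produces an Omega-scan-like Q-transform spectrogram from public GWOSC strain data using the gwpy package. It is a minimal example rather than the Gravity Spy production pipeline: the GPS time is the approximate time of GW200129_065458 (listed in table 3), and the plotting settings are illustrative assumptions rather than the exact Omega-scan configuration.

    # Minimal sketch: an Omega-scan-like Q-transform spectrogram of public
    # GWOSC strain data with gwpy. The GPS time is the approximate time of
    # GW200129_065458 (see table 3); the exact Gravity Spy Omega-scan
    # settings are not reproduced here.
    from gwpy.timeseries import TimeSeries

    gps = 1264316116   # approximate GPS time of GW200129_065458
    strain = TimeSeries.fetch_open_data("L1", gps - 16, gps + 16)

    # Q-transform a 4 s segment centred on the time of interest
    qscan = strain.q_transform(outseg=(gps - 2, gps + 2))

    plot = qscan.plot(figsize=(8, 4))
    ax = plot.gca()
    ax.set_yscale("log")   # logarithmic frequency axis, as in figure 1
    ax.set_ylabel("Frequency [Hz]")
    ax.colorbar(label="Normalised energy")
    plot.savefig("example_omega_scan.png")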

Figure 1.

Figure 1. Example time–frequency spectrograms [80] for a selection of LIGO glitch classes. The glitch classes here are relatively common and illustrate the range of morphologies different glitch classes can have. The spectrograms in each row are shown with a different time duration. Top left: Tomte is a short-duration glitch with a characteristic triangular morphology. Top right: Blip is another short-duration glitch, but covers a broader frequency range than Tomte and has a tear-drop morphology. Middle left: Whistles have a characteristic V, U or W shape sweeping through higher frequencies (${\gtrsim}128\,\mathrm{Hz}$). Middle right: Fast Scattering (also known as Crown) appears as one or more arches, each ${\sim}0.2$$0.3\,\mathrm{s}$ in duration. Bottom left: Scattered Light (also known as Slow Scattering) appears as longer-duration (${\sim}2.0$$2.5\,\mathrm{s}$) arches, with multiple arches often being stacked on top of each other. Bottom right: Extremely Loud are high-SNR triggers that saturate the spectrogram. Exemplar spectrograms for each Gravity Spy class are given in figure A1.


2.2. Machine-learning algorithm & glitch classes

Gravity Spy contributes to detector characterisation by classifying glitches. The morphological classes used in Gravity Spy for LIGO data are detailed in the appendix. Classifications are made based upon time–frequency spectrograms, using two complementary approaches: visual inspection by Zooniverse volunteers, and automated analysis by a machine-learning algorithm [39, 41, 42]. Both approaches use the same input: Omega scans of four different temporal resolutions ($0.5\,\mathrm{s}$, $1\,\mathrm{s}$, $2\,\mathrm{s}$ and $4\,\mathrm{s}$ in duration, centred on the time of the transient). Here we concentrate on the machine-learning classification as opposed to volunteer classification.

Gravity Spy uses a CNN, a deep-learning algorithm used primarily for image classification, to analyse the Omega scans. For every image input to the CNN, the probability (or confidence) p of belonging to each class is calculated, and the glitch is assigned to the class with the highest associated confidence [39]. CNN architectures include an input layer, an output layer, and various hidden layers in between that transform the data and extract useful features. The CNN used by Gravity Spy [84] has four convolutional layers to extract features, each followed by a max-pooling and a rectified linear unit activation layer, and then a final fully connected layer and a softmax layer. The outputs of the final softmax layer are the confidence scores for each of the classes. Confidence scores for each trigger, indicating the probability that it is associated with various morphological classes, are provided in the data release. The accuracy of the classification is tested during training of the CNN [39, 42, 84].
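
As a concrete illustration of this structure, the sketch below builds a network with the same sequence of layers in Keras. The filter counts, kernel sizes, input image shape and training settings are illustrative assumptions only, not the published Gravity Spy hyperparameters [84].

    # Sketch of a CNN with the layer structure described above: four
    # convolutional layers, each followed by a ReLU activation and
    # max-pooling, then a final fully connected layer and a softmax over
    # the glitch classes. Filter counts, kernel sizes and the input shape
    # are illustrative assumptions, not the published hyperparameters.
    import numpy as np
    from tensorflow.keras import layers, models

    N_CLASSES = 22   # 22 classes for the O1-O2 model; 23 for the O3 model

    model = models.Sequential()
    model.add(layers.Input(shape=(140, 170, 1)))
    for n_filters in (16, 32, 64, 128):
        model.add(layers.Conv2D(n_filters, (5, 5), padding="same"))
        model.add(layers.ReLU())
        model.add(layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(N_CLASSES))   # final fully connected layer
    model.add(layers.Softmax())          # one confidence score per class

    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Each spectrogram is assigned to the class with the highest confidence
    scores = model.predict(np.zeros((1, 140, 170, 1)))   # dummy input
    predicted_class = scores.argmax(axis=1)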

2.3. The training sets

The original LIGO data set used to train the Gravity Spy CNN was created by detector-characterisation experts and Gravity Spy volunteers. It initially contained 7718 glitch samples from 20 classes prevalent in the detector during O1 and the preceding engineering runs [39]. These classes included No Glitch, for when no significant excess power is visible in the Gravity Spy spectrograms, and None of the Above, which was intended to catch glitches that did not fit into the other classes. The training set was refined and updated to include the 1080 Lines and 1400 Ripples classes, which were identified by volunteers [40]. This gave a training set that included 7932 glitch samples from 22 classes [45]. The resulting training accuracy was $98.2\%$ [40]. This CNN model has been used to classify data from O1 and O2.

During O3, the presence of two new prevalent glitch morphologies motivated the addition of the Fast Scattering (also known as Crown) and Blip Low Frequency (also known as Low-frequency Blip) classes to the machine-learning model; in addition, the None of the Above class was removed for the final analysis, as it was decided that it was more effective for the CNN to label such triggers with low confidence than to try to construct a class of many morphologically diverse glitches [42] 16. After adding the new classes, and more examples from existing classes, the current training data set contains 9631 glitch samples distributed over 23 classes; of these, 8427 were used for training and 1203 for validation. The resulting training and validation accuracies were $99.9\%$ and $98.8\%$, respectively [42]. This CNN model has been used to classify data from O3.

The performance of the CNN model depends upon the quantity and quality of examples from each glitch class in the training set. Augmenting the training set with additional glitches classified by volunteers [47] is expected to improve the results of future CNN models.

3. Results

The Gravity Spy glitch classifications can be used as inputs for a range of analyses, and here we illustrate their use as the base for detector-characterisation studies concentrating on O3. In section 3.1 we show how the distribution of glitches may be studied, and in section 3.2 we illustrate how data quality at specific times may be studied using the example of times around gravitational-wave candidates. For use in further studies, the release of the Gravity Spy machine-learning classification data set is described in section 3.3.

3.1. Glitch classifications

For data from both LIGO detectors, we find that there are certain glitch classes that are more common than others. For example, table 1 provides numbers of glitches sorted into the various classes from O3 data. In addition to the number of glitches in each class with an SNR ${\gt}7.5$, we also show those classified with a confidence ${\gt}90\%$ and ${\gt}95\%$. Using a higher confidence level gives a higher purity, but a smaller sample. Figure 2 shows the cumulative distribution of classifications as a function of confidence; this gives an indication of how the numbers change with different confidence thresholds. We mainly use a fiducial $90\%$ confidence threshold for our quoted results.
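
A distribution like that in figure 2 can be computed directly from the data-release files described in section 3.3. The sketch below assumes column names ml_label and ml_confidence, a file name of the form L1_O3a.csv and underscore-separated class labels; these are placeholders for the release's actual conventions.

    # Sketch: fraction of triggers in a class classified with confidence
    # greater than p, as plotted in figure 2. Column names ('ml_label',
    # 'ml_confidence'), class labels and the file name are assumptions
    # about the data-release CSV format.
    import numpy as np
    import pandas as pd

    triggers = pd.read_csv("L1_O3a.csv")
    grid = np.linspace(0, 1, 101)

    for glitch_class in ("Paired_Doves", "Koi_Fish", "Scattered_Light"):
        conf = triggers.loc[triggers["ml_label"] == glitch_class,
                            "ml_confidence"].to_numpy()
        phi = np.array([(conf > p).mean() for p in grid])   # Phi(p) versus grid
        print(f"{glitch_class}: fraction with p > 0.9 = {(conf > 0.9).mean():.3f}")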

Figure 2.

Figure 2. The cumulative distribution of O3 triggers across all classes as a function of classification confidence. The horizontal axis is the confidence p, while the vertical axis $\Phi(p)$ is the fraction of glitches identified with confidence greater than p. Three glitch classes are highlighted as examples: Paired Doves (an uncommon class, with few training examples [39, 40]), Koi Fish (a more common class, which can be confused with Blips when quiet, and Extremely Loud when loud [40, 42]), and Scattered Light (one of the most common glitch types for both detectors [42]). The numbers of triggers in each class with p > 0.9 and p > 0.95 are quoted in table 1.


Table 1. Number of Gravity Spy classifications in O3 LIGO Hanford and Livingston data. For each detector, the left column gives the total number of triggers with SNR > 7.5 classified, regardless of the confidence of the classification, while the middle and right columns give the number of classifications with confidence $p > 90\%$ (our fiducial threshold) and $p > 95\%$, respectively.

Gravity Spy class | Hanford: SNR > 7.5 | $p > 90\%$ | $p > 95\%$ | Livingston: SNR > 7.5 | $p > 90\%$ | $p > 95\%$
1080 Lines3447834942269141
1400 Ripples2538549763423841479
Air Compressor3431177629011314952
Blip743860205582555442643873
Blip Low Frequency404224672059215221561414003
Chirp418529128
Extremely Loud132351093810335899473046835
Fast Scattering224312861118741205521150782
Helix911592293716
Koi Fish11242844775361115370165800
Light Modulation1464529753191133
Low-frequency Burst212111941018756577138553448
Low-frequency Lines3955153611311374937512125
No Glitch7783524738741405067484773
Paired Doves26929124079277130
Power Line303164135198514411314
Repeating Blips184510789021142459350
Scattered Light633335711853701574004725843009
Scratchy643367311444287263
Tomte189213601242461443929937573
Wandering Line30105642820
Whistle623853715128862361505721
Violin Mode8844363661709300190

The number of glitches and the split between classes differ between the two observatories. Figure 3 shows the O3 distribution of glitches as a function of SNR for the most common classes (classes that have a ${\gt}1\%$ prevalence) in LIGO Hanford data, and figure 4 shows the same for LIGO Livingston.

Figure 3.

Figure 3. SNR distributions for LIGO Hanford glitches identified with a confidence $p \gt 90 \%$. Only results for classes with a prevalence greater than $1 \%$ in Hanford data are shown. The width of the distribution is normalized to be uniform across the different classes, and the classes are ordered in decreasing order of prevalence from left to right. Table 1 lists the numbers of triggers in each class for the full list of classes, and analogous distributions for Livingston data are shown in figure 4.

Figure 4.

Figure 4. SNR distributions for LIGO Livingston glitches identified with a confidence $p \gt 90 \%$. Only results for classes with a prevalence greater than $1 \%$ in Livingston data are shown. The width of the distribution is normalized to be uniform across the different classes, and the classes are ordered in decreasing order of prevalence from left to right. Table 1 lists the numbers of triggers in each class for the full list of classes, and analogous distributions for Hanford data are shown in figure 3.


During O3, the most common classes of glitches at Livingston were due to scattered light [82, 83, 85]: specifically, Scattered Light (also known as Slow Scattering) [56] and Fast Scattering (also known as Crown) [42]. Approximately $27\%$ of all the glitches in O3 at Livingston were classified as Fast Scattering by the Gravity Spy machine-learning analysis with a confidence of ${\gt}90\%$. Scattered Light made up about $23\%$ of glitches with a Gravity Spy confidence of ${\gt} 90\%$. The relative motion between optical surfaces in LIGO is strongly correlated with the presence of light scattering. The rate of Scattered Light glitches decreased during the second half of O3 (O3b) following the introduction of reaction-chain tracking in January 2020 [7], which reduced the relative motion between the test-mass mirror and its counterpart used in control of the suspension system [56].

Tomtes were another common glitch class for Livingston, making up approximately $19\%$ of all the glitches with a Gravity Spy confidence of ${\gt}90\%$. The origins of these are currently unknown, as no environmental or instrumental couplings have been determined. They commonly appear with a frequency of $40\,\mathrm{Hz}$, and often repeat over the course of a day [20].

At Hanford, Scattered Light, Low-frequency Bursts, and Extremely Loud glitches were the most common glitch classes. Reaction-chain tracking was also implemented at Hanford to help mitigate Scattered Light. Low-frequency Bursts were common during August 2019. Extremely Loud glitches are large disturbances to the detector and often cause large drops in the detector's astrophysical range (the distance out to which a source can typically be detected [86]). Scattered Light made up about $47\%$ of O3 glitches classified with ${\gt}90\%$ confidence at Hanford, while Extremely Loud and Low-frequency Bursts made up about $9\%$ and $16\%$, respectively.

Figure 5 shows the hourly rate of four glitch classes (Scattered Light, Fast Scattering, Low-frequency Burst and Tomte) across the weeks of the O3 run for both Hanford and Livingston [5, 7]. The rates are calculated per unit observing time, using glitches classified with ${\gt} 90\%$ confidence. This shows the large increase in Scattered Light glitches in the second part of the observing run and the subsequent reduction after the introduction of reaction-chain tracking [7, 20, 56].
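
The rates in figures 5 and 6 can be reproduced schematically from the data release: count glitches of a class per day, divide by the observing time accumulated that day, and then either take a rolling median (figure 5) or average over each day of the week (figure 6). In the sketch below the column name event_time (GPS seconds), the file name, the class label and the per-day observing times are placeholders; real observing times must be taken from the detector segment lists.

    # Sketch: hourly glitch rate per unit observing time, as in figures 5
    # and 6. Column/file names and the per-day observing hours are
    # placeholders; real observing times come from the detector segment
    # lists (not shown here).
    import pandas as pd
    from gwpy.time import from_gps

    triggers = pd.read_csv("L1_O3.csv")
    sel = ((triggers["ml_label"] == "Scattered_Light")
           & (triggers["ml_confidence"] > 0.9))

    # Count glitches per calendar day
    times = pd.DatetimeIndex([from_gps(t) for t in triggers.loc[sel, "event_time"]])
    daily_counts = pd.Series(1, index=times).sort_index().resample("D").sum()

    # Placeholder: assume 20 observing hours per day; replace with segments
    observing_hours = pd.Series(20.0, index=daily_counts.index)
    daily_rate = daily_counts / observing_hours   # glitches per observing hour

    # Figure 5: rolling median over seven-day intervals
    rolling_median = daily_rate.rolling(7, center=True).median()

    # Figure 6: fold across the run and average over each day of the week
    weekday_rate = daily_rate.groupby(daily_rate.index.dayofweek).mean()
    print(rolling_median.tail())
    print(weekday_rate)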

Figure 5.

Figure 5. Hourly glitch rate (per unit observing time) for four glitch types (classified with confidence ${\gt}90\%$) at LIGO Hanford and LIGO Livingston during O3. The solid traces show the rolling median of the daily average glitch rate across seven-day intervals, while the dots show the glitch rate for each calendar week. The dashed vertical lines show the times when reaction-chain tracking was implemented [7, 56]. The month of October was used for commissioning, and its data are not shown here.


Figure 6 shows a different visualization of the variation in glitch prevalence with time: how the glitch rate (for the same classes shown in figure 5) changes with the day of the week 17. Fast Scattering shows a decline during the weekend at LIGO Livingston, as at these times there is less anthropogenic noise around the detectors. A similar difference is not visible at LIGO Hanford because of the much lower rate of Fast Scattering transients at Hanford (0.22 per hour) compared to Livingston (9.05 per hour) during O3: relatively higher ground motion in the anthropogenic band around Livingston makes Fast Scattering a much bigger problem there [7, 42]. In contrast to Fast Scattering, Tomte shows negligible variation, indicating a lack of correlation with human activities.

Figure 6.

Figure 6. Hourly glitch rate for each day of the week, folded across the entire O3 run. The rate is calculated as the number of glitches per unit observing time, and we plot the average for each day of the week. The month of October was used for commissioning, and its data are not shown here.


3.2. Data quality around candidates

The data set includes glitch classifications for data around the time of several gravitational-wave candidates. This happens when a glitch near the candidate is picked up by Omicron, when a gravitational-wave signal is loud enough to trigger Omicron itself, or when some combination of glitch and signal is identified. Here we review these Gravity Spy classifications, and illustrate both how Gravity Spy may identify glitches around candidates and how it may struggle in classifying a gravitational-wave signal.

Tables 2 and 3 provide details of example candidates from the first and second parts of O3 (O3a and O3b), respectively, with associated Gravity Spy classifications. This list was compiled by cross-referencing the times associated with public alerts and high-significance candidates from offline analyses (whether or not they are identified as instrumental in origin) [5–7, 87–92] with the Gravity Spy data set. For this analysis, a time window of $\pm 5\,\mathrm{s}$ around each candidate time was used to search for entries in the Gravity Spy data set. The majority of candidates did not have a corresponding entry in the data set classified by Gravity Spy.
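
The cross-referencing described above amounts to a time-window match between candidate times and Gravity Spy trigger times. The sketch below assumes a column event_time holding GPS seconds in the data-release CSV, and uses the approximate GPS time of GW200129_065458 as an example candidate; the column and file names are illustrative placeholders rather than the exact procedure used here.

    # Sketch: match candidate times to Gravity Spy triggers within +/- 5 s.
    # 'event_time' (GPS seconds) and the file name are assumptions about
    # the data-release format; the candidate GPS time is the approximate
    # time of GW200129_065458, used only as an example.
    import pandas as pd

    triggers = pd.read_csv("L1_O3b.csv")
    candidates = {"S200129m": 1264316116.4}   # superevent -> GPS time
    window = 5.0                              # seconds

    for name, gps in candidates.items():
        nearby = triggers[(triggers["event_time"] - gps).abs() <= window]
        for _, row in nearby.iterrows():
            print(name, row["event_time"], row["ml_label"], row["ml_confidence"])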

Table 2. Gravity Spy classifications coincident with confident, marginal and retracted O3a gravitational-wave candidates [5–7, 87–92]. Equivalent results for O3b are shown in table 3. The main Gravity Spy analysis uses data flagged by the Omicron pipeline as an input, and so only classifies a subset of candidates. Omicron may pick up the candidate, a nearby glitch, or some combination of the two. The first column gives the corresponding candidate identification used in the Gravitational-wave Candidate Event Database (as used for low-latency alerts); the second gives the Coordinated Universal Time of the Omicron trigger ($\pm 5\,\mathrm{s}$ from the time of the candidate); the third column gives the Gravity Spy classification with H and L indicating whether data from Hanford or Livingston, respectively, have been analysed; the fourth column gives details of the final status of the candidate (and citations).

Superevent | Time | Gravity Spy classification | Description
S190930ak | 2019-09-30 23:46:50 | H: Scattered Light | Instrumental origin [7]
 | 2019-09-30 23:46:53 | H: Scattered Light |
S190930s | 2019-09-30 13:35:37 | L: Low Frequency Lines | GW190930_133541 [5, 108]
S190928c | 2019-09-28 02:11:45 | L: Tomte | Retracted [5, 109]
S190924am | 2019-09-24 23:26:50 | L: Fast Scattering | Instrumental origin [87]
 | 2019-09-24 23:26:52 | L: Fast Scattering |
 | 2019-09-24 23:26:54 | L: Fast Scattering |
S190924h | 2019-09-24 02:18:42 | L: Tomte | GW190924_021846 [5, 110]
S190910s | 2019-09-10 11:28:07 | L: Chirp | GW190910_112807 [5]
S190904w | 2019-09-04 17:49:10 | L: Fast Scattering | Instrumental origin [90]
S190829u | 2019-08-29 21:05:56 | L: Koi Fish | Retracted [5, 111]
S190814bv | 2019-08-14 21:10:38 | L: Scattered Light | GW190814_211038 [5, 112, 113]
S190808ae | 2019-08-08 22:21:21 | H: Low Frequency Burst | Retracted [5, 114]
S190804q | 2019-08-04 08:35:43 | L: Koi Fish | Instrumental origin [7, 88]
S190803e | 2019-08-03 02:26:59 | H: Low Frequency Burst | GW190803_022701 [5]
S190728q | 2019-07-28 06:45:12 | L: No Glitch | GW190728_064510 [5, 115]
S190701ah | 2019-07-01 20:33:02 | L: Fast Scattering | GW190701_203306 [5, 116]
S190630ag | 2019-06-30 18:52:05 | L: Chirp | GW190630_185205 [5, 117]
S190524q | 2019-05-24 04:52:01 | L: No Glitch | Retracted [5, 118]
 | 2019-05-24 04:52:02 | L: No Glitch |
 | 2019-05-24 04:52:04 | L: No Glitch |
 | 2019-05-24 04:52:09 | L: No Glitch |
S190521r | 2019-05-21 07:43:59 | H: Blip, L: Chirp | GW190521_074359 [5, 119]
S190521g | 2019-05-21 03:02:29 | L: Blip Low Frequency | GW190521 [5, 120, 121]
S190519bj | 2019-05-19 15:35:44 | L: Blip | GW190519_153544 [5, 122]
S190512at | 2019-05-12 18:07:18 | L: Tomte | GW190512_180714 [5, 123]
S190430af | 2019-04-30 00:49:32 | H: Koi Fish | Instrumental origin [88]
S190421ar | 2019-04-21 21:38:53 | L: Power Line | GW190421_213856 [5, 124]
S190413ac | 2019-04-13 13:43:10 | L: Fast Scattering | GW190413_134308 [5]
S190412m | 2019-04-12 05:30:44 | L: Chirp | GW190412 [5, 125, 126]
S190408an | 2019-04-08 18:18:06 | H: Low Frequency Burst | GW190408_181802 [5, 127]

Table 3. Gravity Spy classifications coincident with confident, marginal and retracted O3b gravitational-wave candidates [7, 87–90, 92]. This is equivalent to table 2 but for O3b. The first column gives the corresponding candidate identification used in the Gravitational-wave Candidate Event Database; the second gives the Coordinated Universal Time of the Omicron trigger ($\pm 5\,\mathrm{s}$ from the time of the candidate); the third column gives the Gravity Spy classification with H and L indicating whether data from Hanford or Livingston, respectively, have been analysed; the fourth column gives details of the final status of the candidate (and citations).

Superevent | Time | Gravity Spy classification | Description
S200311bg | 2020-03-11 11:58:53 | L: Blip | GW200311_115853 [7, 93]
S200224ca | 2020-02-24 22:22:34 | H: Blip, L: Chirp | GW200224_222234 [7, 94]
S200214br | 2020-02-14 22:45:26 | L: Fast Scattering | Instrumental origin [7, 87]
S200129m | 2020-01-29 06:55:00 | L: Fast Scattering | GW200129_065458 [7, 95]
 | 2020-01-29 06:54:58 | H + L: Chirp |
S200121aa | 2020-01-21 03:17:48 | H: Blip | Instrumental origin [7]
S200116ah | 2020-01-16 11:56:12 | L: Tomte | Retracted [96]
S200114f | 2020-01-14 02:08:18 | L: Tomte | Instrumental origin [87, 88, 97]
S200112r | 2020-01-12 15:58:38 | L: Chirp | GW200112_155838 [7, 98]
S200108v | 2020-01-08 10:00:38 | L: Extremely Loud | Retracted [99]
S200106av | 2020-01-06 18:34:32 | H + L: Scattered Light | Retracted [7, 100]
S191225aq | 2019-12-25 21:57:15 | L: Tomte | Retracted [87, 101]
S191223an | 2019-12-23 01:41:59 | L: Tomte | Instrumental origin [87]
S191213g | 2019-12-13 04:34:08 | L: Scattered Light | Unretracted, low significance [7, 102]
S191212q | 2019-12-12 08:27:25 | H: Scattered Light | Retracted [103]
 | 2019-12-12 08:27:28 | H: Scattered Light |
S191127p | 2019-11-27 05:02:28 | H: Scattered Light | GW191127_050227 [7]
 | 2019-11-27 05:02:24 | H: Scattered Light |
S191120aj | 2019-11-20 16:23:24 | L: Air Compressor | Retracted [104]
S191117j | 2019-11-17 06:08:22 | L: Extremely Loud | Retracted [105]
S191113q | 2019-11-13 07:17:53 | L: No Glitch | GW191113_071753 [7]
 | 2019-11-13 07:17:48 | L: No Glitch |
S191110x | 2019-11-10 18:08:42 | L: Koi Fish | Retracted [106]
S191109d | 2019-11-09 01:07:17 | H: Scattered Light, L: Blip | GW191109_010717 [7, 107]
 | 2019-11-09 01:07:15 | H: Scattered Light |
 | 2019-11-09 01:07:13 | L: Scattered Light |
 | 2019-11-09 01:07:12 | H: Scattered Light |
S191103a | 2019-11-03 01:25:52 | L: Tomte | GW191103_012549 [7]

First, we consider the set of classifications around gravitational-wave candidates without an identified instrumental origin:

  • From Livingston, there are 14 O3a candidates that have at least one trigger identified by Gravity Spy, and 7 O3b candidates. Three of the O3b events had two Livingston triggers during the time of the candidate. The most common classification found was Chirp; Fast Scattering, Blip and Tomte were other common classifications.
  • At Hanford, only seven candidates from O3 are part of the Gravity Spy data set. One of these candidates has three associated Hanford glitches, and another has two. The most common class to occur at times associated with these candidates was Scattered Light.
  • There were four candidates in which a glitch was found at both detectors: GW190521_074359, GW191109_010717, GW200129_065458 and GW200224_222234. GW190521_074359, GW200129_065458 and GW200224_222234 are amongst the highest SNR candidates from O3 [5, 7]. GW190521_074359 [5] and GW200224_222234 [7] both have a Blip glitch identified at Hanford, and a Chirp at Livingston; while GW200129_065458 has a Chirp at both, in addition to a Fast Scattering glitch at Livingston [7]. For GW191109_010717 there are Scattered Light glitches at both detectors, plus a Blip at Livingston [7].

The distribution of Gravity Spy classifications is shown in figure 7.

Figure 7.

Figure 7. Gravity Spy classifications around O3 gravitational-wave candidates at LIGO Hanford and Livingston. For each candidate, a window of $\pm 5\,\mathrm{s}$ was used to identify entries in the Gravity Spy data set. The machine-learning algorithm may be attempting to classify a gravitational-wave signal, a nearby glitch, or some combination of the two; it has not been trained to identify the full diversity of astrophysical gravitational-wave signals, nor how to classify data containing both a signal and a glitch.


The Chirp class was originally created for hardware injections (simulated signals used for testing) representing compact binary coalescences [128], and hence might be expected to capture many of these candidates, as is the case. However, a chirp-like time–frequency morphology is only visible for the highest-SNR signals; as Livingston is the more sensitive detector, there are more high-SNR signals in its data. Tomte and Blip share a similar morphology to Chirps, and so may be confused for lower-SNR signals. Figure 8 illustrates an example (GW190521_074359 [5]) where the higher-SNR Livingston signal is classified as a Chirp, while the lower-SNR Hanford signal is (mis)classified as a Blip.

Figure 8.

Figure 8. Gravitational-wave candidate GW190521_074359 [5]. At Livingston, this trigger was classified as a Chirp, and at Hanford it was classified as a Blip. The SNR of the signal is higher in Livingston, which is why the chirp-like structure is easier to identify.


When a candidate is present at the same time as a glitch, it may be that the glitch is picked up by the classification algorithm. Data-quality checks [129] indicated that data mitigation was needed for many candidates from O3 where there was excess noise overlapping the gravitational-wave signal. GW190413_134308, GW190701_203306, GW190814 and GW200129_065458 all required data mitigation for Livingston data, while GW191109_010717 and GW191127_050227 required data mitigation for Hanford data [5, 7]. These all correspond to cases where there is a Gravity Spy classification of a glitch outside of the Chirp–Blip–Tomte family in the relevant detector. However, there is not a perfect correlation between instances where data mitigation was required and Gravity Spy glitch classifications: there are both candidates where mitigation was required but there is no entry in the Gravity Spy data set, and candidates where there is a Gravity Spy glitch classification but no data mitigation was required. The former could happen if the excess noise was below the Omicron trigger threshold, but still identified by the careful data-quality checks performed to evaluate candidates. The latter could happen if the noise is at a frequency that does not impact signal analysis (e.g. ${\lesssim}20\,\mathrm{Hz}$), or if the CNN is confused by the combination of signal plus noise, and makes a misclassification. The Gravity Spy training set does not currently include examples of signals plus glitches.

To summarise, Gravity Spy is not a detection algorithm, but a noise-classification algorithm. As such, it is not intended to discriminate between gravitational-wave signals and glitches. Most gravitational-wave signals are comparatively low in SNR, making them more difficult for Gravity Spy to pick up. Even when analysed by Gravity Spy, gravitational-wave signals will not all currently be put into the Chirp class. Consequently, the glitch classifications are contaminated (at a low rate) by gravitational-wave signals.

Along with analysing the O3 gravitational-wave candidates, we also looked at other candidates that were determined to be false alarms. For these events, the most common glitch type seen at Hanford was Scattered Light. At Livingston, there was more variety, with classifications including Tomte, Koi Fish, Extremely Loud, Fast Scattering and No Glitch.

Of the candidates with an instrumental origin, the triggers classified as No Glitch are of particular interest: for the retracted candidate S190524q, there were four triggers classified as No Glitch. Figure 9 shows data around S190524q [5, 118], and despite the No Glitch classification, there is excess power visible. These triggers appear like a high-frequency analogue of Fast Scattering, which does not match any existing Gravity Spy class. This highlights how the existing set of classes does not catch the full diversity of noise in the detector, and that further refinements of the CNN are needed to properly classify new types of glitches.

Figure 9.

Figure 9. Example of a Livingston trigger classified as No Glitch from a time corresponding to the retracted candidate S190524q [5, 118]. Despite being labelled as No Glitch, the time–frequency morphology resembles a high-frequency Fast Scattering glitch. This trigger was classified by the Gravity Spy CNN with a confidence of $94\%$.


3.3. Data release

The data release of Gravity Spy machine-learning classifications is available from Zenodo [46]. This consists of comma-separated value (CSV) files for each detector and observing run (O1, O2, O3a and O3b). The CSV files consist of columns describing: (a) metadata output from the Omicron pipeline [26, 27] such as the time of the trigger, trigger peak frequency, bandwidth and amplitude, as well as the data analysed (the main gravitational-wave strain channel); (b) the unique Gravity Spy identifier of the glitch; (c) the machine-learning confidence for each of the original 22 glitch categories; (d) the machine-learning classification and the confidence of this; and (e) links to Omega scans hosted by Zooniverse. Times are given as global positioning system times, and can be used to identify the relevant data from the Gravitational Wave Open Science Center (GWOSC) [71] 18. Examples of how to use the data release are given in a Python notebook accompanying the release.
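
As an example of working with the release, the sketch below reads one of the CSV files and reproduces per-class counts above the SNR and confidence thresholds used in table 1. The file name and the column names snr, ml_label and ml_confidence are assumptions about the release format; the accompanying notebook documents the actual headers.

    # Sketch: per-class counts above SNR and confidence thresholds, as in
    # table 1. File and column names ('snr', 'ml_label', 'ml_confidence')
    # are assumptions about the data-release format.
    import pandas as pd

    triggers = pd.read_csv("H1_O3a.csv")

    loud = triggers[triggers["snr"] > 7.5]
    confident = loud[loud["ml_confidence"] > 0.9]

    print("Triggers with SNR > 7.5:", len(loud))
    print(confident["ml_label"].value_counts())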

4. Discussion

The LIGO detectors in Livingston, Louisiana and Hanford, Washington nominally share an identical design [1], and thus we might not expect their performance to differ much from each other. However, due to differences in their commissioning progress [74, 77, 78], and in their surrounding environments, the two observatories do differ in practice [4, 5, 7, 20, 76]. For example, due to the presence of extra low-frequency noise at Hanford during O3, its sensitivity is about a factor of 2 lower in the frequency band 20–$60\,\mathrm{Hz}$, as compared to Livingston [78]. Additionally, the amount of ground motion in the anthropogenic (1–$6\,\mathrm{Hz}$) and microseism (0.1–$0.5\,\mathrm{Hz}$) bands is usually larger near Livingston than near Hanford. Consequently, there can be considerable difference in the amount and nature of transient noise between the two detectors: during O3b, the rate of Omicron transients with SNR above 10 at Livingston was about 1.7 times higher than at Hanford.

We see a difference in the number and distribution of glitches across the different Gravity Spy classes (e.g. table 1). For example, during O3, the glitch classes Tomte and Fast Scattering were more common in Livingston, and this increased prevalence boosted the overall glitch rate [20, 42, 130]. Examples of these two glitch classes, and a comparison of their prevalence during O3, are shown in figure 10.

Figure 10.

Figure 10. Time–frequency morphology of the glitch categories Tomte and Fast Scattering, shown in the top panels. Both of these classes were more common at Livingston than at Hanford during O3, as shown in the bottom plot.


Fast Scattering was first noticed as a significant source of noise during the engineering runs preceding O3 [42, 131]. The prevalence of Fast Scattering was a primary motivation for updating the Gravity Spy model to include new classes for the analysis of O3 data [42]. Nearly all Fast Scattering during O3 is below ${\sim}60\,\mathrm{Hz}$. This transient noise is linked to an increase in ground motion in the anthropogenic and microseism bands near the detector [132, 133]. These two bands are usually noisier at Livingston than at Hanford, and this (combined with the differences in the detectors' low-frequency sensitivity) meant that Fast Scattering was more common at Livingston (9.05 per hour) than at Hanford (0.22 per hour) [20, 134].

Unlike Fast Scattering, Tomte glitches have no identified environmental or instrumental coupling that can explain their origin. There are ongoing detector-characterisation efforts to understand how this glitch may couple into the detector [130]. While we do not know the origins of Tomte glitches, we do observe a difference in their prevalence at the two observatories: during O3, the rate of Tomte glitches at Livingston was 6.44 per hour, while at Hanford the rate was 0.23 per hour. Tomte glitches have most of their power below ${\sim}64\,\mathrm{Hz}$. The difference in the low-frequency sensitivity between the two detectors may be partially responsible for the difference in the rates during O3. Further study of when Tomte glitches occur, and the differences between Livingston and Hanford, may reveal the origins of these glitches.

A successful example of detector characterisation during O3 was the identification of the source of Scattered Light (Slow Scattering) glitches, and its subsequent mitigation [56]. Scattered Light glitches have a significant impact on data quality because they occupy a large region of time–frequency parameter space. As shown in figure 1, Scattered Light transients appear as long-duration arches in spectrograms. These arches are characteristic of noise caused by light scattering. While the frequency gives some information on the motion of the component scattering the light, it is still difficult to identify the troublesome light path in the detectors. The Gravity Spy analysis played a significant role in understanding the source of Scattered Light: the occurrence of glitches classified as Scattered Light was found to correlate with motion of the quad suspension [20, 56], which is captured by the optical shadow sensors and magnetic actuators [135, 136], indicating that the source of light scattering was part of the suspension system. The motion was subsequently reduced by employing reaction-chain tracking, which resulted in a considerable reduction in the rate of Scattered Light for the same degree of ground motion near the observatories [56]. The resulting drop in the glitch rate is visible in figure 5. This decline in the glitch rate of Scattered Light is sharper at Hanford than at Livingston due to comparatively higher ground motion near Livingston in the microseism band during February 2020 [7, 20].

The fourth observing run (O4) will see the use of new and improved technologies [137]. Among them are frequency-dependent squeezing, new Faraday isolators, new test mass mirrors at Livingston, and higher laser power. These improvements will translate to a higher instrument sensitivity, thereby increasing our astrophysical reach for detecting gravitational-wave signals. However, a more sensitive detector is not just more sensitive to gravitational waves; it is also more sensitive to environmental and instrumental noise artifacts. Compared to O2, the rate of glitches during O3a was four times higher at Livingston [5]. As in O3, it is possible that in O4 we will witness one or more new types of noise transients, and that these will appear at only one of the detectors. This could require using a site-specific Gravity Spy training set and CNN model to properly characterise O4 data quality. The current plan for O4 is to sample the transients for any new glitch morphologies during the engineering run preceding O4, and retrain Gravity Spy before observing starts.

5. Summary

Understanding data quality is a key aspect of gravitational-wave detector characterisation. The Gravity Spy machine-learning algorithm enables automated classification of segments of LIGO data suspected to contain transient noise. Gravity Spy is routinely used in studies of data quality [20], has been integral in the identification of new classes of glitches [42], and has aided in the identification of the sources of glitches [56]. Here we describe the data release of classifications for O1, O2 and O3. Using CNN models trained for O1–O2 [39, 40] and for O3 [42], we have analysed Advanced LIGO data from these first three observing runs; the results are publicly available from Zenodo [46]. These can be used for a range of studies, from investigating environmental and instrumental origins of glitches, to developing new data-analysis pipelines; we have used the Gravity Spy classifications to illustrate some of the properties of data quality in O3 (as well as highlighting some limitations of the data set).

This release covers data from O1–O3. O4 (and subsequent observing runs) [74] will follow improvements to the detector that may lead to the appearance of new glitch classes (and possibly the elimination of current glitch classes). Therefore, the Gravity Spy machine-learning model may need to be updated to account for these changes. To aid detector-characterisation experts in identifying new glitch classes and building a training set of example glitches, we will draw upon the Zooniverse volunteers along with machine-learning clustering approaches. Gravity Spy volunteers have previously rapidly identified new classes based upon their time–frequency morphologies [42], and for O4 we will support their investigations into the causes of glitches by providing them with additional auxiliary channel data. Following the update of glitch classes, we anticipate that the classifications provided by the Gravity Spy project will enable further studies of LIGO data quality and improvements to data-analysis pipelines.

Acknowledgments

We thank the citizen-science volunteers of Gravity Spy who have contributed to the classifications of LIGO data. We are grateful to Marissa Walker and the anonymous referees for comments on the manuscript. Gravity Spy is partly supported by the National Science Foundation (NSF) Award INSPIRE 1547880 and partially by Award IIS-2107334. This work is supported by the NSF under Grant PHY-1912648. J G is supported by NSF Grant PHY-2110509. S B acknowledges support by NSF Grants PHY-1912648 and IIS-2107334. S S acknowledges support of the NSF Grant PHY-1764464 to the LIGO Laboratory. M Z is supported by NASA through the NASA Hubble Fellowship Grant HST-HF2-51474.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. CPLB acknowledges support from the CIERA Board of Visitors Research Professorship, and Science and Technology Facilities Council (STFC) Grant ST/V005634/1. O P is supported by NSF Grant PHY-1559694. V K was partially supported through a CIFAR Senior Fellowship, NSF Grant PHY-1912648, and by Northwestern University. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This research has made use of data and software obtained from GWOSC (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. The authors gratefully acknowledge the support of the United States NSF for the construction and operation of the LIGO Laboratory and Advanced LIGO as well as STFC of the United Kingdom, and the Max-Planck-Society for support of the construction of Advanced LIGO. Additional support for Advanced LIGO was provided by the Australian Research Council. Advanced LIGO was built under Award PHY-0823459. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under Cooperative Agreement PHY-1764464. This work used computing resources at CIERA funded by NSF Grant PHY-1726951, and the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This document has been assigned LIGO document number LIGO-P2200238. The data that support the findings of this study are openly available from Zenodo [46].

Data availability statement

The data that support the findings of this study are openly available at the following URL/DOI: https://doi.org/10.5281/zenodo.5649211 [46].

Appendix: Glitch classes

The Gravity Spy project classifies images into a range of classes. For LIGO data from O1 and O2, 22 classes are used in the CNN model [39, 40], and for data from O3, 23 classes (the older classes except None of the Above, plus Fast Scattering and Blip Low Frequency) are used [42]. In alphabetical order, the set of classes is:

  • (a)  
    1080 Lines: These appear as short-duration dots repeating every ${\sim}0.1\,\mathrm{s}$ at ${\sim}1080\,\mathrm{Hz}$. They are also accompanied by noise below $64\,\mathrm{Hz}$. These glitches were prevalent in Hanford data early in O2, but were reduced following improvements in the output mode cleaner [138].
  • (b)  
    1400 Ripples: These glitches appear as short (${\lesssim}0.05\,\mathrm{s}$) wavy lines at ${\sim}1400\,\mathrm{Hz}$.
  • (c)  
    Air Compressor: This class appears as a thick, flat line at ${\sim}50\,\mathrm{Hz}$. At Hanford, these were found to be related to air compressor motors at the end stations [139], and were reduced following the replacement of the vibration isolators.
  • (d)  
    Blip: Blip glitches are broadband with very short (${\sim}0.04\,\mathrm{s}$) duration. Due to their teardrop morphology, Blips can adversely influence the search for high-mass binary black hole signals. Despite being a common glitch class, the cause of Blips is currently unknown [19].
  • (e)  
    Blip Low Frequency: Otherwise known as Low-frequency Blips, these glitches have a similar morphology to Blip glitches, except they occur at lower frequencies with peak frequencies ${\sim}10$$50\,\mathrm{Hz}$ [42]. This is a new glitch class added for O3.
  • (f)  
    Chirp: The characteristic sweep from low to high frequencies of a coalescing compact-object binary. The class originally contained examples of simulated signals created by hardware injections [128]. The Chirp training set was created early in the era of gravitational-wave astronomy to accommodate hardware injections, and is not representative of our current understanding of the population of coalescing binaries [7, 140].
  • (g)  
    Extremely Loud: These broadband transients are characterised by very high SNR, often leading to the spectrograms appearing saturated. These correspond to large disturbances to the detectors, and may often be accompanied by a drop in the astrophysical range of the detector. High-SNR glitches from other classes (e.g. Koi Fish) may be classified as Extremely Loud.
  • (h)  
    Fast Scattering: Otherwise known as Crown, these glitches appear as short-duration ($\sim0.2$$0.3\,\mathrm{s}$) arches [42]. These arches often appear in groups, each separated by either $0.25\,\mathrm{s}$ or $0.5\,\mathrm{s}$. They are correlated with ground motion in the anthropogenic (1–$6\,\mathrm{Hz}$) band, which is usually caused by bad weather or human activity. This is a new glitch class added for O3, and they were the most common glitch in Livingston data.
  • (i)  
    Helix: These are broadband glitches, usually in the frequency region 16–$512~\mathrm{Hz}$, often occurring in groups of two or three glitches separated from each other by $\sim0.1~\mathrm{s}$. They may be related to glitches in the auxiliary lasers used to calibrate the detectors [139].
  • (j)  
    Koi fish: These are high-SNR, broadband glitches. They typically occupy the frequency band ${\sim}20$–$1000\,\mathrm{Hz}$, and can resemble Blips, but with pectoral fins at ${\sim}30\,\mathrm{Hz}$.
  • (k)  
    Light modulation: These transients are usually high SNR, with most of the noise content at 16–$128\,\mathrm{Hz}$, but there may also be one or more broadband spikes. They are caused by amplitude fluctuations in the control signal of the optical sidebands used to regulate the length and alignment of optical cavities [17].
  • (l)  
    Low-frequency burst: These are usually short-duration (${\sim}0.25\,\mathrm{s}$) transients at ${\sim}10$–$20\,\mathrm{Hz}$, often appearing as a hump at the bottom of the spectrogram. They were common in Livingston data during O1 and in Hanford data during O3a.
  • (m)  
    Low-frequency lines: These appear mostly as flat lines, extending ${\sim}1.5$$2\,\mathrm{s}$ in time and usually below ${\sim}20\,\mathrm{Hz}$.
  • (n)  
    No glitch: This category is used for Omicron triggers where there is no visible excess power in the Gravity Spy spectrogram. These are usually low-SNR Omicron triggers, but can include short-duration, high-frequency (${\gtrsim}2000\,\mathrm{Hz}$) transients that are difficult to resolve because of the logarithmic frequency scale used for the spectrograms.
  • (o)  
    None of the above: This category is a catch-all for glitches that do not fit into the other categories. Accordingly, there is no typical morphology. This class is primarily useful when Zooniverse volunteers are classifying images. This class was not used for the final CNN classification of O3 data.
  • (p)  
    Paired doves: These appear as a pair of short duration transients, alternating between increasing and decreasing in frequency, with a separation of ${\sim}0.1\,\mathrm{s}$. These glitches are potentially related to periods of excess motion of the beamsplitter [141].
  • (q)  
    Power line: These glitches appear as narrow, flat lines, usually ${\sim}0.2$–$0.5\,\mathrm{s}$ in duration, close to $60\,\mathrm{Hz}$ (or its harmonics). This frequency corresponds to the electric power-grid frequency in the United States, and glitches can be caused by a range of equipment that runs off this power supply [142, 143].
  • (r)  
    Repeating blips: This class consists of multiple Blip-like glitches, often repeating with a cadence of ${\sim}0.25$$0.50\,\mathrm{s}$.
  • (s)  
    Scattered Light: Otherwise known as Slow Scattering (to distinguish from Fast Scattering), these appear as long-duration (${\sim}2.0$–$2.5\,\mathrm{s}$) arches in the spectrograms. They are correlated with ground motion in the earthquake (0.03–$0.1\,\mathrm{Hz}$) or microseism (0.1–$0.5\,\mathrm{Hz}$) frequency bands. In O3, it was found that Scattered Light was caused by the relative motion between the optical suspension system's end test-mass chain and the reaction-mass chain [56].
  • (t)  
    Scratchy: Sometimes known as Blue Mountains, these appear as a series of sharp peaks at intermediate frequencies ${\sim}60$$250\,\mathrm{Hz}$. There may be ${\sim}10$–30 peaks per second. They are related to light scattering from the Swiss cheese baffles [144, 145].
  • (u)  
    Tomte: These are short-duration glitches with a characteristic triangular shape. They are similar to Blip or Blip Low-frequency glitches, and typically occupy the frequency band ${\sim}16$$150\,\mathrm{Hz}$. They can adversely influence the search for high-mass binary black hole signals.
  • (v)  
    Violin Mode: These appear as disturbances at ${\sim}500\,\mathrm{Hz}$ and harmonics. These frequencies correspond to the resonances of the glass fibres that are used to suspend the mirrors.
  • (w)  
    Wandering Line: These long-duration transients have an undulating line morphology. They can cover a wide range of frequencies, with multiple lines appearing at once at different frequencies, but are usually above ${\sim}256\,\mathrm{Hz}$.
  • (x)  
    Whistle: Also known as Radio Frequency Beat Notes, these appear as U-, V- or W-shaped transients, typically above ${\sim}128\,\mathrm{Hz}$ with most of the noise content above ${\sim}500\,\mathrm{Hz}$. They are caused when radio-frequency signals beat with the voltage-controlled oscillators [146].

Examples for the 23 classes used for O3 classification are shown in figure A1.

Figure A1.

Figure A1. Time–frequency morphology for examples of the Gravity Spy classes in O3. The classes are grouped by the time duration ($0.5\,\mathrm{s}$, $1\,\mathrm{s}$, $2\,\mathrm{s}$ or $4\,\mathrm{s}$) that best illustrates their features. First row: Tomte, Blip, Blip Low Frequency and Low-frequency Burst ($0.5~\mathrm{s}$). Second row: Violin Mode, Power Line, Light Modulation and Scratchy ($0.5\,\mathrm{s}$). Third row: Chirp, Air Compressor, Koi Fish and 1400 Ripples ($0.5\,\mathrm{s}$). Fourth row: No Glitch, Whistle, Fast Scattering and Repeating Blips ($1\,\mathrm{s}$). Fifth row: Wandering Line, Scattered Light, Helix ($1~\mathrm{s}$) and Extremely Loud ($2\,\mathrm{s}$). Sixth row: Low-frequency Lines, 1080 Lines and Paired Doves ($4\,\mathrm{s}$). The Blip Low Frequency and Fast Scattering classes are not used for O1 and O2, but the O1 and O2 results do include an additional None of the Above class.


In addition to the classes used in the CNN, there are further LIGO glitch classes that were proposed by Zooniverse volunteers during O3 but have not yet been incorporated into the machine-learning framework:

  • (a)  
    70 Hz Line: These appear as lines similar to Air Compressor or Power Line glitches, but centred at ${\sim}70\,\mathrm{Hz}$.
  • (b)  
    High-frequency Burst: These appear as very short-duration transients at frequencies ${\gtrsim}1000\,\mathrm{Hz}$.
  • (c)  
    Pizzicato: These appear as a short (${\sim}0.05\,\mathrm{s}$) transient that resembles a flying saucer centred around ${\sim}500\,\mathrm{Hz}$, ${\sim}1000\,\mathrm{Hz}$, or both. The frequencies correspond to violin modes of the suspension fibres, and the glitch may be related to violin-mode damping mechanisms, but the exact cause is yet to be identified.

These, and further classes, may be added to the CNN for future studies.

Footnotes

  • 14 

    Gravity Spy Zooniverse project gravityspy.org.

  • 15 

    The European Gravitational Observatory run a similar project dedicated to understanding glitches in Virgo data: GWitchHunters [43] www.zooniverse.org/projects/reinforce/gwitchhunters.

  • 16 

    None of the Above remains an option for Zooniverse volunteers. We anticipate that reinstating the None of the Above class may be useful for identifying new classes in preliminary analysis of future observing runs. Prior to the introduction of the Fast Scattering class, there were a large number of None of the Above classifications for O3 data with the characteristic Fast Scattering morphology [42].

  • 17 

    Plotting the number of glitches (the glitch rate multiplied by the detector duty cycle) instead of the glitch rate would show a significant drop on Tuesdays, as this corresponds to the day of routine maintenance.

  • 18 

    Gravitational Wave Open Science Center gw-openscience.org.