An Information Retrieval and Recommendation System for Astronomical Observatories


Published 2018 March 15 © 2018. The American Astronomical Society. All rights reserved.
Citation: Nikhil Mukund et al 2018 ApJS 235 22. DOI: 10.3847/1538-4365/aaadb2


Abstract

We present a machine-learning-based information retrieval system for astronomical observatories that addresses user-defined queries related to an instrument. In the modern instrumentation scenario, where heterogeneous systems and talents are simultaneously at work, the ability to supply people with the right information speeds up tasks related to detector operation, maintenance, and upgrades. The proposed method analyzes existing documented efforts at the site to intelligently group information related to a query and to present it online to the user. The user in turn can probe the suggested content and explore previously developed solutions or probable ways to address the present situation optimally. We demonstrate natural-language-processing-backed knowledge rediscovery using the publicly available logbook data from the Laser Interferometer Gravitational-Wave Observatory (LIGO). We implement and test a web application that incorporates the above ideas for the LIGO Livingston, LIGO Hanford, and Virgo observatories.


1. Introduction

Data mining in the big data framework often encounters difficulty both in extracting the relevant information from the data and in arriving at meaningful interpretations in a highly reliable fashion (Fan et al. 2014; Khan et al. 2014; Wu et al. 2014). In many situations, data come in a format that is not suitable for storage in relational databases of coherent hierarchy (Stephens et al. 2015). The manner in which data are stored and associated with different entities also poses a challenge in mining the required information. For example, in a gravitational wave observatory, there is a core science data set with plenty of metadata on the observations and a variety of other auxiliary data sets collected from various sensors and actuators. The metadata include information in the form of technical documents, images, and measurements.

Even though data mining methods like association analysis, clustering, and other machine-learning techniques exist, feeding unstructured data to these algorithms and generating inferences from them is not a trivial task (Han et al. 2011). The generation of insights from big data with recommendation systems that learn from unstructured text data (Pazzani & Billsus 2007) tackles these challenges (LaValle et al. 2011; Hu et al. 2014). Descriptive recommendations and information retrieval (Sigurbjörnsson & Van Zwol 2008; Gretzel et al. 2004) have recently gained popularity and have been applied to areas like travel recommendation systems (Gretzel et al. 2004) and content personalization systems (Liang et al. 2006). Besides commercial applications, text-summarization-based content recommendation (Hassan et al. 2009) is an exciting area with a high level of applicability to different areas of science and research (Miner et al. 2012; Kerzendorf 2017). Unlike conventional rank-based search systems, these do not perform topical modeling to rank topics of recurring interest (Zoghbi et al. 2013). Topical modeling is usually done for retrieving information from a single website with multiple topics; the challenge arises when different topics on a single site are only weakly linked to each other (Cointet & Roth 2010). While some relations among different entities may already be known, data mining and better data representation can reveal latent, unanticipated linkages among different topical entities (Behrens & Bassu 2006).

Large science projects, especially astronomical observatories, generate plenty of data about telescope operations, scheduling, maintenance, and general observational activities, all logged in text form. Over the years, these logbook entries accumulate, covering almost all aspects of the instruments in the observatory. Although the key technologies change rapidly, the fundamental principles involved in construction and maintenance at these facilities change at a much slower rate. This necessitates keeping a record of the activities carried out over the years for prompt diagnostics. Projects like SKA, TMT, LIGO, SALT, and JWST also require extensive internal coordination. These are typically collaborations of hundreds to thousands of scientists whose research spans areas like instrument fabrication, installation, commissioning, characterization, maintenance, upgrades, data analysis, and parameter estimation. Their time spans often stretch across a few decades, and they thus generate information whose volume and complexity cannot be handled effectively by traditional search-engine-backed information processing tools. On the positive side, analysis of such large data volumes can yield powerful insights into the inherent trends and fluctuations within the project concerned.

In this paper, we make use of the publicly available logbook data from the Laser Interferometer Gravitational-Wave Observatory (LIGO; The LIGO Scientific Collaboration 2015) to demonstrate the resourcefulness of knowledge rediscovery backed by natural language processing (NLP; Ricci et al. 2011). Additionally, such efforts efficiently disseminate technical knowledge to a wider audience and will help the ongoing efforts to build upcoming detectors, such as LIGO-India, by helping to foresee possible challenges during the design phase. This is a novel approach in observational astronomy, and the developed software is made available to the public through a web application named Hey LIGO.6 We also show the application of descriptive content-based recommendations to compare common issues among multiple observatories. These methods are scalable and will be very useful for upcoming projects like the Square Kilometre Array (SKA) and the LIGO-India detector.

The paper is organized as follows. In Section 2 we describe the methodology adopted to convert raw data into useful and representable information. Section 3 provides the details of the data used in our analysis. The features of the recommendation system are outlined in Section 4. Finally, in Section 5 we apply the scheme to various gravitational wave observatories around the world and discuss the results obtained in Section 6.

2. Contextual Learning of Unstructured Data

Structured data are highly organized and usually reside in a relational database schema. Unstructured data refer to information that does not follow a traditional database schema; e-mails, web pages, business documents, and FAQs are typical examples. Such data include free text and even multimedia content, which makes processing them a tedious task.

This section briefly describes the manner in which unstructured textual data are acquired, processed, and finally given a structure. It also enumerates the steps involved in developing a machine-learning model that contextually differentiates between the available textual data points. Finally, the model is used to cluster all of the textual data, thus adding structure for ease of access. Figure 1 shows a schematic representation of the web interface used to implement the scheme.

Figure 1. Schematic depicting the information retrieval and recommendation system.

The unstructured data set that we use is in the form of textual web pages. These pages have an identical HTML structure, with certain attributes defined for every data point. Owing to this common structure and the public accessibility of the web pages, it is possible to write a script that extracts each attribute from the HTML source code and organizes the complete information into a dataframe: a tabular structure with columns as attributes and rows as individual data points. The first part of our algorithm performs data acquisition using the Python package Beautiful Soup (Richardson 2017) to retrieve information from the web pages by searching through new posts and related data, and saves them into relevant files for later use.
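For illustration, the acquisition step can be sketched as follows; the URL, HTML tag names, and CSS classes used here are hypothetical placeholders rather than the actual logbook markup.

    # Minimal sketch of the data-acquisition step (placeholder URL and CSS classes).
    import requests
    import pandas as pd
    from bs4 import BeautifulSoup

    def scrape_logbook_page(url):
        """Fetch one logbook page and return its entries as a list of dictionaries."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        entries = []
        for post in soup.find_all("div", class_="report"):   # class name is an assumption
            entries.append({
                "title": post.find("h2").get_text(strip=True),
                "author": post.find("span", class_="author").get_text(strip=True),
                "date": post.find("span", class_="date").get_text(strip=True),
                "content": post.find("div", class_="content").get_text(" ", strip=True),
            })
        return entries

    # Organize all scraped posts into a dataframe and save them for later processing.
    posts = scrape_logbook_page("https://example.org/alog?page=1")   # placeholder URL
    df = pd.DataFrame(posts)
    df.to_csv("logbook_entries.csv", index=False)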

Once the data sets are stored locally, non-essential attributes are removed, textual timestamps are converted to system timestamps, duplicate data points are removed or merged, and the resulting data are passed to the text-processing unit. A vocabulary for our data is generated by converting the unstructured text into stem words. To this end, we remove all special characters and punctuation, such as !, @, #, $, %, *, quotation marks, commas, and parentheses. All non-English words, HTML tags, and URLs are also excluded from the data. The text is then tokenized (Huang et al. 2007) by splitting the strings of text into lists of words called tokens. To reduce redundancy in the vocabulary, we convert related token forms and their derivatives to a common base stem through a process known as "stemming" (Frakes 1992).
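A minimal sketch of this preprocessing stage is given below; the use of NLTK for the English-word filter and Porter stemming is an assumption made for illustration, as the text does not name a specific toolkit.

    # Sketch of the cleaning, tokenization, and stemming pipeline (NLTK assumed).
    import re
    import nltk
    from nltk.corpus import words as english_words
    from nltk.stem import PorterStemmer

    nltk.download("words", quiet=True)                    # English word list for filtering

    english_vocab = set(w.lower() for w in english_words.words())
    stemmer = PorterStemmer()

    def preprocess(text):
        """Strip HTML tags, URLs, and special characters; then split, filter, and stem."""
        text = re.sub(r"<[^>]+>", " ", text)              # drop HTML tags
        text = re.sub(r"https?://\S+", " ", text)         # drop URLs
        text = re.sub(r"[^a-zA-Z\s]", " ", text)          # drop special characters and digits
        tokens = text.lower().split()                     # tokenize by splitting on whitespace
        tokens = [t for t in tokens if t in english_vocab]  # keep English words only
        return [stemmer.stem(t) for t in tokens]

    print(preprocess("The <b>guardian</b> node lost lock; see https://example.org for details"))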

We convert textual data into vectors that can be easily handled by the computer (Li et al. 2015) using the process of "embedding". Various embedding algorithms exist, such as one-hot encoding (Harris & Harris 2012) and term frequency-inverse document frequency (TFIDF; Leskovec et al. 2014), but not all of them can capture the contextual differences between words. A recent breakthrough in NLP uses neural networks that learn the vector values for each word by iterating over the text multiple times with a gradient-based algorithm (Mikolov et al. 2013a, 2013b). Bengio et al. (2003) coined the term "embedding" in the context of a neural language model, where the word vectors are trained jointly with the model's parameters.

One of the commonly used tools for converting words into vectors is Word2vec, described in Mikolov et al. (2013a). Word2vec is a fully connected neural network with a single hidden layer that takes a large text corpus as input and produces a high-dimensional vector for each unique word in the corpus. Words that share common contexts in the corpus are located close to each other in the resulting vector space. Word2vec models do not consider word order, yet they capture semantic relations between words very efficiently (Ling et al. 2015). With the help of Word2vec embeddings, a computer can differentiate between words of different types. Word2vec implements two computationally inexpensive models, the Continuous Bag of Words (CBOW) model and the Skip-gram model (Mikolov et al. 2013a), to learn word embeddings. The representation of a corpus of text, or of an entire document, as a multiset of its words is referred to as the Bag of Words representation (Markov & Larose 2007). CBOW essentially tries to predict a target word from a set of context words, while Skip-gram does the reverse (Mikolov et al. 2013a, 2013b).
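For illustration, training such embeddings on the preprocessed logbook corpus might look as follows; the use of the gensim library and the tiny toy corpus are assumptions made for this sketch, not the configuration of the deployed application.

    # Sketch of Skip-gram Word2vec training on tokenized logbook entries (gensim assumed).
    from gensim.models import Word2Vec

    # Each entry is a list of stemmed tokens produced by the preprocessing step.
    corpus = [
        ["lockloss", "earthquake", "seismic", "motion"],
        ["jitter", "noise", "psl", "periscope"],
        ["scatter", "noise", "microseism", "darm"],
    ]

    model = Word2Vec(
        sentences=corpus,
        vector_size=100,   # dimensionality of the word vectors
        window=5,          # context window size c
        sg=1,              # sg=1 selects the Skip-gram architecture
        min_count=1,
        epochs=50,
    )

    print(model.wv.most_similar("noise", topn=3))   # words closest in the embedding space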

The model that we have used in this work is the Skip-gram model, whose architecture is shown in Figure 2. This representation is similar to the CBOW model, but instead of predicting the target word from its context, it predicts the context words given a target word (Mikolov et al. 2013b). The model maximizes the probability of correctly classifying a word based on another word in the same sentence (Mikolov et al. 2013a). Thus, the vector representation is capable of capturing the semantic meanings of the words from a sequence of training words $w_1, w_2, \ldots, w_T$ and their context window of size c. The algorithm can be summarized as follows. First, the words are fed to a log-linear classifier whose objective is to maximize the average log probability given by

$$\frac{1}{T}\sum_{t=1}^{T}\;\sum_{-c\leq j\leq c,\,j\neq 0}\log p({w}_{t+j}\,|\,{w}_{t}) \qquad (1)$$

Figure 2. Skip-gram model for creating word vectors using neural networks. The model predicts contextual words after being given a target word. A linear transformation is used to project the input layer (one-hot encoding) to the hidden layer via a ${\boldsymbol{V}}\times {\boldsymbol{N}}$ matrix. The hidden layer consisting of nonlinear activation functions is connected to the output layer via an ${\boldsymbol{N}}\times {\boldsymbol{V}}$ matrix.

Larger values of c can result in higher accuracy but require more training time (Mikolov et al. 2013b). To obtain the output probability $P({w}_{o}\,|\,{w}_{i})$, the model estimates a matrix that maps the embedding of the input word into a V-dimensional score vector ${O}_{{w}_{i}}$. The probability of predicting the word ${w}_{o}$ given the word ${w}_{i}$ is then defined using the softmax function

$$P({w}_{o}\,|\,{w}_{i})=\frac{\exp \left({O}_{{w}_{i}}({w}_{o})\right)}{\sum _{w=1}^{V}\exp \left({O}_{{w}_{i}}(w)\right)},$$

where V is the number of words in the vocabulary (Mikolov et al. 2013b; Ling et al. 2015). But this formulation is computationally intensive for larger vocabularies. This problem is alleviated in Word2vec by using the hierarchical softmax function (Morin & Bengio 2005) or with a negative sampling approach (Goldberg & Levy 2014).

After embedding all of the words, every data point is represented by the average of the word vectors of the words it contains. A nearest neighbor algorithm (Andoni & Indyk 2006) is then used to group these data-point vectors into clusters efficiently. The optimal number of clusters is estimated iteratively until the accuracy is observed to peak; in our case this occurred at roughly one-fifth of the vocabulary size of our model. We used the scikit-learn (Pedregosa et al. 2012) Python package to apply the nearest neighbor algorithm.
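A minimal sketch of this step is shown below, reusing the toy model and corpus from the previous example together with scikit-learn's NearestNeighbors class; the exact clustering configuration of the deployed system is not reproduced here.

    # Sketch: represent each post by the mean of its word vectors, then index with k-NN.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def post_vector(tokens, w2v):
        """Average the word vectors of all in-vocabulary tokens of a post."""
        vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

    post_vectors = np.vstack([post_vector(tokens, model) for tokens in corpus])

    knn = NearestNeighbors(n_neighbors=2, metric="cosine")
    knn.fit(post_vectors)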

Even after the NLP classification, we find that quite a few relevant posts are left out. We therefore added one more layer of processing by analyzing the overall emotional content of the reports. We used the AFINN lexicon (Nielsen 2011), a collection of 2477 words each assigned an integer value between −5 and +5 representing a transition from negative to positive sentiment. Adjusting the word valences and extending the lexicon with technical words that better represent the associated sentiments was found to produce better results. For example, the LIGO-specific application associates terms like "lockloss" and "scatter noise" with negative sentiment, while "new filter installed" is associated with a positive sentiment.
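The sentiment layer can be sketched as follows; the lexicon excerpt and the domain-term valences shown are illustrative values, not those used in the deployed application.

    # Sketch of the lexicon-based sentiment layer: AFINN-style valences plus domain terms.
    afinn_like = {"improved": 2, "excellent": 3, "problem": -2, "failure": -2}   # tiny AFINN excerpt
    domain_terms = {"lockloss": -3, "scatter noise": -2, "new filter installed": 2}   # illustrative valences

    def sentiment_score(text):
        """Sum the valences of all lexicon terms found in the post."""
        text = text.lower()
        score = sum(v for w, v in afinn_like.items() if w in text)
        score += sum(v for w, v in domain_terms.items() if w in text)
        return score

    print(sentiment_score("Lockloss caused by scatter noise, problem under investigation"))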

Our implementation for LIGO is designed so that users can query for information through a web interface. The stem words in the query are identified and the resulting vector is projected into the previously trained word vector space. The nearest neighbor model retrieves the top neighbors of the query vector, which are then displayed as search results on the web interface. Search results are further filtered by checking for the presence of the query words in either the title or the content of each post, to weed out false positives. Figure 3 shows a simple search query displayed on the web interface. The different features incorporated in the web interface are described in Section 4.
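Putting the pieces together, the query path can be sketched as below, reusing the helper functions defined in the earlier examples; the post-filtering criterion is deliberately simplified for illustration.

    # Sketch of the query path: embed the query, retrieve neighbours, post-filter results.
    def search(query, w2v, knn, raw_posts, top_k=5):
        """raw_posts: list of post dictionaries (title, content), as in the scraping sketch."""
        tokens = preprocess(query)                          # same stemming as the corpus
        qvec = post_vector(tokens, w2v).reshape(1, -1)
        _, idx = knn.kneighbors(qvec, n_neighbors=min(top_k, len(raw_posts)))
        results = [raw_posts[i] for i in idx[0]]
        # Keep only posts that actually mention a query token in the title or body.
        return [p for p in results
                if any(t in (p["title"] + " " + p["content"]).lower() for t in tokens)]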

Figure 3. Screenshot of the Hey LIGO Web Interface. Search results are color tagged based on their overall sentiment. Trending posts are identified based on the associated metadata consisting of comments and discussions made within the LIGO community.

3. Gravitational Wave Observatories

GW interferometers (IFOs) have been in operation for the past few decades and have made the first direct detection of GWs from merging binary black holes (Abbott et al. 2016c). The complex nature of this multi-physics experiment requires scientists from multiple domains of expertise to work together and share information. Rigorous commissioning and characterization efforts have been carried out over two decades to reach the current level of sensitivity. Efforts to enhance the detector uptime have led to increased coincident observation, improving the likelihood of detecting astrophysical signals. LIGO (The LIGO Scientific Collaboration 2015), Virgo (Acernese et al. 2015), GEO600 (Dooley et al. 2016), and KAGRA (Aso et al. 2013) archive most of their day-to-day site activities using web interfaces known as logbooks. The entries range from installation activities to noise hunting and mitigation work carried out during the lifespan of the observatory. Although there are site-specific issues, the observatories often encounter problems of a similar nature, and employing solutions that worked at the other sites can be a good strategy to start with. It is also not uncommon to see previously fixed issues reappear at a later time, on timescales ranging from a few months to years, due to recurring environmental fluctuations and configuration changes in the detector. Since the current GW detectors aim at coincident detection of events, the joint uptime of the instruments is crucial. This is all the more significant because the probability of detection scales linearly with observation time and cubically with the sensitivity of the instrument.

Information extraction and processing of logbook information, as envisaged here, is expected to help with making better decisions pertaining to detector maintenance. For example, identifying the subsystems that could get affected during an instrument upgrade will be vital for scheduling and coordinating tasks among the subgroups involved. Similarly, long-term tracking of an issue can be carried out to see whether the various overhauling attempts indeed lead to an improvement in performance, which would be reflected in a declining number of related posts.

Usually, logbooks are readable by the public, while valid credentials are required to create or edit entries. Additional supporting material such as measurement figures, photographs, sensor data, and code can be uploaded with these entries. It is also possible to add comments and carry out further discussion on any of the logbook entries. All entries must have a title, section, task description, and author details. Details of the retrieved information are given in Table 1.

Table 1.  Logbook Details Retrieved from Different GW Observatories

Observatory          Logbook Entries   Contributors   Time Span   Dictionary Size   Clusters
LIGO Livingston^a    24351             261            2010–2017   2273              455
LIGO Hanford^b       24968             237            2010–2017   2713              543
Virgo^c              34592             660            2010–2017   5026              1005

Notes.

^a https://alog.ligo-la.caltech.edu/aLOG/
^b https://alog.ligo-wa.caltech.edu/aLOG/
^c https://logbook.virgo-gw.eu/virgo/


4. Hey LIGO Functionalities

We have developed and deployed an open access NLP-based web application named Hey LIGO to support the commissioning and characterization efforts at the GW observatories. It relies on the logbook data recorded since 2010 by scientists specializing in different aspects of the detector. Every query is answered by matching it with the most relevant logbook entries, ranked by their closeness to the query term in the word vector space. We further analyze the sentiment of each post and color code the results so that green indicates a positive outcome and red corresponds to something undesirable in the context of activities carried out at the detector. An image retrieval facility displays thumbnails of the figures attached to the sorted data, simplifying the knowledge discovery process. Contextual data visualization across multiple detectors is carried out as shown in Figures 5 and 6. This feature lets the user compare and see the trends in the searched keyword across different observatories.

An automatic check for new data entries is done periodically so that the NLP models are regularly updated. We track the volume of discussion on various topics and identify and rank the trending issues on a daily basis. Scientists involved with the project are mostly interested in being notified about specific problems within their domain of expertise, so the application only issues alerts to registered participants with matching interests. This targeted delivery removes clutter and ensures proper dissemination of information to the people concerned.

Code development is a tedious procedure wherein a significant amount of time is spent on readability and re-usability to benefit a wider research community. Our application builds on this idea by auto-detecting and notifying the user about the presence of code in the searched content. We believe this feature simplifies result reproduction and its subsequent independent verification.
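A minimal illustration of how such auto-detection might be implemented is given below; the heuristics shown are assumptions made for this sketch and not the deployed logic.

    # Heuristic sketch for flagging posts that appear to contain code snippets.
    import re

    CODE_PATTERNS = [
        r"<pre>|<code>",                 # explicit code markup in the HTML
        r"\bdef \w+\(|\bimport \w+",     # Python-like constructs
        r"\bfunction\b.*\(|;\s*$",       # MATLAB/C-like constructs
    ]

    def contains_code(text):
        return any(re.search(p, text, re.MULTILINE) for p in CODE_PATTERNS)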

To check the capability of our application, we analyzed six months of LIGO Livingston Observatory (LLO) logbook data (2017 January 1 to June 30) and compared the NLP results with the actual entries. Table 2 shows the recovery performance for a specific set of randomly chosen keywords. In most cases, the false alarms occur at the tail end of the search results, which represent the neighbors of least relevance in the k-NN search. These can be removed either by setting a threshold on the similarity distance measure or by post-filtering the results by additionally comparing the content of each post. Currently, we have implemented post-filtering to remove such posts in the final web application. In the future, we plan to incorporate a mechanism that uses feedback received from users to improve the accuracy of retrieving relevant posts.

Table 2.  Prominent LLO Logbook Keywords from 2017 January 1 to June 30

Keyword              Logbook Entries   Posts Retrieved by NLP Code
                                       Total        Relevant
Lock Loss            108               108          89
Earthquake           83                94           80
Charge Measurement   62                65           58
Guardian             55                65           55
Optical Lever        63                61           48
Calibration Lines    55                52           45


5. Inferring from Logbook Entries

Once the relevant logbook entries are identified using the techniques mentioned above, their associated metadata can be utilized to obtain several kinds of quantitative information about the topic of interest.

5.1. Trends within Detectors

We briefly compare the trends obtained for a few test search queries and discuss the observed patterns. Although the detectors have similar configurations, the various noise sources can affect each of them differently. Variations in instrumental behavior and in environmental effects due to geographical location can also influence the efficiency of the implemented mitigation measures. Figuring out such details may well speed up the commissioning activities of future detectors like LIGO-India.

  • 1.  
    Installation. The first plot in Figure 6 shows the trends in posts related to installation work at each of the observatories. Activities picked up momentum in 2010 at LIGO and continued until mid 2014, after which testing and commissioning tasks started. Advanced Virgo appears to have begun such activities in 2014 and carried them on until the end of 2016.
  • 2.  
    Jitter Noise. Jitter noise arising out of laser pointing fluctuations (Martynov et al. 2016) is sensitive to cavity alignments and angular mirror motions. It has been partly caused by the pre-stabilized laser (PSL) periscope motion induced by chiller water flow around PSL's high power oscillator. Various efforts to understand its possible origin and subsequent attempts to subtract it from the data stream are reflected in the increased number of logbook entries at the LIGO Hanford Observatory (LHO) relative to other sites. Commissioners performed online feed-forward noise subtraction using auxiliary witness channels that reduced the coupling significantly (Sigg 2016; Vajente 2016; Driggers 2017).
  • 3.  
    Scattering Noise in LHO and LLO. Noise from scattered light is one of the factors that limits the sensitivity in the 50-200 Hz band (see Figure 4), especially during periods of high microseism. An off-axis scattered laser beam can hit a reflecting surface, such as a camera mirror mount or the beam tube, and re-enter the cavity. Nonlinear features appear in the data when this beam picks up resonances from the reflecting surfaces and is then upconverted or phase-modulated by low-frequency seismic motion. The effect at LLO is more pronounced than at LHO, as the former is more vulnerable to microseismic activity (Ottaway et al. 2012). Figure 4 shows the effect of acoustic excitation on the 82 Hz peak seen in the gravitational wave differential arm motion (DARM) data. The acoustic injections carried out at the LIGO Y-end station are reconstructed using the scatter noise model S(t) (Accadia et al. 2010) given by
    $$S(t)=A\,{\left[{Y}_{\mathrm{rms}}(t)\right]}^{n}\,\sin \left(\frac{4\pi \left[{Y}_{\mathrm{rms}}(t)+{Y}_{\mathrm{ac}}(t)\right]}{\lambda }\right),$$
    where ${Y}_{\mathrm{rms}}(t)$ is the ground motion, ${Y}_{\mathrm{ac}}(t)=B\,\sin (2\pi {f}_{o}t)$ is the chamber motion, $\lambda$ is the laser wavelength, and (A, n, B) are the tunable parameters. The model parameters are fine-tuned using a pattern search (a code sketch of this model is given after this list). The scatter noise projection to DARM from ambient motion is obtained by scaling down the chamber motion based on the accelerometer signal before and after injection.
  • 4.  
    Non-astrophysical Transients. Glitches often show up in the strain data, leading to false alarms in the various search pipelines that look for astrophysical signals (Abbott et al. 2016b). They are often witnessed in auxiliary sensor channels, while a few of them have been reported to cause loss of lock of the interferometer (Mukund 2017). The report generation feature of our application provides the following glitch distribution (Figure 5) across multiple subsystems based on their tags in the data. The general operation of all three detectors has been affected by such transients ever since the beginning of their operation (see Figure 6). It is interesting that there are subtle variations in the noise sources between LHO and LLO. The origins of many of them have been studied and reported in the logbooks, but a vast majority are still not well understood.
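As referenced in item 3 above, the scattered-light model and its parameter tuning can be sketched as follows; the fringe-wrapping functional form, the toy data, and the use of scipy's Nelder-Mead optimizer in place of the pattern search are all assumptions made for illustration.

    # Sketch of the fringe-wrapping scattered-light model S(t) and a simple parameter fit.
    # The functional form, toy data, and optimizer choice are assumptions for illustration.
    import numpy as np
    from scipy.optimize import minimize

    LAMBDA = 1.064e-6               # laser wavelength in metres
    fs, f_o = 256.0, 0.2            # sample rate (Hz) and chamber excitation frequency (Hz), illustrative
    t = np.arange(0, 60, 1 / fs)

    def scatter_model(params, y_rms, t):
        """S(t) = A * Y_rms(t)**n * sin(4*pi*(Y_rms(t) + Y_ac(t)) / lambda)."""
        A, n, B = params
        y_ac = B * np.sin(2 * np.pi * f_o * t)
        return A * np.abs(y_rms) ** n * np.sin(4 * np.pi * (y_rms + y_ac) / LAMBDA)

    # Toy ground-motion series and a synthetic "measured" DARM contribution to fit against.
    y_rms = 1e-6 * (1 + 0.1 * np.sin(2 * np.pi * 0.1 * t))
    darm = scatter_model((1.0, 2.0, 5e-7), y_rms, t)

    # Nelder-Mead stands in for the pattern search mentioned in the text.
    cost = lambda p: np.mean((scatter_model(p, y_rms, t) - darm) ** 2) / np.mean(darm ** 2)
    fit = minimize(cost, x0=(0.5, 1.5, 1e-7), method="Nelder-Mead")
    print(fit.x)                    # fitted (A, n, B)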

Figure 4. Effects of scattered light observed at LIGO Livingston. Noise gets amplified and upconverted during periods of high microseism and limits the sensitivity range of the GW detectors.
Figure 5. Pie chart showing how various LIGO subsystems contribute to the non-astrophysical glitches seen in LIGO. The acronyms in the diagram are defined as follows: ISC: Interferometer Sensing and Control; CAL: Calibration; AOS: Auxiliary Optics Support; SUS: Suspension; VE: Vacuum Equipment; SEI: Seismic Isolation; CDS: Control and Data System.
Figure 6. Rate of occurrence of different keywords in the logbooks of multiple detectors as a function of time.

5.2. Visualizing an Observatory as a Complex Network

The behavior of an observatory, and the elements that lead to changes in that behavior, can be studied by representing the observatory as a complex network. Complexity is expressed through the nodes and links within the network. Here, the nodes can be subsystems, specific instruments, or even subgroups within the observatory, and the edges between them give the probability of one being connected to another, as inferred from the logbook entries. We first create a dictionary of subsystem keywords and, for each one, find the frequency of its joint occurrence with the others. This information is then used to form the adjacency matrix, whose diagonal elements are all zero and whose off-diagonal entries, representing the linkages, are given by the joint occurrence frequency divided by the total occurrence count of the keyword. Because the adjacency matrix is non-symmetric, the result is a directed graph. The number of incident edges determines the node size, while the edge width is given by the associated connection probability. To better aid visualization, we adopt the Force Atlas 2 layout (Jacomy et al. 2014), with repulsion approximated using Barnes-Hut optimization (Barnes & Hut 1986), which is well suited to larger graphs. The interconnectedness revealed through these networks may help identify the critical nodes in the system, making it easier to spot vulnerable connections. These representations could be useful during large-scale repair and maintenance, as they reveal the other subsystems that can be affected in the process.
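A sketch of this graph construction is given below; the keyword list, the co-occurrence counts, and the use of the networkx library are illustrative assumptions (the text above only specifies the Force Atlas 2 layout for visualization).

    # Sketch: build the directed subsystem graph from keyword co-occurrence counts.
    # The keywords and counts below are illustrative, not values extracted from the Virgo logbook.
    import numpy as np
    import networkx as nx

    keywords = ["vacuum", "control", "photodetector", "isc"]
    joint = np.array([                  # joint[i, j]: posts mentioning both keyword i and keyword j
        [0, 12,  0,  7],
        [12, 0,  5, 20],
        [0,  5,  0,  9],
        [7, 20,  9,  0],
    ])
    total = np.array([40, 60, 25, 55])  # total posts mentioning each keyword

    # Off-diagonal weight = joint occurrence / total occurrence of the row keyword;
    # the matrix is non-symmetric, so the resulting graph is directed.
    adjacency = joint / total[:, None]
    np.fill_diagonal(adjacency, 0.0)

    graph = nx.from_numpy_array(adjacency, create_using=nx.DiGraph)
    graph = nx.relabel_nodes(graph, dict(enumerate(keywords)))
    print(sorted(graph.in_degree(), key=lambda kv: -kv[1]))   # nodes with most incident edges first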

In Figure 7 we show the network connections for a few prominent nodes of the Virgo observatory. The network differs from many real-world networks in terms of its degree distribution (the degree being the number of edges connected to each node). Scale-free networks are characterized by a degree distribution that follows a power law and are commonly seen in biological and computer networks (Barabási 2016). For the Virgo network, the distribution deviates from such a power law, indicating dense connections between the nodes. Further research is needed to analyze the network and study the instrument's robustness to random sub-system failures.

Figure 7. Network plot of the Virgo detector highlighting the inter-connections between various subsystems. The subplots highlight the directed graph for: (A) vacuum, (B) control system, (C) photodetector, and (D) interferometric sensing and control.

6. Discussions and Conclusion

We have demonstrated how information retrieval and recommendation systems can be useful for LIGO-like astronomical observatories. Compared with the conventional search facilities of the existing sites, our web application incorporates an NLP-based information retrieval system that can also visualize the user-queried data. Involving a wider science community in big science projects can alleviate some of the issues related to the lack of sufficient human resources within a project. The developed interface identifies the major issues based on the discussions within the LIGO community and recognizes the trending topics. It is plausible that someone involved with the project has already seen and solved a given issue; hence, proper dissemination of information will help technical experts within the collaboration to contribute, leading to an overall performance improvement for the instrument.

Coordinated efforts are being undertaken worldwide to carry out electromagnetic follow-up searches for counterparts to coalescing binary sources (Abbott et al. 2016a). When GW candidate event alerts are issued, astronomers may be able to take advantage of our application to learn more about the state of the instrument.

A future improvement to the application would be to provide possible fixes for an identified issue, making use of information from past attempts that resolved an identical problem. This would require text abstraction and summarization, which are quite challenging when the data contain a large number of technical terms. Efforts to add other GW detectors, such as GEO600 and KAGRA, are currently in progress and will enhance the effectiveness of our application.

This kind of system has many potential applications in the commissioning and running of large science projects like the SKA and future LIGO observatories. In this project, the data source was largely unstructured and had few tags related to the status of different activities. At present, institutions like SKA South Africa, which is in charge of building the MeerKAT7 telescope, one of the precursors to the SKA, use more structured systems such as JIRA8 for issue tracking and log keeping.9 Scaling our present system to such databases can improve the efficiency of topical modeling. It also enables auto-updates of the learning database as more and more information is logged into the system, ultimately making it robust.

The features above will become more advantageous as the organization takes on more participants. The availability of such systems will make the reuse of information much easier and more efficient, and will help resolve instrument issues more quickly. Enhanced analytics of key components and recurring issues can help improve the fault tolerance of different subsystems and could provide insights into how to modify them for better performance.

We thank the detector characterization group and the machine-learning sub-group of the LIGO Scientific Collaboration for their comments and suggestions. N.M. acknowledges the Council for Scientific and Industrial Research (CSIR), India for providing financial support as a Senior Research Fellow. A. K. Aniyan thanks the SKA South Africa postgraduate bursary program. S.M. acknowledges support from the Department of Science and Technology (DST), India provided under the Swarna Jayanti Fellowships scheme. This research benefitted from a grant awarded to IUCAA by the Navajbai Ratan Tata Trust (NRTT). The authors express thanks to Arnaud Pele, Anamaria Effler, and Ajit K. Kembhavi for their valuable comments and suggestions. We thank Malathi Deenadayalan, Santosh Jagade, and Sudhagar Suyamprakasam for technical support. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation, and operates under Cooperative Agreement No.PHY-0757058. This paper has been assigned LIGO Document No. LIGO-P1700250.
