Smart Cities
Automatic Collection and Storage of Smart City Data
MASTER'S THESIS
Julian Minde
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Motivating use case, part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Project description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Thesis structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Background 7
2.1 Theory: Data storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Theory: Data modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Theory: Finite automaton and Regular expressions . . . . . . . . . . . . . . . . . 9
2.4 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5 Motivating use case, part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.6 Elasticsearch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6.1 Distributed operation of Elasticsearch . . . . . . . . . . . . . . . . . . . . 13
2.6.2 Storing data in Elasticsearch . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.6.3 Index mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6.4 Data types in Elasticsearch . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.7 Logstash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.7.1 Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7.2 Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7.3 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3 Architecture 21
3.1 Overview of the system architecture . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 Motivating use case, part 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4 Design 27
4.1 Overview of the design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Discovering the schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3 Analysing the fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3.1 Estimated probabilities from empirical relative frequencies . . . . . . . . . 32
4.3.2 Summary statistics for box plots . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Inferring Elasticsearch data types . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.1 Boolean type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4.2 Number type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4.3 Array type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.4.4 Object type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.4.5 String type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.5 Presenting the data model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.5.1 Command line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.5.2 Web interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.6 Generating configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.6.1 Filter section of Logstash configuration file . . . . . . . . . . . . . . . . . 46
4.6.2 Elasticsearch mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5 Implementation 51
5.1 Overview of the implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Building up the data model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2.1 SDMDataObject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2.2 SDModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.2.3 Schema discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2.4 Analysing data and adding metadata to the model . . . . . . . . . . . . . 56
5.3 Inferring data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.4 Command line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.5 Web interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.5.1 Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7 Conclusion 73
7.1 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
7.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
7.3 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Acronyms 91
Glossary 93
References 100
List of Figures
2.1 Finite automaton for the regular expression [A-Za-z]+\ of\ [A-Za-z]+ . . . . 9
List of Tables
5.1 Available commands and their arguments for the sdmcli command line interface 58
List of Algorithms
4.1 Recursively discovering the schema and types of a single document of sample data 30
4.2 Recursively discovering the schema and types of sample data accepting previ-
ously seen fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 Algorithm for counting the number of null values in the sample dataset and
storing the empirical relative frequency of null values as the value of a metadata
field for the data object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
List of Examples
Abstract
Collecting and storing smart city data is a task that requires thorough data exploration, configuration and testing to be of value. Configuring a data collection pipeline for data from a new data provider needs to take into account what the various fields represent, which parts of the data are of interest, which data fields should be stored, and more. In some cases the data follows a predefined and known schema; in other cases the data may be undocumented.
This thesis presents a framework and a software system for automating the process of collecting and storing smart city data and other event-based data sets. The problem and its solution are illustrated by a use case where the task consists of storing public transportation data in a structured way in a storage system that can handle big data.
Chapter 1
Introduction
With the recent advances in computer technology, generating data is easier than ever before.
More data is generated, and at a faster pace. For example, about 72 hours of video were uploaded to YouTube every minute on average in 2014 [1].
Traditional computer systems are not well suited to such large amounts of data, nor the rapid
speed at which they are generated. This gave rise to the term big data, defined by Apache
Hadoop as, “datasets which could not be captured, managed, and processed by general com-
puters within an acceptable scope” [1]. In 2011, the International Data Corporation (IDC) re-
defined big data as “describing a new generation of technologies and architectures, designed to
economically extract value from very large volumes of a wide variety of data, by enabling high-
velocity capture, discovery, and/or analysis” [2]. This definition implicitly refers to the char-
acteristics of big data commonly known as “the four Vs, i.e. Volume (great volume), Variety
(various modalities), Velocity (rapid generation) and Value (huge value but very low density)”
[1].
Through analysis, statistics, and machine learning, new and previously hidden insights can be
found in big data sets. One of the key challenges in big data applications today is the data
representation. Different datasets have different structures, semantics, organisation, granularity
and accessibility [1]. This presents a problem when combining data from different sources for
analysis.
Data generated from actual events are normally collected where the event happens, encoded,
and sent over the internet to a system that processes and stores the data. A data collection
framework is a software system that is responsible for acquiring and storing data in an efficient
and reliable manner. The system must be able to store data arriving at a high pace, while
making sure all data is stored correctly. It must also make the data easy to work with for data
scientists, developers and other users of the data. Such a framework can handle data of vary-
ing structure from multiple different data sources in the same running instance of the system.
The part of the instantiated system that handles one data source with one structure is referred
to as a data collection pipeline.
The data, in this context, is represented as a set of data fields, i.e. combinations of field names
and field values. One such set of data fields is in this thesis referred to as a data point, thus
a data set is a collection of many data points. If a data set follows a given schema, most data
points will have the same field names, and their values will be of the same types. However, if
there is no value for a field in a data point, that field can either have the null value, or the
field might be missing altogether.
The part of the data collection framework that stores the data is in most cases some type of
database. Traditionally, structured data would be stored in a relational database, like MySQL
or PostgreSQL. In the case of storing structured, big data, the storage system must be able to
scale. Relational databases are not built to scale, and are therefore unfit for the job [3]. This has driven the rapid development of a category of databases referred to as NoSQL databases. NoSQL, as in “Not Only SQL”, refers to a group of non-relational databases that do not primarily use tables, and generally do not use SQL for data manipulation [4].
are “becoming the standard to cope with big data problems” [3]. This is in part “due to cer-
tain essential characteristics including being schema free, supporting easy replication, possess-
ing a simple Application Programming Interface (API), guarantee eventual consistency and
they support a huge amount of data” [3]. NoSQL databases are said to not primarily use tables: instead of rows of data, as one would have in a relational database, one often has data objects in a NoSQL database. A relational database is constrained by the schema of its tables, i.e. to store a row of data in a table, the row has to conform to the format of that table. NoSQL databases are generally “schema-free”, meaning the documents stored in the database need not follow the same, or any, schema. Partly because of their schema-free operation, NoSQL databases are relatively well suited for fault-tolerant distributed operation. This in turn enables NoSQL databases to, generally speaking, store large amounts of data.
Configuring a pipeline for data from a new data provider can be a complex task. It needs to
take into account what the various data fields represent, which parts of the data are of interest,
whether there is information that should not be stored, and more. In many cases, the data will
follow a set schema that defines names and types of the various data fields. Data that is just
dumped to file with no schematic information can be difficult to use later, unless the meaning
of the data fields is documented adequately.
The term metadata is defined as “data that provides information about other data” [5]. In the
context of this thesis, metadata refers to information about the data that is not represented
by the values of the data, for example the units of a value. Metadata can be implied as part
of the field name, it can be stored in a separate documentation file, or it can exist only in the
mind of the domain expert.
Semantics is the study of meaning in languages. The semantic information about the data is
information that clarifies the meaning behind the data, and their relationship to the real-world
entities that the data represent. A semantic data model is a data model that includes and fo-
cuses on the semantics of the data. Both metadata and semantic information help to interpret the data and to extract its value.
Raw data from sensors will sometimes include redundant and useless data that do not need to
be stored [1]. Therefore, the data will typically be transformed at the data collection pipeline,
to fit the storage system. The data is transformed in order to follow the same encoding as
other data in the same system, to make the data easy to retrieve, to remove unwanted data
and in some cases to make the data more structurally similar to other semantically similar
data in the system. Through this transformation and cleaning of the data, the data representa-
tion is altered. This altering of data representation must be done with a strong focus on mak-
ing the data more meaningful for computer analysis and user interpretation. Improper data
representation can reduce the value of the data [1]. In this transformation of the data, there is
a risk of losing valuable information and at the same time storing data that is of less value. An
example of data with no value might be identifier numbers referring to the data provider’s in-
ternal systems that are inaccessible, or model numbers of sensors that could be inferred from
table or index names instead. If the data provider’s internal systems are accessible on the
other hand, that identifier field could be valuable and it should possibly be stored. The task
of configuring a data collection and storage pipeline is a task of fitting data into a useful struc-
ture.
Figure 1.1: Overview of the data collection framework that this thesis will work with. The
data provider collects data from various sensors or other event sources and sends it to the data
collection and storage pipeline, which transforms the data and stores it. Developers and data
scientists retrieve data from the storage.
1.1 Motivation
More data is being generated and collected now than ever before, and from that data new in-
sights can be gathered. According to a forecast done by Gartner, the quantity of Internet of
Things (IoT) sensors will reach 26 billion by the year 2020, thus making IoT data the most im-
portant part of big data [6]. Collecting and storing data can be quick and simple. However,
structuring the data in a way that achieves a good data representation is a more complex task.
Currently, in order to ensure good data representation, the data and its structure must be
analysed and fully understood by the person configuring the data collection pipeline. Valu-
able data must be interpreted and stored using data types that reflect the semantics of the
data as well as possible, while data that is of less value to the objective of the collection can be removed to save storage space and make the data easier to work with. In some cases, a data
point can benefit from being split into smaller data points, e.g. if one data point contains sev-
eral real-world entities. Other cases call for data points to be combined into bigger batches.
Data might also need to be converted into another format to better fit the storage system.
This thesis is motivated by the above challenges, and the idea of addressing these by automat-
ing the exploration and analysis of the data, and by generating templates for the configuration
of the data collection pipeline. This approach could reduce the need for human labour, and
make data representations more accurate and more consistent across data collections.
The challenges that motivate this work come in part from the Triangulum smart city project
and the data collection platform being developed at the University of Stavanger (UiS). The
data collection platform will collect and store large amounts of data from a variety of smart
city data providers. The structures of these data sets will vary. Some data sets follow strictly
defined and documented schemas, while others may come with no documentation or semantic
information at all. The data to be stored will be used for smart city research and development.
The bus_loc field is different, because its value is an object containing two fields, latitude
and longitude. This structure can be easily maintained in most NoSQL storage systems. If,
on the other hand, it is safe to assume that bus_loc represents a location in the real world en-
coded as GPS coordinates, there may be a data type designed specifically for this in the data
storage system of the data collection framework.
These are a few of the challenges one faces when configuring a data collection pipeline for a
new data source. This concrete use case will be revisited throughout this thesis to illustrate
the relevance and implications of the work presented. It will be continued in Section 2.5.
Goal Develop a software system that can automatically generate a set of configuration files
for a data collection pipeline, based on some sample data that is representative of the data to
be collected.
Research question 1 Can a model of the expected data, including field types, be discov-
ered by reviewing a representative set of sample data?
Research question 2 Can data collection pipeline configuration files be generated automat-
ically, based on a model of the expected data?
This thesis develops the design and implementation of a software system that can handle the
challenges of automatic pipeline configuration, specifically for the Triangulum smart city project
context. The software system developed here is called SDModel, a system that generates configuration files for a data collection framework based on sample data.
After this introduction, the thesis continues with some background and a presentation of the Elastic stack in Chapter 2. Chapter 3 sets the requirements and provides an overview of a possible solution to the problem. In Chapter 4 the design of the system is presented, including the concept behind a set of analysers that analyse the sample data, and the approach for suggesting a list of likely data types for each data field. Chapter 5 presents in technical detail the implementation of the software, including how it is made and how it is used. Chapter 6 tests and evaluates the software through some experiments and presents the results. Chapter 7 gives an overview and evaluation of the work presented, as well as contributions and ideas for future work.
Chapter 2
Background
Working with data-intensive systems is different from working with traditional data systems. This is largely due to the architectural differences that big data storage systems require to handle the fast pace and large amounts of data. Many new systems have been created to handle the challenges of big data. However, it turns out that there also exist older systems that can handle the challenges presented by big data.
This chapter will present some theory and background, first data storage in Section 2.1. Data
modelling is presented in Section 2.2. Finite automata and their relationship with regular expressions are presented in Section 2.3. Related work is presented in Section 2.4, before the data provider for the motivating use case is presented in Section 2.5. Elasticsearch, a search engine and document store, is presented in Section 2.6. Logstash, a data collection engine, is
presented in Section 2.7.
The data points of a data set are often stored in some type of database. A database stores
the data in a way that makes it easy and fast to query and retrieve specific data. Relational
databases, like MySQL or PostgreSQL, have long been the standard. They store data as rows
in predefined tables. This makes it easy to retrieve data based on the values of one or more
specific columns in the table. And it makes it possible to retrieve only the requested columns,
and combinations of columns from several tables. Unfortunately, this architecture is unfit for
the challenges that big data presents [3]. In the case of storing structured big data, the storage system must be able to scale, and relational databases are not built for that.
NoSQL, as in “Not Only SQL”, refers to a group of non-relational databases that do not pri-
marily use tables, and generally not SQL for data manipulation [4]. NoSQL databases are “be-
coming the standard to cope with big data problems” [3]. This is in part “due to certain es-
sential characteristics including being schema free, supporting easy replication, possessing a
simple API, guarantee eventual consistency and they support a huge amount of data”[3].
There are three primary types of NoSQL databases:
Key-value store databases are simple but fast databases, where each data point is a key-value
pair, i.e. a unique key identifies the value [3]. To retrieve a data point stored in such a database,
one must know the key of the object, and make a query based on that. This makes more ad-
vanced queries or searches impossible. On the other hand, this fact makes key-value stores very fast, since the database never evaluates the value of the data point. Among the various key-
value store databases that have emerged recently, most seem to be heavily influenced by Ama-
zon’s Dynamo [3]. Another popular key-value store is Redis, “an open source, in-memory data
structure store, used as a database, cache and message broker” [8].
Column-oriented databases store and process data by column instead of by row [3] like tradi-
tional relational database systems do. This type of database is very similar to the relational
database, and often uses SQL as its query language. However, column-oriented databases are
normally more efficient than relational database equivalents when searching for data and queries
are often faster.
Document-store databases “support more complex data structures than key-value stores, and
there are no strict schema to which documents must conform” [3]. MongoDB, SimpleDB, and
CouchDB are examples of document-store databases [3]. Another example is Elasticsearch, “a
real-time distributed search and analytics engine” [9].
1. A conceptual data model, where the semantics of the domain is the scope of the model
and entities are representations of real world objects without implementation considera-
tions.
2. A logical data model, where the semantics are described relative to a particular manipu-
lation technology.
3. A physical data model in which the physical storage is the main focus.
These represent different levels of abstraction of the data model; different applications may require different levels of abstraction, and thus different kinds of data models.
Hammer and McLeod [11], describe a semantic data model as “a high-level semantics-based
database description and structuring formalism (database model) for databases”. Their idea is
to create a language for database modelling that is based on real world-entities and the rela-
tionships between them.
Semantic web is an approach to make the web, and more generally, human speech and writ-
ing, comprehensible for computers. The semantics is expressed by the Resource Description
Framework (RDF), a language for defining objects and their relationships. The RDF content
is serialised to eXtensible Markup Language (XML) and placed on the web page, hidden from
the general user, but visible to computers.
An entity is “something that has separate and distinct existence and objective or conceptual
reality” [12]. In the semantic web specification, entities are mapped to collections of information, called ontologies. In the context of information theory, an ontology is defined as “an ex-
plicit specification of a conceptualisation” [13]. In this context, a conceptualisation is defined
as an “abstract, simplified view of the world that we wish to represent for some purpose” [13].
The purpose of ontologies is to define a method by which different data providers within the
same domain can use the same formally defined terms in their semantic web definitions.
Ontology alignment, or ontology matching, is the process of determining the ontology repre-
sented by a field’s value [14].
In contrast, schema matching, as defined by [15], is to find a mapping between the data el-
ements of two different schemas. Schema integration is the process of integrating data with
different schemas, into one unified schema [16].
These subjects mentioned above can in some aspects seem similar to the part of the problem
this thesis seeks to solve, the discovery of a data model from some sample data. However,
they all seek to map between an implementation format, via some abstract and generic for-
mat, and then to some other implementation format. For this thesis, it might be easier to map
the data types directly from one implementation format to the other, and drop the detour via
the generic format.
Figure 2.1: Finite automaton for the regular expression [A-Za-z]+\ of\ [A-Za-z]+
After reading one or more letters from the input, the automaton in Figure 2.1 is in ‘state 1’. If
the input then contains a space character, the automaton will move to ‘state 2’. When in ‘state 2’,
the automaton needs to see the letter ‘o’ in the input to move to ‘state 3’. When in ‘state 3’, it
needs to see the letter ‘f’ to move to ‘state 4’, anything else will move it back to start. When
in ‘state 4’, it needs to see a space character to move to ‘state 5’, and from there, one letter is
enough to move it to ‘state 6’. ‘State 6’, however, is accepting, meaning that if a string reaches
this state, it is accepted by the automaton.
Imagine a string ‘3 of 55 or one of the best’ being fed to the automaton. Seeing the char-
acter ‘3’ will move it nowhere, and neither will the space character. When the automaton sees the ‘o’ it
moves to ‘state 1’. The ‘f’ will cause a reset and also a move to ‘state 1’, thus no change. This
will continue for all the letters, numbers, and spaces up until just after the letter ‘e’ has been
seen. The automaton is now in ‘state 1’, and seeing the space character moves it to ‘state 2’.
The next character is the letter ‘o’, the only letter not causing a reset of the automaton, but
rather a move to ‘state 3’. From ‘state 3’, the character ‘f’ is seen, moving it to ‘state 4’, and
then the space character moves it to ‘state 5’. The next character that is seen is the letter ‘t’
which is between a and z and it moves the automaton to ‘state 6’. ‘State 6’ is accepting, so
the given string is said to be accepted by the automaton, and the rest of the string need not be
evaluated.
A finite automaton is a deterministic finite automaton if, for all states, there is at most one transition that includes each character [17], i.e. there is no character that can move the au-
tomaton into two states at the same time. A finite automaton can also be non-deterministic,
in which case it is allowed to have transitions with the same characters in their labels, thus
leading to several states at the same time [17]. By definition, a deterministic automaton is also
a non-deterministic automaton [17].
Another feature of the non-deterministic automaton is the ε-transition, which represents a silent transition, or empty transition [17]. For example, if the automaton in Figure 2.1 had an ε-transition between ‘state 4’ and ‘state 2’, and it moved to ‘state 4’, it would simultaneously be in
‘state 2’. This would cause the automaton to accept the same strings as before, only now the
characters ‘of’ can appear any number of times before the last space character. For example
the string ‘a man ofofofof honour’ would be accepted.
“Regular expressions are an algebra for describing the same kind of patterns that can be de-
scribed by automata” [17]. It is analogous to arithmetic algebra. An operand can be one of the
following [17]:
• A character
• The symbol ε, denoting the empty string
• The symbol ∅
• A variable whose value can be defined by a regular expression.
The operators of regular expressions are [17]:
• Closure, denoted by a star, e.g. R*, whose effect is “zero or more occurrences of strings in
R”.
• Concatenation, has no symbol, e.g. ab is the concatenation of a and b.
• Union, denoted by |. E.g. a|b, effectively a or b.
UNIX systems use regular-expressions like these to describe patterns not only in search tools
like grep but also in text editors and other tools [17]. However, UNIX adds some convenient extensions to regular expressions. Character classes are groups of characters inside square brackets that are interpreted as any of these characters. For example, [aghinostw] is the same as “any of the characters in the word ‘washington’ ” [17]. If a dash is put between two characters, like [a-z], all the characters between a and z in the alphabet are denoted [17]. UNIX also has symbols for the start and end of a line: ^ denotes the start, and $ the end, of the line. For
example ^[a-z]*$ will only match if the line consists of only letters between a and z. To ‘es-
cape’ characters, i.e. use the literal meaning, UNIX uses the backslash. For example to match
on an amount given in dollars one could use ^\$[0-9]*$. Here the first $ is escaped by the
backslash and is not interpreted as the end of line requirement, but the actual character $.
The example in Figure 2.1 is equivalent to the regular expression
[A-Za-z]+\ of\ [A-Za-z]+
Here the space characters are escaped. In clear text this regular expression accepts a string
consisting of “one or more letters between a and z, capitalised or not, then a space character, then the word of, then a space character and in the end one or more letters between a and z, capitalised or not”.
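As a quick illustration, the same pattern can be tested with Python's re module (where the space characters need no escaping); the test strings are the ones discussed in this section:

import re

pattern = re.compile(r"[A-Za-z]+ of [A-Za-z]+")

print(pattern.search("3 of 55 or one of the best"))  # matches "one of the"
print(pattern.search("a man of honour"))             # matches "man of honour"
print(pattern.search("a man ofofofof honour"))       # None: ' of ' never appears
                                                     # flanked by letters and spaces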
The SEMantic INTegrator (SEMINT) tool uses neural networks to identify attribute corre-
spondences between databases [18], i.e. attributes that have semantically equivalent values.
This is done by analysing metadata values extracted from relational databases [18].
The Automatch system uses Bayesian learning to match the attributes of two database schemas [19]. It relies heavily on knowledge provided by domain experts to find the matching attributes [19].
The Learning Source Descriptions (LSD) system for automatic data integration is presented
in a paper from 2001 [20]. It performs data integration, the process of creating a “mediated
table” that contains virtual columns that are mapped to several data sets through semantic
mappings. In this context, a semantic mapping is a definition of the relationship between a
column in the mediated table and each of the data source’s tables. The LSD system attempts
to automate the process of finding semantic mappings. For example, for a column named ‘pho-
nenumber’ in one data source’s table and ‘phoneno’ in another data source’s table, the seman-
tic mapping would define that the two columns have the same semantic meaning and therefore
should both be mapped to the same column in the mediated table. This enables the user to
make one query and get results from several different data sources.
The LSD system uses a set of base learners, a meta-learner and a prediction converter to auto-
matically discover the semantic mappings [20]. The LSD system operates in two phases: a training phase and a matching phase.
The training phase starts with the user providing the semantic mappings manually for a small
set of data sources, then the system uses these mappings together with data from the data
sources to train the learners [20]. Each of the learners learns from different parts and characteristics of the source data. For example, the name matcher is a base-learner that learns from the name of the XML tag. Another base-learner is the County-Name Recogniser,
which searches a database extracted from the web to verify if a value is a county name [20].
In the matching phase, LSD applies the learners to the new data set and combines their predictions in a ‘meta-learner’ [20].
The LSD system solves a different problem than this thesis seeks to solve; however, parts of the approach can be used in this thesis. The architecture presented in Chapter 3 is, on an ab-
stract level, inspired by the base-learner, meta-learner and prediction converter used in the
LSD system.
• The Vehicle Monitoring Service (VM) provides information about the current location
and expected activities for the buses [21].
• The Stop Monitoring Service (SM) serves information about vehicles arriving at and departing from a bus stop [21].
• The Situation Exchange Service (SX) serves deviation messages [7].
Requesting data from such a service is a complicated and tedious task, and the response is composed of many complex objects. It can prove difficult to make sense of the data and its
structure without the right tools.
Any SOAP web service is by definition self-documented. The specifications of all the data
fields exist in the Web Service Definition Language (WSDL) file. However, the data types de-
fined in the WSDL file are not generally available storage systems. Knowledge about how to
map these values to the available types in any specific storage system is needed in order to
store this data properly.
In 2000, Roy Fielding proposed an alternative to the SOAP protocol in his PhD thesis, the
Representational State Transfer (REST) principles [22]. Compared to SOAP, the new ap-
proach was much easier and less verbose, but also more prone to application errors. Mulli-
gan and Gracanin [23] present a set of tests that prove REST to be “more efficient in terms
of both the network bandwidth utilised when transmitting service requests over the Internet
and the round-trip latency incurred during these requests” [23]. This is probably part of the reason why REST seems more popular than SOAP.
While REST does not enforce any restrictions on serialisation of the data, the most commonly
used serialisation of transmissions is probably JSON.
With a view toward the general applicability of the solution presented in this thesis, beyond
this use case, the Kolumbus data will be retrieved, converted to JSON, and then used as input
to the system. The definitions provided by the WSDL files will act as a reference to what the
system should discover about the data.
2.6 Elasticsearch
“Elasticsearch is a real-time distributed search and analytics engine” [9]. It stores documents
in a flat-world, document-oriented database. A data point stored in Elasticsearch is referred
to as a document. This convention comes from the early days of Elasticsearch, when it was
used to store mostly documents. Elasticsearch differs from classical relational databases in sev-
eral ways. For example, there are no tables in Elasticsearch. One document is stored in one
place, instead of being spread over multiple tables or columns. This makes searching the doc-
uments fast. However, the result of a query will consist of complete documents, not parts or
aggregations of documents like one can get from relational databases [9].
Elasticsearch runs on top of Apache Lucene core, “a high-performance, full-featured text search
engine library written entirely in Java” [24]. Elasticsearch was first created as libraries and
helpers to make it easier for developers to work with Apache Lucene core.
Communicating with an Elasticsearch instance is done through one of two ports. Port 9300
for Java applications through Java specific protocols, and 9200 for other languages through a
RESTful interface.
Elasticsearch normally runs distributed in a cluster. A node in this context is one running in-
stance of Elasticsearch. While several instances of Elasticsearch can run on the same physical
(or virtual) machine, a node is typically one instance of Elasticsearch running on one machine.
Several nodes can form a cluster where one node is elected leader following the Paxos algo-
rithm [25]. When a leader node, for any reason, becomes unresponsive, another node will be
elected leader. Any node in the cluster can try to become the leader, but in order to accomplish
this, the node must have the votes of at least half of the other nodes, i.e. it must have votes
from a quorum. In the context of the Paxos algorithm, a quorum is the number of votes, or
a group of voters that form a majority in the system, i.e. more than half of the voters form a
quorum. The leader, or master node, is in charge of coordinating the cluster structure. How-
ever, queries do not need to go via this node, as every node can handle any request from a
client, ensuring high availability. [9]
The data stored in an Elasticsearch system is divided into shards, yet another type of container for data. Because an index potentially could contain more data than any hardware has ca-
pacity to store, the index can be divided into shards. Shards can live on several nodes, and one
node can have several shards. Each shard is responsible for its own set of data, and handles
storing, searching, and retrieving the data. While one node might hold several shards, it does not need to hold all of them; other nodes can search through the remaining shards.
There are two kinds of shards in the system, primary shards and replica shards. Each
replica shard is a replica of a primary shard. While write operations must involve the primary
shard, a read operation can safely be done off a replica shard. This division makes the system
highly available and fault-tolerant. [9]
The address of a shard is based on some routing value, _id by default. Given a document with routing value R, being stored to an instance with a total of N_shards primary shards, the shard that the document is stored in, S_doc, is given by

S_doc = hash(R) % N_shards

where % denotes the modulo operator and hash() is some hashing function [9]. This makes it easy to find which shard a document is located in, but it makes the number of primary shards of an index immutable. If the number of primary shards were ever to change, all the documents already indexed would have to be re-indexed, which is possible but costly.
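The rule above can be illustrated with a small Python sketch; the hash function here is a stand-in for the one Elasticsearch actually uses, so the concrete shard numbers are not meaningful:

def shard_for(routing_value, n_primary_shards):
    """Return the primary shard a document with this routing value maps to."""
    # Stand-in hash; Elasticsearch uses its own internal hash function.
    h = sum(ord(c) * 31 ** i for i, c in enumerate(str(routing_value)))
    return h % n_primary_shards

# The same _id always maps to the same shard, which is why the number of
# primary shards cannot change after the index has been created.
print(shard_for("document-42", 5))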
When a document is created or deleted in Elasticsearch, the node that receives the request for-
wards it to the node that holds the primary shard for the document. If this request is success-
ful at the primary shard, the node that has the primary shard will forward the request to the
node(s) that hold the replica shards. When the replica shards also are successful in perform-
ing the operation, a confirmation is sent back to the node that received the request, and back
to the client. By default, the node with the primary shard must first check that it has contact
with a quorum of the replica shards before starting the write operation. This behaviour can
be controlled by the consistency option in the system configuration, but it could cause ma-
jor consistency problems to turn it off. There is also a timeout option; if the required shard copies do not become available within the timeout, the request fails.
To retrieve a document, on the other hand, the receiving node will just need to find a shard
that has the document requested, and send the request to that node, regardless of whether
it is a primary or replica shard. The nodes that have the requested shard will take turns in
handling retrieval requests in a round-robin fashion.
Elasticsearch is optimised for search, and to do this well the data must be mapped and anal-
ysed before indexing. A GET request to /_search is the simplest way of searching in Elastic-
search, and it will return a list of hits, by default ten hits. Each of these hits will have a prop-
erty _score which represents how good a match the hit is for the query. A GET request to
/{index}/{type}/_search forms a query that will search only within the specific index given
by {index}, and only for documents that are of the document type given by {type}. [9]
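As an illustration, a search restricted to one index and document type could be sent like this from Python; the index, type, and field names are hypothetical and Elasticsearch is assumed to run locally:

import json
import urllib.request

# Hypothetical index, document type and query; uses the URI search shorthand.
url = ("http://localhost:9200/sensordata/sensor_event/_search"
       "?q=DeviceType:ManufacturerModel5100")
with urllib.request.urlopen(url) as response:
    result = json.load(response)

# By default at most ten hits are returned, each with a relevance _score.
for hit in result["hits"]["hits"]:
    print(hit["_score"], hit["_source"])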
The value of the mappings property is an object that defines the mappings of the data that is
to be stored. A simple example of a mapping can be seen in Example 3.1. If dynamic mapping
is enabled on the Elasticsearch instance, only the fields that are expected to not be identified
properly by the dynamic mapper need to be specified. A mapping for a field can be updated after index creation, but that will only affect future indexing of new or re-indexed documents [9].
Elasticsearch supports a number of different data types for the fields in a document [26]. Some
are obvious counterparts to the data types defined by JSON, while others are more complex
and specialised data types. The choice of data type for a field in Elasticsearch is important for
the performance of the whole Elastic stack.
The main benefit of choosing the correct data type is increased searchability. There are also general performance benefits. For example, when storing a number with a defined number of decimals, Elasticsearch has the data type scaled_float, which stores the number as a long with a scaling factor. The value is scaled up upon storage and down on retrieval. “This is mostly helpful to save disk space since integers are way easier to compress than floating points” [27]. A data field that has the wrong data type will, generally speaking, be less searchable than if it had the correct type. In many cases, though, the distinction between the various data types is not clear-cut, and it can be hard to find the correct data type for a data field.
The following general data types can be mapped to a variety of possible Elasticsearch data
types.
Numbers can be either long, integer, short, byte, double, float, half_float, or scaled_float
in Elasticsearch. The integer types, long, integer, short and byte, i.e. the whole number
types, differ in their possible maximum values. Choosing the smallest integer type whose range covers the maximum value found in the data helps indexing and searchability. However, storage is optimised for the actual values that are stored, and not by
the capacity of the data type, so the choice of data type will not affect the storage demand
[26].
A date can be represented either by a string containing formatted dates, a long number
representing milliseconds-since-the-epoch, or an integer representing seconds-since-the-epoch.
Elasticsearch will convert the date to UTC, if the timezone is specified, before storing it as the
Elasticsearch data type date [26].
A Boolean in JSON can be stored as boolean in Elasticsearch. The Boolean type also ac-
cepts the strings “true” and “false” as Boolean values. Elasticsearch versions prior to 5.3.0 also
accepted the strings “off”, “no”, “0”, “” (empty string), 0, and 0.0 as Boolean false and all other values as Boolean true, but this is deprecated in newer versions.
Range data types are also supported in Elasticsearch, and can be either integer_range,
float_range, long_range, double_range, or date_range. The field that represents a range
should be a JSON object with any number of range query terms, like gte and lte representing
“greater than or equal” and “less than or equal”, respectively [26].
An array of values in JSON can be a list of values in Elasticsearch. Actually, there is no explicit data type for arrays in Elasticsearch. Instead, any Elasticsearch field can contain a list of values, as long as the values all have the same Elasticsearch data type [26].
Objects are not supported by the Lucene core, since it only handles one level of values.
However, Elasticsearch hides this fact by flattening objects using dot notation prior to storing
them [26].
Nested objects are a specialised version of the object datatype that allows arrays of objects to be indexed and queried independently of each other in Elasticsearch [26].
GeoJSON types can be stored using the datatype geo_shape. It is used to represent a geo-
graphic area. Elasticsearch supports the GeoJSON types point, linestring, polygon, multipoint,
multilinestring, multipolygon, geometrycollection, envelope and circle [26].
In addition to the mentioned types, there is also a list of specialised types: IP, Completion, Token count, mapper-murmur3, Attachment and Percolator.
2.7 Logstash
While it is possible to add documents directly to Elasticsearch using the REST API, it is often
preferable to use a program like Logstash to collect and prepare the data before it is stored.
Logstash is a “data collection engine with real-time pipelining capabilities” [29]. One running
instance of Logstash can take input from different sources, prepare the data as defined by the
configuration, and send the data to storage. Logstash can send the data to a variety of differ-
ent storage systems, however, it is developed and marketed as part of the Elastic stack.
A Logstash pipeline consists of three different stages, input, filter and output. Inputs gener-
ate events, filters modify them and outputs ship them elsewhere [29]. The three stages are all
configured in the same configuration file.
The format of the Logstash configuration file looks like it might be inspired by JSON, but it
is not valid JSON. According to the source code the format is custom-made using Treetop, a
Ruby-based package for parsing domain-specific languages, based on parsing expression gram-
mars [30].
2.7.1 Input
The input section of the configuration file defines how and where the input data will arrive and
how it is to be handled. There are a number of input plugins available that enable Logstash
to read events from specific sources [29]. For example, the file plugin streams events from a file, and the http plugin receives events sent over HTTP or HTTPS. An example of the input section can be found
in Example 2.1. This configuration enables Logstash to receive JSON encoded events using
TCP on port 5043.
input {
  tcp {
    port => "5043"
    codec => json
  }
}
Example 2.1: An example of the input section of a Logstash configuration file. The Logstash
instance will here receive events using the tcp plugin on port 5043, and the event data is expected to be encoded as JSON.
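For illustration, a single event could be delivered to this input with a few lines of Python; the field names and values are made up, Logstash is assumed to listen on localhost, and for a continuous stream of events the line-oriented json_lines codec may be a better fit than json:

import json
import socket

event = {"id": "1234", "timestamp": "2017-04-11T09:27:31+02:00",
         "bus_loc": {"latitude": 58.938334, "longitude": 5.693542}}

# Send the event as one JSON document terminated by a newline.
with socket.create_connection(("localhost", 5043)) as connection:
    connection.sendall((json.dumps(event) + "\n").encode("utf-8"))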
2.7.2 Filter
There are a number of filter plugins that can be used in the filter section of the configuration.
Filter plugins perform intermediary processing of an event [29]. In Example 2.2 one plugin is used, mutate, and two of its operations are performed. First, four field names are changed, and then the types of two fields are converted, or at least set explicitly.
2.7.3 Output
As for the output section of the configuration, it too uses plugins. In Example 2.3 two plugins
are used. First, the stdout plugin writes events to the standard output of the program using the rubydebug codec. Then the elasticsearch plugin is used to ship the data to an Elasticsearch instance at localhost:9200. The event will be indexed in the index named testindex and be of
type testtype.
filter {
  mutate {
    rename => {
      "id" => "provider_side_id"
      "timestamp" => "measurement_time"
      "[bus_loc][latitude]" => "[bus_loc][lat]"
      "[bus_loc][longitude]" => "[bus_loc][lon]"
    }
    convert => {
      "[bus_loc][lat]" => "float"
      "[bus_loc][lon]" => "float"
    }
  }
}
Example 2.2: The filter section of a Logstash configuration file for the motivating ex-
ample. This configuration renames the field id to provider_side_id, timestamp to measurement_time, and the latitude and longitude fields inside bus_loc to lat and lon, and converts the latter two to float.
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "testindex"
    document_type => "testtype"
  }
}
Example 2.3: An example of the output section of a Logstash configuration file. This output
section uses two plugins. First the stdout plugin sends events to the standard output, and the second plugin sends the same data to the Elasticsearch instance at localhost:9200. The event will be indexed in the index named testindex and be of type testtype.
Chapter 3
Architecture
To collect and store data using a data collection pipeline it is important to know how the data
is structured and what the various fields represent. Some information can be derived from the
field names, and there might also be other documentation available. However, assumptions
made from field names and dataset documentation might not be enough. Looking at sample
data is often a good way to get more knowledge, but it can be very time consuming.
To automate the configuration of a data collection pipeline, the data must be explored and analysed automatically. The domain expert must be given a chance to edit the results of this analysis, and the results can then be used to generate configuration files.
The goal for this project is to develop a software system that can automatically generate a set
of configuration files for a data collection pipeline, based on some sample of the expected data.
The main focus is the challenge of automatically configuring new data collection pipelines for
the data collection platform being developed at UiS.
One option for software to be used in the data collection platform is the Elastic stack, a software stack consisting of several programs, the most important being Elasticsearch, Logstash and Kibana. Elasticsearch is “a real-time distributed search and analytics engine”
[9]. Logstash is “a data collection engine with real-time pipelining capabilities” [29]. Kibana is
“an analytics and visualisation platform designed to work with Elasticsearch” [31]. All three
are open source projects. The Elastic stack provides a system well suited for collecting, trans-
forming, storing, searching and visualising data.
Another document-store option is Apache CouchDB [32]. CouchDB is easy to use, schema free, scales linearly and focuses on being an ‘offline-first’ type of database [32]. It supports offline operation, for example on a smartphone, and can synchronise with the main database when it is back online. However, CouchDB does not support data
types other than those of JSON [32], which would make the data less structured than it would
be in Elasticsearch.
This thesis describes a system that can automate the collection and storage of smart city data
in the Elastic stack. Mapping definitions for Elasticsearch and the filter section of the Logstash
pipeline configuration, are generated based on sample data.
An architectural overview of the solution is presented in Section 3.1, including the software
system called SDModel. The name comes from the idea of making a Semantic Data Modelling
tool. The motivating use case is continued in Section 3.2, where the data provider is presented.
The process of analysing the data is presented in Section 3.3. Reviewing and editing the data model is presented in Section 3.4, before generating the output files is presented in Sec-
tion 3.5.
Figure 3.1: Overview of the system architecture. The data provider collects sensor data and
delivers data to the data collection and storage pipeline which passes the data on to the stor-
age. Here the data provider also sends some sample data to the engine. The engine serves the
discovered data model through a web interface where domain experts can view and edit the
data model and generate configuration files for the pipeline.
Figure 3.1 shows an overview of the system architecture. The data provider col-
lects sensor data and delivers data to the data collection and storage pipeline which passes the
data on to the storage. The data provider also sends some sample data to the engine. The en-
gine analyses sample data provided by the data provider. From this it generates a data model
that can be viewed or edited by a domain expert, through a web interface. The data model
shows how the data is structured, what data types the various fields are, and results of the
analysis. The domain expert can then generate configuration files that can be used in the data
collection and storage pipeline configuration.
PUT /busdataindex
{
  "mappings": {
    "bus_event_doc_type": {
      "properties": {
        "bus_loc": {
          "type": "geo_point"
        }
      }
    }
  }
}
Example 3.1: JSON structure for creating an Elasticsearch index with a document type bus_event_doc_type in which the bus_loc field is mapped to the geo_point type. The index is created by sending this struc-
ture in a PUT request to /busdataindex, which is the new index name.
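With this mapping in place, a document whose bus_loc field holds lat and lon values can be indexed as a geo_point; the sketch below is illustrative, with made-up values and a local Elasticsearch instance assumed:

import json
import urllib.request

doc = {"bus_loc": {"lat": 58.938334, "lon": 5.693542}}
request = urllib.request.Request(
    "http://localhost:9200/busdataindex/bus_event_doc_type/1",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",  # PUT with an explicit document id of 1
)
urllib.request.urlopen(request)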
filter {
  mutate {
    rename => {
      "id" => "provider_side_id"
      "timestamp" => "measurement_time"
      "[bus_loc][latitude]" => "[bus_loc][lat]"
      "[bus_loc][longitude]" => "[bus_loc][lon]"
    }
    convert => {
      "[bus_loc][lat]" => "float"
      "[bus_loc][lon]" => "float"
    }
  }
}
Example 3.2: The filter section of a Logstash configuration file for the motivating
use case. This configuration renames the field id to provider_side_id, timestamp to measurement_time, and the latitude and longitude fields inside bus_loc to lat and lon. The convert operation enables the value of the bus_loc field to be interpreted as a geo_point in the Elastic stack; however, the field must also be explicitly assigned that data type (as in Example 3.1).
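To make the effect of this filter concrete, the two dictionaries below show an illustrative event before and after the renames and conversions; the values are hypothetical:

# Hypothetical event as it arrives from the data provider.
before = {
    "id": "1234",
    "timestamp": "2017-04-11T09:27:31+02:00",
    "bus_loc": {"latitude": "58.938334", "longitude": "5.693542"},
}

# The same event after the filter in Example 3.2: renamed fields and float
# values, now matching the lat/lon format expected by the geo_point mapping.
after = {
    "provider_side_id": "1234",
    "measurement_time": "2017-04-11T09:27:31+02:00",
    "bus_loc": {"lat": 58.938334, "lon": 5.693542},
}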
point in the sample data set and creates a schema of this data that will be the basis for the
data model. The next step is then to find statistical characteristics of the sample data, and
make an ordered list of possible storage level data types for each of the data fields.
data field that has been renamed will have the new name in the mappings, and if the name has
not been changed by Logstash, Elasticsearch will not find these fields.
Chapter 4
Design
To automatically collect and store data, the system must get to know the data, the same way a domain expert would if the process were done manually. The SDModel system must check every value of every field in the sample data set before forming an opinion on what data type each field represents. When all fields have been identified by data type, instructions on how to transform the data and handle the values can be generated.
This chapter will present an overview of the design of the data collection system. The concept
of discovering the schema of the sample data is presented in Section 4.2. The concept behind
each part of the data analysis process is presented in Section 4.3. Section 4.4 presents the pro-
cess of limiting, prioritising, and selecting an appropriate data type for each field. The presen-
tation of the model, and use of the system is presented in Section 4.5. Section 4.6 concludes
the chapter with the concepts behind generating the configuration files.
Figure 4.1: The overall design of the system. The data provider sends some sample data to the engine, where it is passed to the ‘Schema discoverer’. Next, the sample data is run through a set of ‘Analysers’ that analyse the values of every field of every data point. After this, the
‘Data type inferrer’ will suggest a list of possible data types for each of the data fields. The
data model is served through a web interface where domain experts can view and edit the data
model. Logstash filter configuration and Elasticsearch mappings can be generated from the
data model.
{
  "RecordedAt": "2017-04-11T09:27:31.6814296+02:00",
  "Temperature": "21",
  "DeviceType": "ManufacturerModel5100",
  "Location": {
    "Latitude": 58.938334,
    "Longitude": 5.693542
  },
  "BatteryLevel": 65
}
Example 4.1: Example of JSON encoded event data from a temperature measurement de-
vice.
Given a data set consisting of JSON formatted data from some temperature measurement de-
vice, where a random reading from the sensor looks like the one in Example 4.1, how can its
schema be discovered? Finding the schema for this structure is done recursively by passing the
data point, a parent object and a key to a function discoverSchema. The data point is the
root of the data object, i.e. the outermost object in Example 4.1. The parent object can be
an empty data model object in the initial call; however, the data model that is created will be passed along in the recursive calls to the algorithm. If the data point that is passed is a
string, number or boolean, the algorithm will add a metadata key called samplevalue with
the value of the document and then add this data object to the parentObject and return it. If
the data point that is passed is of type object the algorithm will call itself recursively, with
the key and value of each of the properties as key and document, and the newly created data
object as the parentObject. If the document is an array the algorithm will call itself recur-
sively once for each element in the array, but with no key. This results in every recursive call
overwriting the last and only the last item in the array will be used to form the data model
below the array.
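The recursion described above can be summarised in a short Python sketch. The function and dictionary keys below (discover_schema, samplevalue, data_objects) follow the description in this section, but the exact names and return values in the SDModel implementation may differ; the json_type helper is purely illustrative.

def json_type(value):
    # Map a Python value to the JSON type names used in the data model.
    if value is None:
        return "null"
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, (int, float)):
        return "number"
    if isinstance(value, str):
        return "string"
    if isinstance(value, list):
        return "array"
    return "object"

def discover_schema(document, parent_object, key=""):
    # Every node is a plain dict: name, JSON type, metadata and child data objects.
    node = {"name": key, "type": json_type(document), "metadata": {}, "data_objects": []}
    if isinstance(document, dict):
        # Object: recurse into each property with the new node as parent.
        for child_key, child_value in document.items():
            discover_schema(child_value, node, child_key)
    elif isinstance(document, list):
        # Array: recurse once per element with no key; each call overwrites the
        # previous one, so only the last element shapes the model below the array.
        for element in document:
            node["data_objects"] = []
            discover_schema(element, node)
    else:
        # Leaf value: keep one sample value as metadata.
        node["metadata"]["samplevalue"] = document
    parent_object["data_objects"].append(node)
    return node

# Usage: build a partial model for a reading like the one in Example 4.1.
root = {"name": "_root_object", "type": "object", "metadata": {}, "data_objects": []}
discover_schema({"Temperature": "21", "Location": {"Latitude": 58.938334, "Longitude": 5.693542}}, root)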
In the case in Example 4.1, the root object would be passed in first, and since it is an object
the algorithm would call itself recursively once for each of its properties. All values except Location are made into data objects and returned directly. The data objects are then added to the root object’s data objects list. Location becomes the parameter to another recursive
call to the algorithm, which in turn returns Latitude and Longitude as data objects. The
Location object is then added to the root object’s data objects list. This would result in a
data model represented by Figure 4.2.
[Figure 4.2 diagram: a tree of data objects. The root (type: object) holds data objects such as RecordedAt (type: string, samplevalue ”2017-04-11T09:27:31.6814296+02:00”) and Location (type: object), which in turn holds data objects such as Latitude (type: number, samplevalue 63.36718).]
Figure 4.2: Partial data object created from the JSON encoded event data from the temperature measurement device shown in Example 4.1.
This algorithm only requires one document to generate a schema, but that one document might
not be an adequate representation of the dataset as a whole. In order to make the data model a more comprehensive representation of the available sample data, all the documents of the sample data are analysed, and the model must be updated accordingly. This is done by the extended algorithm listed in Algorithm 4.2.
The benefit of passing the parent object as an argument is clearer here, as most of the fields
analysed will be fields that already exist in the parent object. It is easy to imagine that a field
may have a null value in some documents and another value in another document. If only the
null-valued document is explored, the field will have no data type, but if the field is explored
again with for example a string value, the field will indeed be marked as a string type field.
In the case where all fields in the parent object are already explored, there is still value in iter-
ating over more sample data, as these extra sample data objects will confirm that the model is
correct. The more sample data that is being used to generate the data model, the more proba-
ble it is that unseen data will be properly represented by the data model.
The first set of analysers find the empirical relative frequency of some characteristic of a field.
The characteristic is defined by a discriminant function, f , that takes one value as input and
returns a 1 if the value satisfies the characteristic, and a 0 if not. The empirical relative fre-
quency over the sample data X = [x_0, x_1, ..., x_n] is used as an estimator, \hat{P}, of the probability that the field has the characteristic given by the function f. This estimator is defined as
\hat{P}(f) = \frac{\sum_{x \in X} f(x)}{n}    (4.1)
The empirical relative frequency, P̂ (f ), is the probability that a randomly selected data point
in the sample data has this characteristic. The probability of this data field having the same
characteristic in a new data point from the same data source is also estimated to be P̂ (f ).
This is further illustrated by Example 4.2.
The fact that a field exists and has a string value in one data point does not necessarily mean
that it will exist in all data points in the data set. The number of null values can be a good
measure for how many data points have values for the specified field.
Counting null values is done by iterating over all the data points in the sample data, and counting the number of null values and the total number of values for each field. If a field is missing
in a data point, this counts as a null value. The algorithm that finds the empirical relative
frequency of null or missing values is shown in Algorithm 4.3. The result value is added as a
metadata value under the key p_null.
In this case the discriminant function, f_null, is defined by

f_{\text{null}}(x) = \begin{cases} 1, & x = \text{null} \\ 0, & x \neq \text{null} \end{cases}    (4.2)
As an example, imagine a data set with 1000 data points, where one of the data fields has the value null in 440 of the data points. Following Equation 4.1, the empirical relative frequency of null values, \hat{P}(f_{\text{null}}), is 0.44 in this case:

\hat{P}(f_{\text{null}}) = \frac{440}{1000} = 0.44

Example 4.2: Example of a data set where 440 of 1000 values are null. The empirical relative frequency of null values is then calculated to be 0.44.
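A minimal Python sketch of this estimator and the null discriminant is shown below; the function names relative_frequency and f_null are illustrative and not taken from the SDModel code.

def relative_frequency(values, discriminant):
    # Empirical relative frequency from Equation 4.1: the fraction of values
    # for which the discriminant function returns 1.
    values = list(values)
    if not values:
        return 0.0
    return sum(discriminant(x) for x in values) / len(values)

def f_null(x):
    # Discriminant function from Equation 4.2.
    return 1 if x is None else 0

# Example 4.2 revisited: 440 null values out of 1000 gives p_null = 0.44.
sample = [None] * 440 + ["some value"] * 560
assert abs(relative_frequency(sample, f_null) - 0.44) < 1e-12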
Sometimes fields that are of a numerical nature get stored as strings, perhaps by mistake or
for simplicity. However, if the field always contains numbers, it might be valuable to convert
the value of the field to a numeric data type.
To check if a field might be numerical the sample data is iterated using the same approach as
Algorithm 4.3. The discriminant function in this case is one that checks if a string value is accepted by the regular expression "^[+-]?[0-9]+([.,][0-9]+)?$". The corresponding finite automaton is shown in Figure 4.3. This regular expression accepts numbers, signed numbers, numbers with
comma as decimal mark, and numbers with punctuation mark as decimal mark. If a value is
null it will be ignored and thus neither counted as a numeric value nor in the total number of
values.
The relative frequency of numeric fields in the sample data set is added to the object_metadata
under the key p_number.
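This check can be sketched in Python using the regular expression above; the exact pattern used in the SDModel code may differ slightly, and f_number is an illustrative name.

import re

# Optionally signed digits with a comma or period as decimal mark.
NUMBER_RE = re.compile(r"^[+-]?[0-9]+([.,][0-9]+)?$")

def f_number(value):
    # Discriminant: 1 if the string looks numeric, 0 otherwise.
    # Null values are skipped by the caller, so they count neither way.
    return 1 if isinstance(value, str) and NUMBER_RE.match(value) else 0

print(f_number("21"), f_number("-3,5"), f_number("+4.25"), f_number("route 4"))  # 1 1 1 0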
In some systems, Boolean values are represented by strings, or they may have been converted to strings at some point. Strings that represent boolean values are easy to find, at least if there is a limited set of strings that are expected to be interpreted as booleans. The algorithm that discovers possible boolean values runs through the dataset using the same procedure as Algorithm 4.3, only here it checks if string values are in the lists of true or false boolean values.
Figure 4.3: Non-deterministic finite automaton modelling the regular expression for recog-
nising a number disguised as a string. The input string must start with a digit, plus sign, or
minus sign. This moves the finite automaton to state 1, which is an accepting state. If there is
a comma or punctuation mark, the automaton will move to state 2, which is not an accepting
state. State 2 demands at least one number to move to state 3, which is an accepting state.
According to the documentation for Elasticsearch [34], it would previously accept a list of values (false, "false", "off", "no", "0", "" (empty string), 0, 0.0) interpreted as the Boolean value false. Any other value was interpreted to be the Boolean value true. This feature was deprecated in version 5.3. The strings from this list, however, were used to build the lists used in this system: TrueBooleans = {‘true’, ‘t’, ‘yes’, ‘on’, ‘1’} and FalseBooleans = {‘false’, ‘f’, ‘no’, ‘off’, ‘0’}. These are not exhaustive lists of possible representations of boolean values, but they cover common variants.
If a field is of data type number and the sample data only has values 0 or 1 for the field, that
will also cause it to be determined as a possible boolean value.
The empirical relative frequency across the sample data set of possible boolean values in a
field, will be added to the metadata under the key p_bool.
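A sketch of the string-to-boolean discriminant is given below; the case-insensitive comparison and whitespace stripping are assumptions, and the separate 0/1 check for numeric fields is handled per field rather than per value.

TRUE_BOOLEANS = {"true", "t", "yes", "on", "1"}
FALSE_BOOLEANS = {"false", "f", "no", "off", "0"}

def f_boolean(value):
    # Discriminant: 1 if the string is in either list of boolean representations.
    if isinstance(value, str):
        return 1 if value.strip().lower() in TRUE_BOOLEANS | FALSE_BOOLEANS else 0
    return 0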
Recognizing dates
Date or time values are usually represented as either a string or a number. In the case of a string there are many different formats; they depend not only on locale, but different generating systems also use different representations. Some values represent only the date while others contain the time as well. In the case of the value being a number, the most used format is the number of seconds since the UNIX epoch, January 1st 1970 [35]. Some systems provide the time since the epoch in milliseconds. Recognising that a string represents a date is a tedious task; however, most modern programming languages have some date parser built in, and most of them recognise dates in the most used formats. This thesis will thus use a date parser for Python called dateutil [36] to check if strings are dates.
The function that checks if a value might be a date will try to parse a date from the value us-
ing the dateutil date parser. If the parser is successful, it returns a 1, or true, if it is not suc-
cessful a 0 or false is returned. The empirical relative frequency of a field being a date will be
set on the object metadata under the key p_date.
If p_date is between 0.8 and 1.0, the value of the metadata property samplevalue will be con-
verted to unix time and stored in the metadata property _date_unix. This conversion to a
unified format is done to make it easier to present the sample value in a human-readable for-
mat when presenting the data model.
If the field is a numeric type it might be a unix timestamp, and according to the unix time definition [35] all real numbers represent some time, e.g. the number 1 is January 1st 1970 00:00:01.
Figure 4.4: The regular expression that accepts strings that could be Base64 encoded binary
strings, represented by a non deterministic finite automaton. The regular expression will ac-
cept any string that consists of the characters A-Z, a-z, 0-9, +, = and /. The total number of
characters in the input string must also be a multiple of 4.
However, the number 1 might also be a boolean, a route number or any other numeric value. To handle this problem with a simple solution, the number must be between 473385600 and 2051222400 to be considered a date. The result is that only numbers representing dates between January 1st 1985 and January 1st 2035 will be considered. This can lead to wrong assumptions, and should be checked extra carefully by the domain expert.
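Both checks can be sketched in one Python function; dateutil.parser is the parser mentioned above, the bounds mirror the 1985-2035 window, and the function name f_date is illustrative.

from dateutil import parser as date_parser

# UNIX seconds for 1985-01-01 and 2035-01-01: numbers outside this window are
# not treated as timestamps.
UNIX_MIN, UNIX_MAX = 473385600, 2051222400

def f_date(value):
    # Discriminant: 1 if the value plausibly represents a date, 0 otherwise.
    if isinstance(value, bool):
        return 0
    if isinstance(value, (int, float)):
        return 1 if UNIX_MIN <= value <= UNIX_MAX else 0
    if isinstance(value, str):
        try:
            date_parser.parse(value)
            return 1
        except (ValueError, OverflowError):
            return 0
    return 0

print(f_date("2017-04-11T09:27:31+02:00"), f_date(1495540962), f_date(1))  # 1 1 0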
Bytes strings
Base encoding converts binary data to an ASCII string, so that any binary data can be stored as a string. Base64 encoding uses 65 characters of the US-ASCII character set to encode 6 bits in each character; the 65th character is = and is used for padding [37]. The Base64 en-
coding process represents 24-bit groups of input as output strings of 4 encoded characters [37].
If the input is only 8 bits the rest will be padded with = [37]. The length of a valid Base64
string must then be a multiple of 4, and it can only consist of characters from the US-ASCII
character set. To check if a string might be a Base64 encoded byte value, the string is tested
against the regular expression "^([A-Z0-9a-z=/\+]{4})+$", which accepts letters, digits, +, / and =, in groups of four. This regular expression does not accept a string with less
than four characters.
The relative frequency of fields that might possibly be strings representing byte values is added
to the object with the key p_bytes.
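A sketch of the Base64 discriminant using the regular expression above; the function name f_bytes is illustrative.

import re

# Groups of four characters from the Base64 alphabet, with '=' allowed for padding.
BASE64_RE = re.compile(r"^([A-Z0-9a-z=/\+]{4})+$")

def f_bytes(value):
    # Discriminant: 1 if the string could be a Base64 encoded byte value.
    return 1 if isinstance(value, str) and BASE64_RE.match(value) else 0

print(f_bytes("U21hcnQgY2l0eQ=="), f_bytes("bus"))  # 1 0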
An easy way to differentiate numeric values is whether they have decimals or not. A number without decimals, i.e. an integer, performs better in most systems. Each numeric value in
the sample data is checked for decimals. The empirical relative frequency of integers is stored
under the object metadata key p_integer.
Figure 4.5: Non deterministic finite automaton modelling the regular expression for recognis-
ing a latitude coordinate. The input string can start with +, - or nothing to move to state 1.
A digit between 0 and 9 will take it to state 2, and if the digit is between 0 and 8 it will also
move to state 3. Should the next character be a punctuation mark or comma, the automaton will move to (only) state 4; if it is a digit, it will move to (only) state 2. To get from state 4
to state 6 two or more digits between 0 and 9 are needed. State 6 is the accepting state.
Figure 4.6: Example of a box plot. The horizontal lines on the left and right side represent the min and max values. The grey box is defined by the 5th and 95th percentile values, and the red horizontal line represents the median. It is easy to see that the values are relatively evenly distributed and that the median is in the middle of the figure. This example was generated from the values min: 1495540962, max: 1495541000, mean: 1495540981, median: 1495540981, variance: 240.6667, 5th percentile: 1495540964, 95th percentile: 1495540998.
Elasticsearch also accepts a string with latitude and longitude values separated by a comma as
a valid geo_point. These values are found by grouping the regular expressions and requiring a comma between them.
The relative frequency of values that might be the string representation of a geo_point, are
stored in the object metadata under p_geo_point_string.
Uniqueness
How many unique or distinct values a field has can be a good measure for discovering fields
that represent categories or that otherwise can be used for grouping. Uniqueness can also re-
veal fields where all the values are the same. A measure called uniqueness [40] is given by the
number of distinct values divided by the total number of values. This is the same as the em-
pirical relative frequency of unique values.
The algorithms suggested for this in [40] use either hashing or sorting. To sort all the values of
the field, they would need to be stored in memory at full length. This can be very memory inten-
sive. Adding just the hashes of the values to a set would require less memory, but more CPU
power. In the analysis of the sample data set, capacity for a bigger sample data set is more
important than the speed of the process, so hashing the values is definitely the best procedure.
Three values are stored in the object_metadata: unique_num, the number of unique values; unique_total_values, the total number of values counted; and unique_uniqueness, the former divided by the latter.
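A sketch of the hashing approach in Python is shown below; whether null values are skipped and which hash function is used are assumptions, not details taken from the SDModel code.

import hashlib

def uniqueness_summary(values):
    # Count distinct values by adding their hashes to a set, so only the digests
    # (not the full values) are kept in memory.
    hashes = set()
    total = 0
    for value in values:
        if value is None:
            continue  # assumed: missing values are not counted
        total += 1
        hashes.add(hashlib.sha1(str(value).encode("utf-8")).digest())
    ratio = len(hashes) / total if total else 0.0
    return {"unique_num": len(hashes), "unique_total_values": total, "unique_uniqueness": ratio}

print(uniqueness_summary(["route-4", "route-4", "route-7"]))
# {'unique_num': 2, 'unique_total_values': 3, 'unique_uniqueness': 0.666...}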
One way of visualising the statistical properties of a numeric value is through a box plot, such
as in Figure 4.6. The box plot is useful in the web interface, for the domain expert to get a
quick sanity check and an overview of the field’s values in the sample data set. A box plot will
intuitively reveal if there are outliers in the values, if the values are evenly distributed around
the median, how widely spread the values are compared to the minimum and maximum bor-
ders, and more. Summary statistics can also be created for aggregated values, like the number
of characters, or number of words in a string field.
To create a box plot describing a set of values one needs to know the minimum and maximum
values, the mean value, the median, and the 5th and 95th percentiles. Given a set of values
X = x_0, x_1, ..., x_n, the minimum value is the value in the set that has the lowest value, and
the maximum value is the value with the highest value. The mean value x̄ is given by
\bar{x} = \frac{\sum_{x \in X} x}{n}    (4.3)
The median, \tilde{x}, is the middle element. If n is odd, the median is given by

\tilde{x} = x_{n/2}    (4.4)

If n is even, the median is given by

\tilde{x} = \frac{x_{n/2} + x_{n/2+1}}{2}    (4.5)
The 5th percentile, 5p, is the value that 5% of the samples are less than, and is given by

5p = x_{\lceil n \cdot 0.05 \rceil}    (4.6)

Accordingly, the 95th percentile, 95p, is the value that 95% of the samples are less than, and is given by

95p = x_{\lceil n \cdot 0.95 \rceil}    (4.7)
For a numeric field, all these values are calculated and stored as one JSON object under the key numeric_value_summary in the object metadata. This provides the data to show a box plot in the web interface, as well as the statistics themselves.
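The calculation can be sketched as follows; the 1-based indexing of Equations 4.4-4.7 is mapped to Python's 0-based indexing, so the edge-case handling is an approximation, and the use of the population variance for the variance value shown in Figure 4.6 is an assumption.

import math

def numeric_value_summary(values):
    # Summary statistics behind the box plot: min, max, mean, median, variance
    # and the 5th/95th percentiles of a sorted sample.
    xs = sorted(values)
    n = len(xs)
    mean = sum(xs) / n
    if n % 2 == 1:
        median = xs[n // 2]
    else:
        median = (xs[n // 2 - 1] + xs[n // 2]) / 2
    p5 = xs[max(math.ceil(n * 0.05) - 1, 0)]
    p95 = xs[min(math.ceil(n * 0.95) - 1, n - 1)]
    variance = sum((x - mean) ** 2 for x in xs) / n
    return {"min": xs[0], "max": xs[-1], "mean": mean, "median": median,
            "variance": variance, "5p": p5, "95p": p95}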
The length of the strings can contribute to an intuitive understanding of a field in further analysis of the dataset. To find the summary statistics of the string lengths, all the string lengths for a given field in the sample data are collected, and the summary values are calculated and stored as in Section 4.3.2, under the key string_length_summary.
The number of words in a string can also contribute to the understanding of a field, and can help decide whether the string represents a keyword or is a full text field. Counting the number of words in a string is done by splitting it on spaces and counting the number of results. The values are calculated and stored as in Section 4.3.2, under the key word_count_summary.
Figure 4.7: Identification tree for a JSON number type field. The grey boxes are tests, based
on previous analysis. The numbers in parentheses are the weight of each suggestion. For ex-
ample, if the number is not an integer, the best suggestion is double because it has the highest
weight. The rest of the types in that box are also possibilities.
If a field is of type Boolean in the JSON representation, it is very probable that it is best stored
as a Boolean in Elasticsearch. However, one might want to store it as a keyword or a text.
The list of possible Elasticsearch field types for a JSON boolean field is Boolean on top, then
keyword as a second possibility, and text in case the domain expert insists.
If a data field has original JSON data type number there are several possible Elasticsearch
type matches. Figure 4.7 shows an identification tree for JSON values. The gray boxes repre-
sent constraints and the white rounded boxes show the possible data types that will be added
should they be reached. The number behind the data types is the weight of the suggestion.
The weight of a suggested data type is a number that is used to sort the suggestions, i.e. the
data type with the higher weight is more probable. It shows that if p_integer is less than 1,
i.e. there is at least one value that has decimals, the list of floating point data types is sug-
gested. If p_integer is 1, the list of integer data types is added, and p_date and p_boolean are tested as well.
The list of integer number types is ordered by maximum number capacity in descending order, so if the top one is chosen it will accept the largest range of numbers. If the domain
expert knows that values in a field will never be larger than the maximum value of a short in
Elasticsearch, choosing this will give better performance. If p_date is bigger than 0.8, the data
type date is added to the list of suggestions. p_date only has to be bigger than 0.8 because
dates are often misinterpreted by the date parser. Since this suggestion has weight 5, it will be
added above the integer types. There is still a possibility that the field represents a Boolean
value, and if p_boolean is 1, the data type boolean will be added to the top of the list.
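The reasoning in Figure 4.7 can be sketched as a small Python function; the numeric weights below are illustrative (only the relative ordering described in the text is taken from the thesis), and the type names are the standard Elasticsearch numeric types.

def suggest_number_types(metadata):
    # Return type suggestions for a JSON number field, highest weight first.
    suggestions = []
    if metadata.get("p_integer", 0) < 1:
        # At least one value has decimals: floating point types only.
        suggestions += [(4, "double"), (3, "float"), (2, "half_float"), (1, "scaled_float")]
    else:
        # Integer types ordered by capacity, widest first.
        suggestions += [(4, "long"), (3, "integer"), (2, "short"), (1, "byte")]
        if metadata.get("p_date", 0) > 0.8:
            suggestions.append((5, "date"))        # placed above the integer types
        if metadata.get("p_boolean", 0) == 1:
            suggestions.append((6, "boolean"))     # placed at the top of the list
    return [t for _, t in sorted(suggestions, reverse=True)]

print(suggest_number_types({"p_integer": 1, "p_date": 0.95}))
# ['date', 'long', 'integer', 'short', 'byte']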
In Elasticsearch, there is no explicit data type “array”. Instead all types can contain lists of
values, as long as they are of the same type [26]. In other words, the Elasticsearch data type of
a field with arrays of values, should be the data type shared by all the values in the array.
However, in the case of an array containing objects this might not be the best approach. Lucene
core does not support nested objects. An array of objects will be flattened internally, and each
data field of the inner object will become one array containing the values from all the object’s
corresponding fields. Thus the relationship internally in each object is lost.
For example, take the case of storing an object that represents two bus trips that are planned
for the next hour. Each has an origin and a destination field. The request to store the object
is given in Example 4.3. This example has been adapted from [41].
Example 4.3: JSON structure for the indexing of an array of two trips, each with an origin
and a destination.
Because of the Lucene core, Elasticsearch will flatten these and store them as in Example 4.4.
Example 4.4: Example of how Elasticsearch would store the request from Example 4.3
The relationships inside the independent objects are then removed. A search for a trip from
Stavanger to Haugesund, i.e. a trip with origin Stavanger and destination Haugesund, would
be successful, even though the original data specifies no such trip.
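The bodies of Examples 4.3 and 4.4 are not reproduced here, but the effect can be illustrated with trip values consistent with Example 4.5; the exact field values in the original example may differ, so the sketch below only shows the shape of the problem.

# A document with an array of trip objects, roughly the shape indexed in Example 4.3.
indexed_document = {
    "group": "planned-trips-next-hour",
    "trips": [
        {"origin": "Haugesund", "destination": "Aksdal"},
        {"origin": "Forus", "destination": "Stavanger"},
    ],
}

# Roughly how Lucene sees it after flattening (Example 4.4): the pairing of origin
# and destination within each trip is lost.
flattened_document = {
    "group": "planned-trips-next-hour",
    "trips.origin": ["Haugesund", "Forus"],
    "trips.destination": ["Aksdal", "Stavanger"],
}

# A query for origin "Forus" AND destination "Aksdal" now matches this document,
# even though no single trip in the original data goes from Forus to Aksdal.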
To avoid this type of behaviour, and make each object in the array searchable and indepen-
dent, there are two possible approaches. One is the Elasticsearch nested data type, and the
other is Logstash split plugin. In the case of Elasticsearch’s nested data type, Elasticsearch
will put each trip in separate hidden documents and search them when the main object is
queried in a nested search. Through this approach it would appear to the user that the trips
Figure 4.8: Identification tree for a JSON array type field. If the objects contained in the
array are of type object, suggest split, nested or object, if not, use the type of the objects
contained in the array.
are objects in a list that is a field on the parent object. However, when used extensively, this approach can damage performance.
Logstash’s split plugin is not a data type, but a filter plugin for Logstash that will split the
document into two separate documents, duplicating all properties that the documents are ex-
pected to have in common. However, for simplicity in the development of this thesis, split
will be considered to be a special case data type. The result of the split plugin on this ex-
ample is two independent objects with the same schema, but different values for the fields
trips.origin and trips.destination. The result is shown in Example 4.5.
{
    "group": "planned-trips-next-hour",
    "trips.origin": "Haugesund",
    "trips.destination": "Aksdal"
},
{
    "group": "planned-trips-next-hour",
    "trips.origin": "Forus",
    "trips.destination": "Stavanger"
}
Example 4.5: Example of how Logstash’s split plugin would transform the request from
Example 4.3
The possible Elasticsearch types for what is an array field in JSON are split with the highest weight, nested as the alternative, and object lowest.
A field with JSON data type object can have the data type object in Elasticsearch too; even though it will be flattened, Elasticsearch maintains the object properties. However, when it is
an object in the JSON representation, it could also be a representation of a value that would
be better stored as one of the Elasticsearch data types range, geo_point or geo_shape. A
range in Elasticsearch is an object that represents a range on some scale, by defining specific
boundaries. A geo_point in Elasticsearch is a type that represents a geographical location,
while a geo_shape represents a geographic area.
If the object is an Elasticsearch range data type, it must be either an integer, float, long, double or date range. Ranges are recognised by the properties they must contain: the range
query parameters. The parameters are “greater than or equal”, “less than or equal”, “greater
than” and “less than”. These are abbreviated gte, lte, gt and lt respectively. The data type of
the range is defined by the data types of the range boundary parameter values.
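A sketch of the range test is shown below; treating any non-empty object whose keys are all range parameters as a range is an assumption about how strict the real check is.

RANGE_KEYS = {"gte", "lte", "gt", "lt"}

def looks_like_range(obj):
    # Suggest the range type when every property is a range query parameter.
    return isinstance(obj, dict) and bool(obj) and set(obj) <= RANGE_KEYS

print(looks_like_range({"gte": 10, "lt": 20}), looks_like_range({"Latitude": 58.9}))  # True False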
The object might also be a geo_point. Elasticsearch accepts geo_points in four different formats:
• An object with latitude and longitude properties.
• A string with the latitude and longitude separated by a comma.
• A geohash string.
• An array with the longitude and latitude as elements.
If the object is an object representation of a geo_point, it must have one field with p_lat == 1 and one with p_lon == 1. The two properties must also have names that contain ‘lat’ for the latitude, and ‘lon’ or ‘lng’ for the longitude.
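A sketch of this geo_point test follows; requiring exactly two properties and passing the per-field metadata in a dictionary are simplifications of the real check.

def looks_like_geo_point(obj, metadata_by_field):
    # One latitude-like and one longitude-like property, each with p_lat or p_lon
    # equal to 1 in the field metadata collected by the analysers.
    if not isinstance(obj, dict) or len(obj) != 2:
        return False
    lat = [k for k in obj if "lat" in k.lower()
           and metadata_by_field.get(k, {}).get("p_lat") == 1]
    lon = [k for k in obj if ("lon" in k.lower() or "lng" in k.lower())
           and metadata_by_field.get(k, {}).get("p_lon") == 1]
    return len(lat) == 1 and len(lon) == 1

print(looks_like_geo_point({"Latitude": 58.94, "Longitude": 5.69},
                           {"Latitude": {"p_lat": 1}, "Longitude": {"p_lon": 1}}))  # True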
An object might also be of the geo_shape data type. Elasticsearch accepts geo_shapes
encoded by the GeoJSON format. According to [38] all GeoJSON object types that are ac-
cepted by Elasticsearch have a data field type that describes the GeoJSON type of the object,
e.g. line, polygon, or rectangle. If the object in question has a field named type and the
value of that field exists in the list of GeoJSON types accepted by Elasticsearch then geo_shape
will be suggested as an Elasticsearch data type for the object.
The identification tree for the JSON type object is shown in Figure 4.9.
Fields that are strings in the JSON representation of the data have the largest list of possi-
ble Elasticsearch types. A string can be a text, keyword, date, a number, boolean, range,
geo_point or geo_shape.
The identification tree for strings is shown in Figure 4.10. If p_number is 1, then the sample data
values should be converted to numbers and run through the analysers again. The resulting
suggestions should be added to this list of suggestions. The same approach applies to other string values that can be converted to a different underlying type.
Figure 4.9: Identification tree for a JSON object type field. If the object has properties named lte, gte, gt or lt, it is probably some kind of range. If it has a property type with a value corresponding to one of the geo shapes accepted by Elasticsearch, it is probably a geo_shape. Checking p_lat and p_lon can tell if this is a geo_point, and at the bottom of the list is always the object type.
[Figure 4.10 diagram: the tree for a JSON string type field branches on tests such as p_date == 1, p_number == 1, p_boolean == 1, p_geo_point_string and p_keyword > 0.8; one branch suggests binary with weight 5.]
Figure 4.10: Identification tree for a JSON string type field. The relative frequencies determine the data type suggestions; if p_date == 1, date is suggested with weight 5. p_text and p_keyword need only be more than 0.8 to suggest text and keyword respectively.
All actions in the system are started from the command line interface. All interaction with
the command line interface is dependent on the current working directory, i.e. the directory
from which the command is run. Any command will use the current working directory to save
or load the data model file. For example, to create a new data model, move to the directory
where the data model file should be stored and run sdmcli init. An empty model will be
created in that directory. The data models are always named sdmodel.json, therefore two
data models can never reside in the same directory. The model file represents the current state of the system; all other events in the system happen in separate and independent operations, and all store their results in the model file.
Discovering a new model from sample data is done from the command line interface. It takes
the location of a JSON file that contains an array of documents as an argument. The model
file will be stored in the folder the command was run from. To review the discoveries of the
discoverer, one could print to screen the data model in the command line window, or start the
web interface.
Starting the web interface starts a local web server. It takes the data model as a JSON file
and serves this through a REST API. The API has an endpoint /model through which the
data model can be interacted with. It can be retrieved by a GET request to this endpoint, and
edited by sending an edited version as the body of a PUT request. To create a new data ob-
ject in the model, the POST method is used. It is also possible to delete an object using the
DELETE method, but in most cases it is better to mark the data object as deleted.
The server also serves a Javascript front end application that uses the REST API and lets the
user interact with the data model.
The data model is presented in the web interface as a tree containing data objects. At the root
is the _root_object. An example of a data object is shown in Figure 4.11. All data objects have an Original Name, which is the field name in the JSON representation. New Name is initially set to the same as the original name, and can be edited to give the field a different name in storage. The field JSON Type shows the JSON data type for the field, while Elasticsearch type shows the top suggestion for the Elasticsearch data type.
Figure 4.11: Example of a data field represented in the web UI. The vertical lines represent par-
ent objects. The original name of this field is “PublishedLineName”. The field “New Name”
can be edited to give the field a new name in storage. The JSON type of the field is string
and the system has suggested keyword as the best choice for Elasticsearch data type. A de-
scription can be added and there are 17 object_metadata items.
Metadata values are shown as they are; however, some metadata properties have graphical representations, for example summary statistics are presented as box plots. There is also a set of ‘internal metadata’ properties, which are hidden by default, but can easily be made visible.
Figure 4.12 shows the data object from Figure 4.11 with the metadata section expanded and
some values edited.
Some of the values in the data model can be edited in the web interface by clicking the pen icon next to the field in question. When a field is edited and the save button is clicked, the new version of the data object is sent to the backend and then the interface is updated. The object is sent through the REST API by an Ajax (Asynchronous JavaScript and XML) engine that handles the transaction with the server without blocking the user interface. While waiting for the request to return, however, the field that has been edited is blocked from further
editing. The back end handles an update request directly on the model file and saves this to
disk for every request. Once the request is returned, the front end will reload the model from
the back end. In Figure 4.12, the New Name has been changed to “PublishedRouteIdentifier”,
and this is the name it will be stored as. A short description of what the field represents has
also been added.
The user can request the filter section of a Logstash configuration from the web interface without passing any arguments. Since the input and output parts of the configuration file mainly contain data provider and data storage specific information, they are not included in the output.
Figure 4.12: Example of a data field represented in the web UI, with the “Metadata” field expanded. In this case the “New Name” has been changed, and this field will get the name “PublishedRouteIdentifier” upon storage. The estimated probabilities are visible in the metadata section, as they have been presented previously in this chapter. Since this field has data type string in the sample data, there are two box plots in the metadata. The string length summary statistics show that most values have very few characters, while only a few values have many characters. From the word count box plot it is clear that there are few words in each of the values: over half have only one word, and no value has more than two words.
The Logstash filter generator will iterate over the data model recursively several times and add
configuration snippets to the string that will become the response. The first step is to traverse
the model recursively looking for arrays where the split option is chosen. This must be done
first for the addressing of underlying elements to work correctly later. An example of the snip-
pet that is added to the filter configuration is shown in Example 4.6.
if [Answer] {
    split {
        field => "[Answer][VehicleMonitoringDelivery]"
    }
}
Example 4.6: An example of usage of the Logstash split plugin. Each element of the array field [Answer][VehicleMonitoringDelivery] becomes a separate event.
The second step is the transformations. Snippets configuring the transformation of any data field that requires an explicit data type transformation will be added to the filters string at this step, for example if a field is a string in the JSON representation and integer is chosen as the Elasticsearch data type. An example of the string that is added to the filter configu-
ration is shown in Example 4.7. The snippet in Example 4.7 will convert all the values of the
data field [Answer][StopPointRef] to integers.
mutate {
    convert => {
        "[Answer][StopPointRef]" => "integer"
    }
}
Example 4.7: An example of usage of the Logstash mutate plugin. The value at
[Answer][StopPointRef] is explicitly converted to an integer.
Dates are also in some cases transformed. If the date in the JSON representation is given in
UNIX time, i.e. seconds from the epoch, this must be converted to milliseconds from the epoch
before Elasticsearch can use them. Example 4.8 shows an example of the string that is added
for such dates. The line starting with match defines which data field to match on, and what
plugin-specific data type the data field is expected to have. The line starting with target is
required to store the resulting value in the same field.
date {
    match => [ "[Answer][DestinationAimedArrivalTime]", "UNIX" ]
    target => "[Answer][DestinationAimedArrivalTime]"
}
Example 4.8: An example of usage of the Logstash date plugin. The line
starting with match defines that the date plugin should act upon fields named
[Answer][DestinationAimedArrivalTime] and expect it to be of type UNIX. This type
is specific to this plugin. The line starting with target sets the output of the plugin; storing it in the same field must be defined explicitly.
If the name of a data object has been changed in the web interface, that must result in a trans-
formation in the Logstash filter configuration. To rename a field, the mutate plugin is used.
The rename operation takes as arguments a key-value list, where the current name is key, and
the new name is value. An example is shown in Example 4.9.
mutate {
    rename => {
        "[VehicleLocation][Longitude]" => "[VehicleLocation][lon]"
    }
}
Example 4.9: An example of renaming a field using the Logstash mutate plugin. The plugin
has an operation called rename that takes a key-value list as argument. Here with the current
name [VehicleLocation][Longitude] as key, and the new name [VehicleLocation][lon]
as its value.
In the web interface each object can be ‘marked for removal’; the result is a metadata property _marked_for_removal that is set to true. If this metadata property exists and is set to true, the field and its value are to be removed in the Logstash filter. This is also a job for the mutate plugin. The operation remove_field takes one string argument. An example of removing a
field can be seen in Example 4.10.
mutate {
    remove_field => "[VehicleLocation][srsName]"
}
Example 4.10: An example of removing a field using the Logstash mutate plugin. The plugin
has an operation called remove_field that takes a string as argument. Here the field with the name
[VehicleLocation][srsName] will be removed from all data points.
Elasticsearch mappings can also be requested from the web interface. However, in this case,
the user must pass the wanted ‘index name’ and ‘document type name’ as arguments. The
mappings are a JSON-encoded object containing an object for each of the fields in the data
model. Each object has a field named ‘type’ that contains the data type of the data that is expected in that field. It also has a field ‘properties’ that is a list of objects representing any child objects. Generating these mappings is done by traversing the data model, and adding each of the data objects from the data model representation to the JSON object. If the data object has an Elasticsearch type that corresponds to one of the Elasticsearch types that expect object data types, like range or geo_point, the recursion will stop. The content of such objects cannot be defined in the mappings, but is defined implicitly by the data type.
{
    "mappings": {
        "exampletype": {
            "properties": {
                "RecordedAt": {
                    "type": "date"
                },
                "BatteryLevel": {
                    "type": "long"
                },
                "DeviceType": {
                    "type": "keyword"
                },
                "Temperature": {
                    "type": "long"
                },
                "Location": {
                    "type": "geo_point"
                }
            }
        }
    }
}
Example 4.11: An example of an Elasticsearch mapping creation request that was generated
by the system. The document type that the mappings are created for is named exampletype.
Chapter 5
Implementation
A system for automatic data exploration and configuration generation will in some cases be
required to process large amounts of data. There might also be several domain experts coop-
erating on the same data set configuration. Therefore, such a system needs to be portable in
terms of running environment. It might also be beneficial if the system could be operated from a machine other than the one it runs on.
This chapter presents an overview of the implementation of the SDModel software system in
Section 5.1. The process of building up the data model is presented in Section 5.2. Section 5.3
presents the data type inferrer, and its implementation. Section 5.4 presents the implementa-
tion and use of the command line interface, from where all use of the system starts. The web
interface is presented in Section 5.5, before the output plugins’ implementation is presented in
Section 5.6.
An instance of the class SDMServer exposes a REST API that handles serving and editing the
data model. The input plugins communicate through a REST API exposed by an instance
of the SDMPluginServer, and the output plugins using SDMOutputPluginServer. All server
classes use functionality from the CherryPy package.
The output formats are defined by output plugins that inherit from the SDMOutputPlugin.
The class SDMElasticsearchOutputPlugin generates the request needed to create a mapping
for the data in Elasticsearch. The SDMLogstashOutputPlugin generates the filter section of a
Logstash configuration file.
CherryPy also serves the static files that make up the front end app. The Javascript front end
app is built by Node using the JavaScript module bundler Webpack that builds the Vue.js app
from the Javascript source files.
[Figure 5.1 diagram: sample data flows from the data provider into the SDModel software engine, which contains the SDMDiscoverPlugin, SDMEtypePlugin, SDModel, SDMServer, SDMPluginServer, SDMOutputPluginServer, SDMElasticsearchOutputPlugin and SDMLogstashOutputPlugin; the frontend app talks to the servers, and the generated configurations go to the configurable components and the storage.]
Figure 5.1: The implementation overview of the system. The data provider sends some sample data to the engine, which passes it to the SDMDiscoverPlugin. It discovers the schema and populates a data model with results from the analysers. The preliminary data model and data is then passed to the SDMEtypePlugin, which suggests a list of Elasticsearch types for each data field in the model. The data model, now an instance of the SDModel class, is then served through the SDMServer. The frontend app lets domain experts review and edit the data model. The SDMPluginServer and SDMOutputPluginServer serve, and handle the use of, a list of available plugins. The output plugins generate configurations from the data model. The domain expert can download, review, and add these to the data collection pipeline and storage manually.
[Figure 5.2 UML: The class SDModel has the attributes title, model_path and created_at, and the methods __init__(self, model, model_path), init_from_template(cls, path), init_from_path(cls, path), get_dict_repr(self), save_model(self), load_model(model_path), get_template(), path_is_usable_for_new_model(path) and path_is_usable_for_loading_model(path). The class SDMDataObject has the attributes name, prim_type, description, muid, object_metadata and data_objects, and the methods __init__(self, name, description="", object_metadata=None, data_objects=[], prim_type='object', muid=None), init_from_dict(cls, dict), merge_metadata(self, newdata), merge_suggestion(self, suggestion), delete_metadata(self, to_delete), add_data_object(self, data_object), get_data_object_from_muid(self, muid), delete_data_object_with_muid(self, muid) and add_meta_data(self, k, v, replace=False).]
Figure 5.2: UML diagram showing the classes SDModel and SDMDataObject and their rela-
tion.
5.2.1 SDMDataObject
An instance of SDMDataObject has an instance variable name, which reflects the field name in the sample data. It also has a variable prim_type that contains a string identifying the JSON data type of the field’s values in the sample data. These types must be one of the following values: string, number, boolean, array or object, in addition to null [33].
If the field is of type object it can have other data objects in its list of data objects. However, it may also reflect a field which has a string, number, boolean or null value, in which case the data objects list is empty. If the field is of type array, it will have one data object representing all the items in the array. The data object representing such members of an array will have an empty string as its name.
The class SDMDataObject also defines a variable description. It is meant to hold a short hu-
man readable description of the data object and what it represents.
Instances of the SDMDataObject class also have the variable object_metadata, a list of key-
value pairs that can hold information about the data object. The key must be unique to the
data object, and the value can be any of the JSON defined types. When analysers find infor-
mation about a data field, it will be stored in the list of object metadata. Metadata pairs that are intended only for use by the system itself, and that normally need not be exposed to the user, have their keys prefixed by an underscore, inspired by the Python convention for private variables [42]. These metadata pairs are hidden from the user by default, but can easily be dis-
played for a more verbose user interface.
Every data object also has the variable muid, a model unique identifier. When a data object
is first initialised, a Universally Unique IDentifier (UUID) Version 4 is generated. This is a 128-bit number that is pseudo-randomly generated and that guarantees ”uniqueness across space and time” [43]. There are five versions of UUIDs specified in RFC4122 [43], and they differ in what kind of input they use. Version 1 uses the current time and MAC address; version 2 is very similar to 1, but replaces a random part with a local domain value that is meant to reflect the system that generated the value. Versions 3 and 5 are name based and thus embed some name; they are often used in distributed systems with a node name as the name, preventing two nodes from producing the same UUID at the same time. Version 3 uses the MD5 hashing algorithm, while version 5 uses SHA-1. Version 4 uses only a pseudo-random value to generate the UUID. Since this system runs on one node, version 4 has been chosen. Version 1 would also work.
A new instance of SDMDataObject can be initialised either with just a name, or with any of
the values described above. There is also a class method for creating an instance from a dic-
tionary, making initialisation from JSON a lot simpler. By using the @classmethod decorator,
a method is treated as a static method that has access to the class it belongs to through the first argument to the function. This argument is by convention named cls. The static method
can then initialise and return an instance of the class, i.e. act as an initialiser for the class.
This is the Python approach for ‘overloading constructors’, as seen for example in Java. In
Java, two constructor methods can have the same name but different input parameters. This
enables the software to let the data type of the arguments determine which constructor to use,
and thus how to instantiate the class. In Python, method names must be unique to the class,
and a class can only have one instantiating method, the __init__() method. However, by us-
ing the @classmethod decorator on a method, it can act as an initialising method, and return
a new instance of the class, but take different input arguments.
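The pattern can be illustrated with a trimmed-down stand-in for SDMDataObject (the real class has more fields and methods, see Figure 5.2):

class DataObjectSketch:
    def __init__(self, name, description="", prim_type="object"):
        self.name = name
        self.description = description
        self.prim_type = prim_type

    @classmethod
    def init_from_dict(cls, data):
        # Acts as an alternative constructor: build an instance from a dictionary,
        # for example one produced by json.loads().
        return cls(name=data["name"],
                   description=data.get("description", ""),
                   prim_type=data.get("prim_type", "object"))

obj = DataObjectSketch.init_from_dict({"name": "Temperature", "prim_type": "string"})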
Retrieving a data object that is child of an instance of SDMDataObject, can be done either by
its name, or by muid. Because the name of a data object is unique only within the parent ob-
ject’s list, only the current data object’s list of data objects will be searched when searching
by name. A NameError is thrown if it is not found. The muid, however, is unique in the entire data model. If the object is not found within the current object, the method is called on all its data objects, so that all the data objects in the data model will be searched for the object with the correct muid.
The SDMDataObject has a method for JSON serialisation __jsonobj__ that can be used to
serialise the instance. This ensures the entire instance is serialised and unserialised correctly.
5.2.2 SDModel
The SDModel class inherits from SDMDataObject and is the root object in any data model. It can be initialised either with a path to an existing model or as an empty model. The addition that this class has over the SDMDataObject class is the ability to save and load itself from file. When the SDModel writes itself to the sdmodel.json file it uses its own and its child objects’ __jsonobj__ methods.
For JSON serialising, the Python json package is used. The result from serialising with this li-
brary is by default a JSON structure with no whitespace and no newlines [44]. This gives a lit-
tle increase in performance for machine readability. In this implementation, the indentation is
set to 4, making the serialiser add newlines and 4 spaces for each indentation. This contributes
to making the file more human readable.
Discovering the schema of the sample data is done by the SDMDiscoverPlugin. It iterates over
all the data points in the sample data, and passes each data point to a method _discover_schema
together with an instance of the SDMDataObject class (or one that inherits from it), named
parent_object. The first call to _discover_schema is done with an empty parent_object.
The method then traverses the data object recursively in a depth first manner. An instance of
SDMDataObject is instantiated for each of the data fields. The name and JSON data type of
the field are set, and the instance is added to the list of data objects on the parent object. If a
field is of type object, each of its data fields is passed recursively to the _discover_schema
method. The data object, i.e. the instance of the SDMDataObject, that was created for the ob-
ject is passed as the parent_object.
The _discover_schema function will first check if the object is already discovered in the model
before adding it. If the parent_object has a data object corresponding to the data field, it
will verify that the data type is the same and move on.
Once the schema is discovered, and a preliminary data model is instantiated, the SDMDiscoverPlugin
finds the relative frequencies of a predefined set of characteristics for the data fields.
The method _mark_string_field_as_possible_property takes three mandatory arguments,
obj_model, prop_key and disc_function. The first, obj_model, is the instance of SDMDataObject
that will get a new metadata property. The prop_key is the key that metadata will get. The
disc_function is the discriminant function for the characteristic in question. In Python, func-
tions are “First-class citizens”, meaning a function can be assigned to a variable, and passed
to another function. The disc_function is an example of this, and it can discriminate be-
tween data points that have a characteristic and those that do not. In addition, there are two optional arguments: prim_types, a list of primitive data types the data object must be one of in order to be checked, and threshold, which determines how high the relative frequency of a characteristic must be for it to be added to the object metadata list. By default, the threshold is zero, so any characteristic with a relative frequency higher than zero is added to the metadata list of the data object.
The SDMEtypePlugin adds a list of possible Elasticsearch data types to each data object in the
model, based on the analysis stored in the data object’s metadata. The root object is passed
to a method _add_e_types_to_model. This method will retrieve a list of possible types from
_get_e_types_for_object and add the first to a metadata value with key e_type. The list
of possible types is serialised to a JSON string and added as the value of the metadata key
_e_types.
The _get_e_types_for_object method consists of if clauses, one for each possible JSON data type.
Prior to this, an empty list, e_types, has been initialised, and inside each of the if clauses,
possible Elasticsearch data types are added to the list. For example if the primitive type is
boolean, the types boolean, keyword, and text are added to the list of possible data types.
The e_types list is then returned.
All parts of the SDModel system are started using the sdmcli command line interface. The
file sdmcli is a Python file that will call the method check_args(args) on the class SDMCli
if started from the command line. When a Python program is started from the command line,
a list of the arguments passed along can be retrieved by using the argv property of the Python
package sys. For example if the command “sdmcli init” is run, the sys.argv will be the list
[’sdmcli’, ’init’].
The first argument is always the name or path of the file that was executed, while the rest are
arguments to the application. In sdmcli the second argument refers to the command to run,
and the rest are arguments to the method that will execute the given command. For exam-
ple to initialise a new model the command is init and it takes one optional argument, title
given either by the -t flag or the more verbose --title flag. The full command to initialise a
model named ‘test’ then becomes sdmcli init --title=test. The args that are passed to
the check_args method in this case is [’sdmcli’,’init’,’--title=test’]. The first argu-
ment is discarded, the second argument refers to the command which to run, in this case init.
The init method is called with args as the argument.
The init method will check the arguments for the flag -t or --title. If present, this will be
used as the title of the model object. If no title is given, sdmcli will prompt the user for a ti-
tle. Then the method creates an instance of the SDModel, saves it to the current location and
exits.
A list of all commands and their arguments is available in Table 5.4.
The web interface provides a visual representation of the data model, and options for editing
the model. It consists of two parts, a server module that serves the model through a REST
API, and a client side JavaScript application that consumes this API through Ajax (Asynchronous JavaScript and XML) calls.
5.5.1 Server
The server is started from the command line interface using the command serve. This will
start four server modules, each with its own endpoint. The server modules use decorators from
CherryPy, a lightweight, object oriented web framework [45], to serve content.
The main module is the SDMServer; it provides a REST API at the /model endpoint. This endpoint supports GET, POST, PUT and DELETE requests in addition to the OPTIONS preflight request to allow Cross Origin Resource Sharing (CORS) requests (mainly used for development). All methods take one parameter, muid, which identifies the object to perform the requested action on.
A GET request will return the data object with the matching muid, or the whole model if the
muid parameter is set to the string “top”. A POST request will add a SDMDataObject to the
object that corresponds to the given muid. If the muid is the string “top” then the object will
be added to the SDModel directly. The body of the request must consist of a JSON encoded
version of the SDMDataObject that is to be added.
A PUT request updates the data object that has the corresponding muid with the JSON en-
coded values in the body of the request. DELETE is for deleting objects, a DELETE request
thus will delete the data object with the corresponding muid. If the muid parameter is missing
or set to top, this will not delete the model as one might expect; rather, nothing will happen.
To delete the model one has to delete the sdmodel.json file from disk.
The static files for the frontend are served by CherryPy directly at the / endpoint. Navigating
the browser to localhost:8080 after starting the server loads the frontend app.
The SDMPluginServer provides an API to handle input plugins at /plugins. A GET request
will return a list of the available plugins. Each plugin in the list is an object with four proper-
ties. name is the display name for the plugin while the class name can be found in class_name.
A short description is also included, as is a list named arg_components that specifies the argu-
ments needed to use the plugin.
To run an input plugin on the model, a POST request is sent to the /plugins endpoint with
the plugin object from the GET request as the body of the request. The argument components
should of course be edited as seen fit. This will return a list of merge suggestions, objects that
represent a suggested change in the model.
The merge suggestions that are accepted are then sent in a PUT request to /plugins, which will
execute them on the model.
The output plugins are served by the SDMOutputPluginServer. A list of available output plu-
gins is available through the endpoint /outputplugins. The usage of the output plugins is the same as for the input plugins, except that the response of the output plugins is typically the
text of a configuration file.
The front end is built with Vue.js, a “progressive framework for building user interfaces” [46].
Vue.js is a JavaScript framework that lets developers easily develop progressive single page web apps, web pages that resemble mobile or desktop apps [46].
In traditional web pages, the page is loaded synchronously, i.e. the user enters a url and the
[Figure 5.3 diagram: SDMWebInterface at the root of the component tree, with SDMMetadataRow and SDMDataObject among the components below it.]
Figure 5.3: The component tree showing the components of the Vue.js application and how
they are connected.
content on that page is loaded from the server and presented to the user. When a link is clicked,
the linked page is loaded from the server and presented. This means that the user has to wait with no page showing while the next page loads. In a progressive single page app the goal is to load as little as possible before presenting it to the user, and then load the rest of the content asynchronously. This way the user can start consuming the content faster. The content of connected pages can be loaded once the current page is done. When the user clicks a link to another page, the content is already loaded, and the transition is a matter for the user interface and needs no network interaction. This in turn causes no change in the URL, hence the name “single page app”.
Vue.js is a component based system, i.e. any component can contain HTML, CSS styles and JavaScript, and is reusable. Components use other components to lay out their content. The
app can then be viewed as a tree of components. Data is passed down this tree. If a compo-
nent lower on the tree wishes to change some piece of data it will do this locally and either
send it to the server or to the state manager. The state manager is a global state management
object that keeps track of shared data in the application [46]. Figure 5.3 shows the component
tree for the frontend app presented here.
Vue.js also implements a reactive data structure for the data in the web app. This means that
when a piece of data is changed by some part of the app, any view underneath it in the tree
that displays this data will also update [46]. When the data is changed in the state manager
all views will update.
The rest of this section will present each of the components used in the frontend app.
SDMWebInterface is the root element of the component tree. It lays out a component
TopMenu, some model information, and the components spinner, MetadataList and
DataObjectList. When this component is loaded and presented to the user it will set the
variable loading to true. This causes the spinner to be visible. It then dispatches an asyn-
chronous action getModel on the state manager, that will get the data model from server.
When the state manager has received the response, it will call the function that was passed
with the dispatch to stop the spinner.
Once the model is updated on the state manager, all components will get the updated version
and can present it to the user.
SDMMetadataRow displays a metadata pair, and takes three parameters: metakey, metaval, and muid.
If the user clicks the edit icon, the component changes to edit mode and the key and value become
editable. When saved, the component dispatches a setMetadata action on the state manager, which
sends the request to the server. If the request is successful, the state manager dispatches
another action, loadSDModel, that loads and updates the model. This makes the change visible in
the view.
SDataObject is the component that displays a data object. It has buttons for editing fields and
functions that update the data object on a per-field basis. This means that if a user updates the
description of a data object, that change is sent to the server immediately. The user can then
edit the type or any other part of the data object while still waiting for the description change
to be returned by the server. This is one of the places where asynchronous server communication
shows its benefits for the user experience.
This component also uses the SDataObjectList component, which is its parent in the tree, to show
a data object's child data objects, as well as an SDMMetadataList.
SDMTopMenu is the component that handles the top menu and the plugin interactions.
When the user chooses a plugin from the list in the top menu, a modal overlay is added that
covers the screen and lets the user configure the plugin. When the plugin is initiated, the
resulting list of merge suggestions can be accepted or rejected from the top menu's modal element.
5.6 OutputPlugins
The output plugins inherit from SDMOutputPlugin, which in turn inherits from SDMPlugin.
These relationships provide a basis for developing new plugins, and also encourage a similar
implementation for each of the plugins.
5.6.1 SDMLogstashOutputPlugin
The SDMLogstashOutputPlugin class is initialised by passing its arguments, model and plugin_opts,
to its superclass, SDMOutputPlugin. The arguments become the values of the instance variables
self.model and self.plugin_opts. The SDMLogstashOutputPlugin does not require any arguments from
the user, so plugin_opts could be discarded.
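A minimal sketch of this class structure is shown below; the method bodies and the exact base class definition are assumptions based on the description above, not the thesis source code.

class SDMOutputPlugin:
    def __init__(self, model, plugin_opts):
        self.model = model              # the discovered data model
        self.plugin_opts = plugin_opts  # user supplied options


class SDMLogstashOutputPlugin(SDMOutputPlugin):
    def __init__(self, model, plugin_opts=None):
        # Delegate to the superclass, which stores the arguments as instance
        # variables; this plugin does not use plugin_opts.
        super().__init__(model, plugin_opts)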
The get_output method on the class is called by the SDMOutputPluginServer to get the output from
the plugin. In the case of the SDMLogstashOutputPlugin the method is responsible for building up
the filter section of a Logstash configuration file. First a local variable filters is initialised
with the string ‘filter {\n’. Then all the data objects at the root of the data model are iterated
over, and filter configurations are appended to the filters variable.
As described in Section 4.6.1, the first filter configurations that are added are the uses of the
Logstash split plugin. This is done by passing the data object that is currently being evaluated
to the _get_splits method. The _get_splits method instantiates an empty string split_string and
checks whether the prim_type of the object is array and the e_type is ‘split’. If both are true,
it adds to split_string a snippet that splits the data point on the current data object. It then
iterates over the data objects of the current data object, and passes each as the data object in
a recursive call to itself, _get_splits. The result of each of these calls is added to
split_string before it is returned.
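A sketch of this recursion is shown below, written as a standalone function. The DataObject class and its attribute names are assumptions chosen to match the description; in the thesis the logic is the _get_splits method of the plugin.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataObject:
    # Stand-in for a data object in the data model; attribute names are assumptions.
    name: str
    prim_type: str
    e_type: str
    data_objects: List["DataObject"] = field(default_factory=list)

def get_splits(data_object: DataObject, path: str = "") -> str:
    split_string = ""
    field_ref = path + "[" + data_object.name + "]"
    if data_object.prim_type == "array" and data_object.e_type == "split":
        # One Logstash split filter per array that should become separate events.
        split_string += 'split { field => "' + field_ref + '" }\n'
    for child in data_object.data_objects:
        # Recurse and collect the split snippets of the children as well.
        split_string += get_splits(child, path=field_ref)
    return split_string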
The next configurations are the transforms. This is done using a similar approach: the root
object is passed as an argument to _get_transforms. The method _get_transforms instantiates an
empty string t_string and checks the prim_type of the data object.
If the prim_type is ‘string’, it checks whether the e_type is any of the floating point number
types: double, float, half_float, and scaled_float. If this test passes, a filter configuration
for the mutate plugin is appended to the t_string variable; in this case the filter is set to
convert the current object to a float. The same approach is used if the e_type is any of the
integer types long, integer, short, or byte, except that the filter is set to convert the value
to an integer.
If the prim_type is ‘number’, the e_type is ‘date’, and the metadata value for samplevalue is
between 473385600 and 2051222400, the field appears to be a numeric UNIX timestamp. The original
UNIX timestamp, as defined in Section 4.3.1, is given in seconds since the epoch. However,
Elasticsearch does not recognise a number as a date unless it is given in milliseconds since the
epoch. Therefore, if a field is numeric, has e_type ‘date’, and lies within the ‘seconds since
the epoch’ interval, it must be converted to milliseconds since the epoch. This is done by the
Logstash date filter plugin, and the snippet that configures it is added to the t_string.
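A sketch of these two branches is shown below. The attribute names prim_type, e_type, and name follow the description above, while the metadata dictionary, the field-path handling, and the exact snippet formatting are assumptions rather than the thesis source code.

FLOAT_TYPES = {"double", "float", "half_float", "scaled_float"}
INTEGER_TYPES = {"long", "integer", "short", "byte"}

def get_number_and_date_transforms(data_object, path=""):
    """Build string-to-number conversions and seconds-to-milliseconds date parsing."""
    t_string = ""
    field_ref = path + "[" + data_object.name + "]"
    if data_object.prim_type == "string" and data_object.e_type in FLOAT_TYPES:
        t_string += 'mutate { convert => { "' + field_ref + '" => "float" } }\n'
    elif data_object.prim_type == "string" and data_object.e_type in INTEGER_TYPES:
        t_string += 'mutate { convert => { "' + field_ref + '" => "integer" } }\n'
    elif (data_object.prim_type == "number" and data_object.e_type == "date"
          and 473385600 <= data_object.metadata.get("samplevalue", 0) <= 2051222400):
        # Let the Logstash date filter parse the seconds-since-epoch value so that
        # Elasticsearch receives milliseconds since the epoch.
        t_string += ('date { match => [ "' + field_ref + '", "UNIX" ] '
                     'target => "' + field_ref + '" }\n')
    return t_string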
If the prim_type is ‘object’ and the e_type is ‘geo_point’, several filter configurations are
added. The data objects of the data object are iterated over, and each is checked. If the current
data object's name contains the string ‘lat’, case insensitive, that field is renamed to ‘lat’.
If the current data object's name contains the string ‘lon’ or ‘lng’, case insensitive, that
field is renamed to ‘lon’. If neither of the two apply, the field is removed, that is, a Logstash
mutate filter is used to remove the field from the data. This is required because Elasticsearch
does not accept any properties other than lat and lon within a geo_point object. After this, a
filter configuration using the convert operation from the mutate plugin is added that explicitly
sets the lat and lon data types to float.
The _get_transforms method is then called recursively for all data objects in the data objects
list.
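The geo_point branch and the recursion could look roughly like the sketch below; as before, the attribute names and the path handling are assumptions based on the description, not the actual implementation.

def get_geo_point_transforms(data_object, path=""):
    """Rename lat/lon children, remove other children, and convert lat/lon to float."""
    t_string = ""
    field_ref = path + "[" + data_object.name + "]"
    if data_object.prim_type == "object" and data_object.e_type == "geo_point":
        for child in data_object.data_objects:
            name = child.name.lower()
            if "lat" in name:
                t_string += ('mutate { rename => { "' + field_ref + '[' + child.name + ']" => "'
                             + field_ref + '[lat]" } }\n')
            elif "lon" in name or "lng" in name:
                t_string += ('mutate { rename => { "' + field_ref + '[' + child.name + ']" => "'
                             + field_ref + '[lon]" } }\n')
            else:
                # Elasticsearch accepts nothing but lat and lon inside a geo_point.
                t_string += 'mutate { remove_field => "' + field_ref + '[' + child.name + ']" }\n'
        # Explicitly set the lat and lon data types to float.
        t_string += ('mutate { convert => { "' + field_ref + '[lat]" => "float" "'
                     + field_ref + '[lon]" => "float" } }\n')
    else:
        # Recurse into the children of every other data object.
        for child in data_object.data_objects:
            t_string += get_geo_point_transforms(child, path=field_ref)
    return t_string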
The field removals are the next set of configurations to be added to the filter configuration.
The method _removal_for_data_object is called with the data object as argument. It checks whether
the data object's metadata has a field _marked_for_removal. If it does, and it is true, the
Logstash mutate filter is used to set up Logstash to remove the field from the data.
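A small sketch of this check, under the same assumptions about the data object attributes as in the previous sketches:

def removal_for_data_object(data_object, path=""):
    """Emit a mutate filter that drops the field if it is marked for removal."""
    field_ref = path + "[" + data_object.name + "]"
    if data_object.metadata.get("_marked_for_removal"):
        return 'mutate { remove_field => "' + field_ref + '" }\n'
    return ""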
The last filter configuration type that is added is the renaming of fields. This has to be last,
because after a field is renamed, the new name is the only way to address the element, and all
the previous filter configurations operate on the original names. Renaming a data field is done
by the mutate plugin using the rename operation. It takes key-value pairs as argument, where the
key is the original name and the value is the new name.
5.6.2 SDMElasticsearchOutputPlugin
Chapter 6
Experiments and results
Data can come in many different formats and encodings, it can have different structures and
schemas, and it can be small or big. To test how well a system performs, one should, ideally,
test it against all the data it is made to handle. This is usually close to impossible. Testing
the system on a smaller set of data that represents the variety of data it is made to handle
might be possible, but even this is usually a very large amount of data.
Testing the system presented in this thesis is done using two different data sets. First it is
tested against a simulated data set based on the temperature measurement example in Section 4.2.
The second test uses real-world data from the Kolumbus Vehicle Monitoring service.
This chapter will present the approach used to create a simulated data set for testing the
system in Section 6.1. Section 6.2 will present the results of testing the system on the
simulated data set. The results from the real-world example of collecting data from Kolumbus'
Vehicle Monitoring Service are presented in Section 6.3.
To test the system, a data set that resembles the temperature example in Section 4.2 has been
developed. To make the data more realistic, the values of the fields are randomised and vary
within certain constraints.
RecordedAt is increased by 1 minute for each document, simulating a measurement every minute.
The value of the field Temperature is given by a random sample from a Gaussian probability
distribution with mean 15 and standard deviation 5. The result is that 68.2% of the measurements
lie between 10 and 20.
The value of the field Location is given by the location in Example 6.1, plus a random sample
from a Gaussian distribution with mean 0 and standard deviations 0.06 for latitude and 0.03 for
longitude. This causes the majority (68.2%) of the values to fall inside an ellipse, approximately
7 by 5 km, with the University of Stavanger at the centre.
The battery level decreases from 100 in the first document down towards zero in the last.
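The sketch below shows one way such a data set can be generated; the constants match the description above and Example 6.1 (shown below), while the number of documents, the function name, and the output file name are assumptions made for illustration.

import json
import random
from datetime import datetime, timedelta

def generate_simulated_data(n_documents=10000):
    """Generate temperature events resembling Example 6.1, with randomised values."""
    documents = []
    start = datetime(2017, 4, 11, 9, 27, 31)
    for i in range(n_documents):
        documents.append({
            # One measurement per minute.
            "RecordedAt": (start + timedelta(minutes=i)).isoformat(),
            # Gaussian sample with mean 15 and standard deviation 5, stored as a
            # string like in the sample data.
            "Temperature": str(round(random.gauss(15, 5))),
            "DeviceType": "ManufacturerModel 5100",
            # Gaussian noise around the base location from Example 6.1.
            "Location": {
                "Latitude": 58.938334 + random.gauss(0, 0.06),
                "Longitude": 5.693542 + random.gauss(0, 0.03),
            },
            # Battery level decreases from 100 in the first document towards zero.
            "BatteryLevel": round(100.0 * (n_documents - i) / n_documents, 1),
        })
    return documents

with open("simulated_temperature_data.json", "w") as f:
    json.dump(generate_simulated_data(), f, indent=2)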
{
  "RecordedAt": "2017-04-11T09:27:31.6814296+02:00",
  "Temperature": "21",
  "DeviceType": "ManufacturerModel 5100",
  "Location": {
    "Latitude": 58.938334,
    "Longitude": 5.693542
  },
  "BatteryLevel": 65
}
Example 6.1: JSON encoded event data from the temperature measurement device.
The first approach is to let Elasticsearch dynamically map the data types, i.e. not create any
index beforehand and just let Elasticsearch create the index and all the mappings. In this
approach there are no filters in the Logstash configuration.
The second approach is to use the SDModel software presented in this thesis to discover the data
model. In this approach all the sample data is run through the system, and the resulting data
model is used to generate both the Logstash filter configuration and the Elasticsearch index
mappings.
Results
Some of the results of the mappings are presented in Table 6.1. The complete results can be
seen in Appendix B.
The table presents the JSON type of the original data field in the first column. The second
column, Dynamic type, represents the result of dynamic mapping done by Elasticsearch. The
third column, SDModel type, represents the number one suggestion from the SDModel system.
The last column, GT type, is the ground truth, i.e. the perfect suggestion for the field.
The first field, RecordedAt, represents a date, and both methods suggested the date data
type, so both are correct. The field Temperature is a JSON string type field, however, in
all data points in the sample data it contains only numbers, so it should be converted into
numeric values. Dynamic mapping suggested text, while SDModel suggested float, which is the GT
type. If the field is text it will be searchable as text, and in some cases that is the required
behaviour. However, if the field is converted to a float, one can for example do range searches,
e.g. get all the measurements where the temperature is above 17 degrees. The DeviceType is
suggested to be data type text by dynamic mapping, while SDModel wants to use the keyword data
type. The GT type is keyword, but the difference here is not a big one: keyword is searchable
only by the complete term, while text is searched like a full text field with partial hits.
The Location field is interpreted as an object by dynamic mapping, causing it to not be
searchable at all. SDModel, however, suggests using the geo_point data type, which is also the
GT type. One of the benefits of the geo_point data type is that Elasticsearch can do queries
based on the geographic location.
The BatteryLevel field is a number in JSON and is interpreted as a float by dynamic mapping and
a long by the SDModel. In this case dynamic mapping is more correct than the SDModel: the battery
level is given with one decimal in all of the data points and should thus be a floating point
type. However, in all of the data points in the sample data the decimal is 0, causing the SDModel
to see it as an integer. If the SDModel had used the decimal it would have suggested double, and
been more correct in this case. Dynamic mapping still wins, however, as the float type can store
values up to 2^32, and the BatteryLevel field will probably never need more.
This experiment was specifically designed to make the SDModel software shine, and make
Elasticsearch’s dynamic mapping the loser. However, as will be shown in later examples, this
experiment is not that far from the real world.
6.3 Kolumbus VM data
The body of the request has one object, GetVehicleMonitoring. It in turn has a ServiceRequestInfo,
a Request, and a RequestExtension.
The ServiceRequestInfo contains information related to the request. It has the property
RequestTimestamp, reflecting when the request was sent according to the sender, and a
RequestorRef. The latter field is required to not be empty; here it is set to the author's email
address.
Next in the body is the Request, which contains information about what specific data is
requested. It has the same RequestTimestamp property as above. It also has a property
VehicleMonitoringRef that defines which vehicles to get monitoring data for; in this case ALL
refers to all available vehicles. The property MaximumVehicles could limit the request to a given
number of vehicles; in this case it is set to 0, meaning there is no limit.
Example 6.2: XML encoded request to the Kolumbus Vehicle Monitoring Service. The action is
GetVehicleMonitoring, and it is sent to Kolumbus' url. The body of the request has only the
required parameters. The ServiceRequestInfo object has two parameters: RequestTimestamp is the
current time, and RequestorRef can have any value, in this case the author's email address. The
Request object is required to have the version, at the moment 1.4. Furthermore it has the
RequestTimestamp again. The parameter VehicleMonitoringRef can be used to narrow down the number
of results; in this case, the status of all vehicles is requested. MaximumVehicles and
VehicleMonitoringDetailLevel have their default values, namely “0” and “normal” respectively.
The response from the service can be acquired by sending this request to the url given. The
response is XML encoded as well, however, to conduct this experiment it is converted directly
to JSON.
For this experiment, the Kolumbus VM Service was called 120 times with one minute between
each call. Due to the time spent by the requests themselves, the total time for the requests
was closer to 130 minutes. The expected structure of the response is available in Appendix A.
Because the structure of the response consists of several layers of nested objects at the root
level, Elasticsearch's dynamic mapping will map only the three properties directly below the root
node, namely Answer, AnswerExtension, and ServiceDeliveryInfo. The values of these properties are
treated as un-indexed objects. Therefore the results of dynamic mapping are omitted in this
experiment.
The data from the Kolumbus VM Service was saved to a file, as a JSON array with 120 items.
This data was used as the sample data for the SDModel system, and it suggested a data model.
The generated data model was then used to generate an Elasticsearch index request, and the
filter section of a Logstash configuration file.
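A minimal sketch of how such a collection run could be scripted is shown below. The endpoint URL, the way the request body is stored, and the use of the requests and xmltodict libraries are assumptions made for illustration; the thesis does not state how the calls were implemented.

import json
import time

import requests   # assumed HTTP client
import xmltodict  # assumed XML-to-dict converter

# Placeholder endpoint; the real Kolumbus VM Service url is not reproduced here.
URL = "https://example.com/kolumbus/vm-service"
with open("get_vehicle_monitoring.xml") as f:
    REQUEST_BODY = f.read()  # the SOAP request from Example 6.2

samples = []
for _ in range(120):  # 120 calls, roughly one minute apart
    response = requests.post(URL, data=REQUEST_BODY,
                             headers={"Content-Type": "text/xml"})
    # Convert the XML response directly to a JSON-compatible structure.
    samples.append(xmltodict.parse(response.text))
    time.sleep(60)

# Save all responses as one JSON array, used as sample data for the SDModel system.
with open("kolumbus_vm_samples.json", "w") as f:
    json.dump(samples, f)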
Results
Table 6.2 shows the structure of parts of the response from the Kolumbus VM Service, and the
results from the SDModel system. The complete results are available in Appendix B. The first
column is the field name; here the character - indicates that the field is a subfield of the
previous field with one less -. The second column shows the JSON data type of the value in the
field. The third column is the SDModel suggestion for the Elasticsearch data type of the field,
while the fourth column indicates the best choice of Elasticsearch type in the author's opinion.
The choice of data type is often a subjective one, and therefore there is no ‘ground truth’ in
this example. The last column indicates whether the SDModel was correct or not.
From Table 6.2 it is apparent that the data model is not a flat one; in fact there are up to
five levels of nesting. If the fields that have subfields are not counted, the data model has 110
fields. The analysis of the sample data shows that 75 of these fields had null values. That is,
approximately 68% of the data fields in the data set had no value in the 120 requests that were
made for this experiment. Further investigation into the data set documentation issued by
Kolumbus [7] shows that these fields are either not used, or not mentioned in the documentation.
The SDModel system marks these fields for removal by default; this can be corrected through the
web interface. Marking a field for removal results in a Logstash filter that removes the data
field from the data point before it is sent to Elasticsearch.
The fields that are arrays in the data set are given the data type split by default. split is not
really an Elasticsearch data type, but a Logstash filter that splits the data point into several
data points, one for each element in the array that is split. All values other than those in the
array are duplicated and appear in each of the new data points.
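The effect of a split can be illustrated with plain Python; this is a conceptual sketch of the semantics with made-up values, not Logstash's implementation.

def split_event(event, array_field):
    """Return one copy of the event per element of event[array_field]."""
    return [{**event, array_field: element} for element in event[array_field]]

event = {"ResponseTimestamp": "2017-05-01T12:00:00Z",
         "VehicleActivity": [{"VehicleRef": 1}, {"VehicleRef": 2}]}

for new_event in split_event(event, "VehicleActivity"):
    print(new_event)
# Both new events carry the same ResponseTimestamp, but each contains a single
# VehicleActivity element.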
The first field in Table 6.2 that the SDModel system did not get correct is the field
VisitNumber. The JSON type for this field is number, but the SDModel system suggests boolean. The
correct suggestion would be integer. The value of this field indicates how many times on the
current trip the vehicle has visited the upcoming stop [7]. One might imagine a situation where
this number was higher than 1, but in the Kolumbus route system the buses visit each stop at most
once per trip. In the sample data set this value was 1 or null in all of the 46920 data points.
The number of data points is so high because there are 391 vehicles that report their status in
each of the 120 requests, resulting in 391 * 120 = 46920 data points of the VehicleActivity field
and its child fields. The SDModel analysis reported the estimated probability of null values to
be 0.47 for this field. By default, the SDModel system suggests the boolean data type for any
field that contains only the values 0 or 1.
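A sketch of this default heuristic: suggest the boolean type when the observed values (ignoring nulls) are only 0 and 1. The function name and signature are assumptions, not taken from the SDModel source.

def suggest_boolean(values):
    observed = {v for v in values if v is not None}
    return len(observed) > 0 and observed <= {0, 1}

print(suggest_boolean([1, None, 1, 0]))  # True  -> boolean is suggested
print(suggest_boolean([1, 2, None]))     # False -> boolean is not suggested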
The next suggestion that is not entirely correct is the field Delay. It is a number in the JSON
representation, and the SDModel system suggested the data type long, while the correct answer was
integer. This happens to several of the fields; double is suggested over float too. The reason is
that the SDModel system does not know the maximum value for the field and thus chooses the data
type with the highest capacity.
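A possible refinement, sketched below, would be to pick the narrowest Elasticsearch integer type that covers the observed value range. This is not part of the SDModel implementation described above, only an illustration, and it can still underestimate the range if the sample data is not representative.

# Elasticsearch integer types and their (signed) bounds.
INT_RANGES = [("byte", 2**7), ("short", 2**15), ("integer", 2**31), ("long", 2**63)]

def narrowest_integer_type(values):
    lo, hi = min(values), max(values)
    for name, bound in INT_RANGES:
        if -bound <= lo and hi < bound:
            return name
    return "long"

print(narrowest_integer_type([-120, 0, 3600]))  # "short"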
The response consisted of a total of 118 data fields, including data fields with subfields. Of
these, 75 never had a value in the sample data, thus making the data type suggestion close to
impossible. Of the 43 remaining data fields, 30 got correct suggestions, 13 got partially correct
suggestions, and 1 got an incorrect suggestion.
Chapter 7
Conclusion
This thesis presents the architecture, design, and implementation of a software system, SDModel,
that can generate configuration files for big data collection pipelines based on a set of sample
data. In terms of selecting the data types and transformations of the data, the system is not
fully automatic. It requires that a domain expert validates, and sometimes corrects, the
suggestions given by the system. The renaming of data fields, to give them more semantic names,
is a task left for the domain expert to do while reviewing the data types.
The person who is to configure a system like the Elastic stack to store data from a new data set
must get familiar with the data set. Knowledge about the data set is important in order to
minimise the information loss that results from transforming the data from one system to another.
This can also contribute to avoiding the storage of redundant information. In the Elastic stack,
searchability is a valued feature, and optimising the configuration so that the data is easily
searchable requires knowledge not only about the actual data, but also about what it represents
and how it will be used.
The SDModel framework for automating configuration of data collection pipelines presents the
summary of a data set through a web interface, with an intuitive and informative design. All
the collected statistics are available through the web interface, and some properties leverage
graphical components to present the results of the analysis. This makes it easy for the person
configuring the data collection pipeline to get familiar with the data set, and reduces the need
to explore large amounts of sample data manually.
As the results of the experiments in Chapter 6 show, the SDModel is not always correct in its
number one suggestion. Suggesting data types is a complex task: data representing the same
real-world entity can come in many formats, and many different entities can be represented by the
same format. However, the SDModel narrows down the list of possible data types for a data field,
and if the number one suggestion is not correct, another suggestion may be.
In many cases, transforming data using Logstash is a necessity. Take for example the
Elasticsearch type geo_point, where the properties of the object must be named lat and lon. Here
a renaming operation is in many cases required. If the original fields are named Latitude and
Longitude, this is a relatively safe transformation to make, with no apparent loss of
information. However, there might be other data fields in the same object, as is the case in the
Kolumbus VM experiment. In this data set, there are data fields Precision and Coordinates on the
Location field as well as Latitude and Longitude. The requirement for the Elasticsearch data type
geo_point is that lat and lon are the only properties of the object. Removing these two extra
fields could cause information loss, and requires the domain expert to get familiar with the
situation before removing the fields, or to choose a different approach, like splitting the
object into two objects. In the case of the Kolumbus VM data, the extra fields proved to bear no
information in the sample data, i.e. the values were null in all data points, so removing them
appears to be a safe decision.
The process of collecting and storing smart city data still requires that the person doing the
configuration understands the data and how it is structured. However, the system presented in
this thesis can provide very valuable insights and templates for the configuration of data
collection pipelines.
7.1 Evaluation
The SDModel system does reach the goal of automatically generating the configuration files for a
data collection pipeline based on sample data. However, suggesting field types turned out to be
harder than the author of this thesis anticipated. There are many cases not covered by the
SDModel system; nevertheless, the results of the experiments in Chapter 6 show that the system
works well even on a real-world data set.
At the beginning of this project, the user experience was an important part of the system;
unfortunately, solving the core problem has taken priority, and thus the user experience is not
optimal. The SDModel system provides an intuitive and good summary of the contents of a data set,
and for this reason the system could prove useful even in situations where the generated
configuration files are not used.
7.2 Contributions
The author of this thesis, Julian Minde, has contributed with
• The exploration of some of the alternative choices of software for collecting and storing
smart city data, and the description of one such system, the Elastic stack.
• The development of the SDModel software system, a piece of software that analyses sample data
and generates a data model that shows the structure and properties of the sample data.
• The development of a specific solution for storing data from the Kolumbus Vehicle Monitoring
Service, with configurations generated by the SDModel system.
The SDModel system does not configure systems; rather, it presents files that a user can use to
configure the system. This might be automated when the SDModel system is more mature and
reliable.
The SDModel system, in its current version, uses a finite set of sample data to create
configurations. One possibly valuable feature would be for the SDModel system to receive bits of
sample data from the data provider throughout the whole data collection process. This data could
be analysed, and should the system discover a discrepancy between the current configurations and
the data that is received, it could either raise an alert to a technician or simply update the
configuration files to account for the discrepancy. For example, if a field that has only null
values in the sample data suddenly starts carrying valuable information, it could be critical to
start collecting that field.
The SDModel system provides a good overview of the data set. This data model presentation, the
results of the analysis, and the descriptions of the data fields might be valuable information
not only to the technician(s) configuring the data collection pipeline, but also to the data
scientists, developers, and other users of the data.
The collection and storage of smart city data is constantly evolving, and so must the system
presented in this thesis. There are many corner cases, and more common cases, that the SDModel
system does not handle in its current implementation. However, by continuing to develop solutions
within the system, it will be able to handle similar cases later, for this and other users.
Appendices
Appendix A
Kolumbus Vehicle Monitoring Service response
The structure of the response from the Kolumbus VM Service is presented in Figure 1.1.
Figure 1.1: Sketch of the structure of the data fields in the Kolumbus Vehicle Monitoring
service's response to the action GetVehicleMonitoring. The root object is in the upper left
corner, and the figure shows that there are several levels of nesting in this data.
Appendix B
Kolumbus VM Service example, complete results
This appendix presents the complete results of the experiment presented in Section 6.3. Tables
B.1 through B.5 show the fields present in the data set and their types. They also show the
SDModel system's suggestions and the author's opinion on which Elasticsearch data types best fit
the fields.
Example B.1 shows the suggested Elasticsearch mapping, and Examples B.2 and B.3 show the filter
section of a Logstash configuration file.
{
  "mappings": {
    "busdata": {
      "properties": {
        "Answer": {
          "type": "object",
          "properties": {
            "VehicleMonitoringDelivery": {
              "type": "object",
              "properties": {
                "VehicleActivity": {
                  "type": "object",
                  "properties": {
                    "ProgressBetweenStops": {
                      "type": "object",
                      "properties": {
                        "Percentage": { "type": "double" },
                        "LinkDistance": { "type": "long" }
                      }
                    },
                    "RecordedAtTime": { "type": "date" },
                    "MonitoredVehicleJourney": {
                      "type": "object",
                      "properties": {
                        "MonitoredCall": {
                          "type": "object",
                          "properties": {
                            "RequestStop": { "type": "boolean" },
                            "VisitNumber": { "type": "boolean" },
                            "TimingPoint": { "type": "binary" },
                            "ActualArrivalTime": { "type": "date" },
                            "ReversesAtStop": { "type": "boolean" },
                            "StopPointRef": { "type": "long" },
                            "ExpectedDepartureTime": { "type": "date" },
                            "AimedDepartureTime": { "type": "date" },
                            "DepartureStatus": { "type": "binary" },
                            "PlatformTraversal": { "type": "boolean" },
                            "StopPointName": { "type": "text" },
                            "ActualDepartureTime": { "type": "date" },
                            "ArrivalStatus": { "type": "binary" },
                            "BoardingStretch": { "type": "boolean" },
                            "DepartureBoardingActivity": { "type": "binary" },
                            "ArrivalBoardingActivity": { "type": "keyword" },
                            "AimedArrivalTime": { "type": "date" },
                            "VehicleAtStop": { "type": "boolean" },
                            "ExpectedArrivalTime": { "type": "date" }
                          }
                        },
                        "OriginRef": { "type": "long" },
                        "VehicleLocation": { "type": "geo_point" },
                        "Monitored": { "type": "boolean" },
                        "VehicleRef": { "type": "long" },
                        "CourseOfJourneyRef": { "type": "long" },
                        "VehicleMode": { "type": "keyword" },
                        "OriginAimedDepartureTime": { "type": "date" },
                        "DestinationName": { "type": "text" },
                        "Delay": { "type": "long" },
                        "DestinationRef": { "type": "long" },
                        "LineRef": { "type": "long" },
                        "DestinationAimedArrivalTime": { "type": "date" },
                        "PublishedLineName": { "type": "keyword" },
                        "OriginName": { "type": "text" },
                        "MonitoringError": { "type": "keyword" },
                        "IsCompleteStopSequence": { "type": "boolean" },
                        "DirectionRef": { "type": "keyword" }
                      }
                    },
                    "ValidUntilTime": { "type": "date" }
                  }
                },
                "ShortestPossibleCycle": { "type": "long" },
                "version": { "type": "long" },
                "ValidUntil": { "type": "date" },
                "ResponseTimestamp": { "type": "date" }
              }
            }
          }
        },
        "ServiceDeliveryInfo": {
          "type": "object",
          "properties": {
            "ProducerRef": { "type": "keyword" },
            "ResponseTimestamp": { "type": "date" },
            "ResponseMessageIdentifier": { "type": "keyword" }
          }
        }
      }
    }
  }
}
Example B.1: Suggested Elasticsearch index mapping for the Kolumbus VM Service response.
filter {
  # Split arrays into separate events if the parent field is present
  if [Answer] {
    split { field => "[Answer][VehicleMonitoringDelivery]" }
  }
  if [Answer][VehicleMonitoringDelivery] {
    split { field => "[Answer][VehicleMonitoringDelivery][VehicleActivity]" }
  }
  # Transform values to better fit with storage
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][StopPointRef]" => "integer" }
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][OriginAimedDepartureTime]", "UNIX" ]
    target => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][OriginAimedDepartureTime]"
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][DestinationRef]" => "integer" }
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][DestinationAimedArrivalTime]", "UNIX" ]
    target => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][DestinationAimedArrivalTime]"
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][LineRef]" => "integer" }
  }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][srsName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][Precision]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][Coordinates]" }
  mutate {
    rename => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][Longitude]" => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][lon]" }
  }
  mutate {
    rename => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][Latitude]" => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][lat]" }
  }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][id]" }
  mutate {
    convert => {
      "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][lat]" => "float"
      "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][lon]" => "float"
    }
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][Precision]" => "integer" }
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][OriginRef]" => "integer" }
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleRef]" => "integer" }
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][CourseOfJourneyRef]" => "integer" }
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[Answer][VehicleMonitoringDelivery][VehicleActivity][ValidUntilTime]", "UNIX" ]
    target => "[Answer][VehicleMonitoringDelivery][VehicleActivity][ValidUntilTime]"
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[Answer][VehicleMonitoringDelivery][VehicleActivity][RecordedAtTime]", "UNIX" ]
    target => "[Answer][VehicleMonitoringDelivery][VehicleActivity][RecordedAtTime]"
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[Answer][VehicleMonitoringDelivery][ValidUntil]", "UNIX" ]
    target => "[Answer][VehicleMonitoringDelivery][ValidUntil]"
  }
  # Convert JSON type string to float. Elasticsearch type is long
  mutate {
    convert => { "[Answer][VehicleMonitoringDelivery][version]" => "integer" }
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[Answer][VehicleMonitoringDelivery][ResponseTimestamp]", "UNIX" ]
    target => "[Answer][VehicleMonitoringDelivery][ResponseTimestamp]"
  }
  # Convert UNIX time to UNIX_MS time
  date {
    match => [ "[ServiceDeliveryInfo][ResponseTimestamp]", "UNIX" ]
    target => "[ServiceDeliveryInfo][ResponseTimestamp]"
  }
  # To be continued ...
Example B.2: Filter section of a Logstash configuration file for the Kolumbus VM Service response, part 1. Continued in Example B.3.
  # Continuing ...
  # Remove fields that are marked for removal
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][Status]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][ItemIdentifier]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][VehicleActivityNote]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][VehicleMonitoringRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][AimedHeadwayInterval]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][CallNote]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][FacilityConditionElement]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][SituationRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][SignalStatus]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][ExpectedHeadwayInterval]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][DestinationDisplay]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][DeparturePlatformName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][Order]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][ArrivalPlatformName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][FacilityChangeElement]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][MonitoredCall][VehicleLocationAtStop]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][OperatorRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][ExternalLineRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][Occupancy]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][FacilityConditionElement]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][ConfidenceLevel]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][OnwardCalls]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][TrainBlockPart]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][SituationRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][FramedVehicleJourneyRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][InPanic]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][srsName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][Coordinates]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleLocation][id]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][InCongestion]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][HeadwayService]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][PreviousCalls]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][JourneyNote]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][PredictionInaccurate]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][Bearing]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][Via]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleFeatureRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][RouteRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][DestinationShortName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][ProgressStatus]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][BlockRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][DirectionName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][JourneyPatternRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][DataSource]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][ProductCategoryRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][FacilityChangeElement]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][ServiceFeatureRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][ProgressRate]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][OriginShortName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivity][MonitoredVehicleJourney][VehicleJourneyName]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][SubscriptionRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][SubscriptionFilterRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][DefaultLanguage]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][ErrorCondition]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivityNote]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][RequestMessageRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][SubscriberRef]" }
  mutate { remove_field => "[Answer][VehicleMonitoringDelivery][VehicleActivityCancellation]" }
  mutate { remove_field => "[AnswerExtension]" }
  mutate { remove_field => "[ServiceDeliveryInfo][RequestMessageRef]" }
  mutate { remove_field => "[ServiceDeliveryInfo][Address]" }
}
Example B.3: Filter section of a Logstash configuration file for the Kolumbus VM Service response, part 2. Continued from Example B.2.
Appendix C
Source code for the SDModel system
The source code for the SDModel system is attached to this PDF.
Acronyms
ANSI American National Standards Institute. 8, 91, Glossary: American National Standards
Institute
API Application Programming Interface. 2, 7, 17, 23, 45, 46, 51, 52, 57, 59, 91, Glossary: Ap-
plication Programming Interface
CORS Cross Origin Resource Sharing. 59, 91, 94, Glossary: Cross Origin Resource Sharing
JSON JavaScript Object Notation. 7, 12, 14–18, 21, 27, 29, 38–46, 48, 50, 54–59, 66–71, 82–
85, 91, 93, Glossary: JavaScript Object Notation
SIRI Service Interface for Real Time Information. 12, 91, Glossary: Service Interface for Real
Time Information
SOAP Simple Object Access Protocol. 12, 67, 91, 95, Glossary: Simple Object Access Proto-
col
WSDL Web Service Definition Language. 12, 58, 91, Glossary: Web Service Definition Lan-
guage
Glossary
Asynchronous Javascript and XML An engine for doing http requests from the browser
asynchronously, so as to not block the user interface while waiting for the server to re-
spond [48].. 46, 57, 94
Cross Origin Resource Sharing A website can by default not request resources from other
origins than its own unless the resource opens for it through its headers. This mechanism
is called Cross Origin Resource Sharing.. 59, 91
Data Definition Language A data definition language or data description language (DDL)
is a syntax similar to a computer programming language for defining data structures,
especially database schemas. [49]. 91
data model A data model is an abstract model that organises elements of data and standard-
ises how they relate to one another and to properties of the real-world entities. [50] . 22–
24, 28, 29, 51, 53
DELETE The HTTP DELETE method requests that the server deletes the resource identi-
fied by the URI [51]. . 14, 45, 59
Elastic stack Distributed framework for document oriented search, storage and visualising of
big data using Elasticsearch, Logstash and Kibana. 6, 16, 21, 24, 73, 74
eXtensible Markup Language A markup language that defines a set of rules for encoding
documents in a format that is both human- and machine-readable. [52]. 8, 91
geohash A geocoding system invented by Gustavo Niemeyer and placed into the public do-
main, its purpose is to encode GPS coordinates in an URL friendly manner [28]. . 17
GeoJSON A geospatial data interchange format based on JSON. It defines several types of
JSON objects and the manner in which they are combined to represent data about geo-
graphic features, their properties, and their spatial extents. GeoJSON uses a geographic
coordinate reference system, World Geodetic System 1984, and units of decimal degrees
[38]. . 17, 36, 42
GET Using the HTTP GET method in a request will retrieve whatever information is iden-
tified by the URI [51]. This is the method used for standard webpage retrieval. . 14, 15,
45, 59, 94
HEAD The HTTP HEAD method requests that the headers and not the body of a GET re-
quest is returned. It can be used for checking the validity of an url or obtaining meta
information about the entity without transferring the entity body itself [51]. . 14
Internet of Things “A global infrastructure for the information society, enabling advanced
services by interconnecting (physical and virtual) things based on existing and evolving
interoperable information and communication technologies” [53].. 3, 91
inverted index A data structure which is designed to allow very fast full-text searches. An
inverted index consists of a list of all the unique words that appear in any document, and
for each word, a list of the documents in which it appears. [9] . 14
JavaScript Object Notation JavaScript Object Notation (JSON) is a text format for the
serialisation of structured data. [33]. 7, 91
Lucene core A high-performance, full-featured text search engine library written entirely in
Java [?]. . 13, 17, 40
Paxos A family of protocols for solving consensus in a network of unreliable processors. Con-
sensus is the process of agreeing on one result among a group of participants. This prob-
lem becomes difficult when the participants or their communication medium may experi-
ence failures [54].. 13
POST The HTTP POST method is used to request that the entity enclosed in the request
is accepted as a new subordinate of the resource identified by the URI [51]. For example
when a webpage sends a form to the server, the forms content is enclosed in the request,
and it is requested that this submission is accepted in the list of form submissions. . 45,
59
PUT The HTTP PUT method requests that the enclosed entity be stored at the URI, and if
there is already an entity at the URI the enclosed entity should be considered as a modi-
fied version of the one residing on the server [51]. . 14, 15, 45, 59
Service Interface for Real Time Information XML protocol to allow distributed comput-
ers to exchange real time information about public transport service and vehicles [56]..
12, 91
Simple Object Access Protocol A protocol intended for exchanging structured information
in a decentralised, distributed environment [57].. 12, 91
Triangulum smart city project “The three point project Triangulum is one of currently
nine European Smart Cities and Communities Lighthouse Projects, set to demonstrate,
disseminate and replicate solutions and frameworks for Europe’s future smart cites” [58].
. 4, 5
Universally Unique IDentifier A UUID is a 128-bit value that can guarantee uniqueness
across space and time [43]. Often used in systems where an unique id is needed but a
global authority is not available or preferred.. 55, 91
Web Service Definition Language Language used to define SOAP web services.. 12, 91
References
[1] M. Chen, S. Mao, and Y. Liu, “Big data: A survey,” Mobile Networks and Applications,
vol. 19, no. 2, pp. 171–209, 2014.
[2] J. Gantz and D. Reinsel, “Extracting value from chaos,” IDC iview, vol. 1142, no. 2011,
pp. 1–12, 2011.
[3] H. Hu, Y. Wen, T.-S. Chua, and X. Li, “Toward scalable systems for big data analytics: A
technology tutorial,” IEEE Access, vol. 2, pp. 652–687, 2014.
[4] A. Moniruzzaman and S. A. Hossain, “Nosql database: New era of databases for big data
analytics-classification, characteristics and comparison,” arXiv preprint arXiv:1307.0191,
2013.
[6] “Gartner says the internet of things installed base will grow to 26 billion units by 2020.”
[Online]. Available: http://www.gartner.com/newsroom/id/2636073
[7] Kolumbus, “Kolumbus real time open data.” [Online]. Available: https://www.kolumbus.no/globalassets/sanntid-siri/kolumbus-real-time-open-data.pdf
[9] C. Gormley and Z. Tong, Elasticsearch: The Definitive Guide. O'Reilly Media, Inc., 2015.
[10] “Final report of the ansi/x3/sparc dbs-sg relational database task group,”
SIGMOD Rec., vol. 12, no. 4, pp. 1–62, Jul. 1982. [Online]. Available:
http://doi.acm.org/10.1145/984555.1108830
[11] M. Hammer and D. Mc Leod, “Database description with sdm: A semantic database
model,” ACM Trans. Database Syst., vol. 6, no. 3, pp. 351–386, Sep. 1981. [Online].
Available: http://doi.acm.org/10.1145/319587.319588
[53] ITU, “Internet of things global standards initiative.” [Online]. Available: http://handle.itu.int/11.1002/1000/11559
[54] “Paxos (computer science).” [Online]. Available: https://en.wikipedia.org/wiki/Paxos_(computer_science)
[55] “RDF 1.1 concepts and abstract syntax.” [Online]. Available: https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/
[56] “Service interface for real time information.” [Online]. Available: https://en.wikipedia.org/wiki/Service_Interface_for_Real_Time_Information
[57] W3C, “SOAP version 1.2 part 1: Messaging framework (second edition).” [Online]. Available: https://www.w3.org/TR/soap12/
[58] “Triangulum project.” [Online]. Available: http://triangulum-project.eu