
Remote Laboratories: For Real Time Access to Experiment Setups with Online Session Booking, Utilizing a Database and Online Interface with Live Streaming

B. Kalyan Ram1(✉), S. Arun Kumar1, S. Prathap1, B. Mahesh2, and B. Mallikarjuna Sarma2

1 Electrono Solutions Pvt. Ltd., #513, Vinayaka Layout, Immadihalli Road, Whitefield, Bangalore 560066, India
{kalyan,arun,prathap}@electronosolutions.com
2 Independent Consultants, Bangalore, India
maheshryu1@yahoo.com, sarma.mallikarjuna@gmail.com

Abstract. This paper discusses the physical implementation of lab experiments designed to be accessed from any web browser through the clientless remote desktop gateway Apache Guacamole, with the support of the Remote Desktop Protocol. The system also provides live streaming of the experiments using the Axis CGI API, online slot booking that lets students reserve their respective sessions, and an Apache Cassandra database for storing user details.
Here, we address all aspects of the system architecture and infrastructure needed to establish a real-time remote access system for a given machine (in this case electric machines, though the approach could be extended to any machine). The system is also being built to evaluate the feasibility of implementing a complete machine health monitoring system with remote monitoring and control capability, although the current implementation is aimed at enabling students to perform the experiments of a machines lab.

Keywords: Remote labs · Engineering laboratory experiments · Apache Guacamole · Remote Desktop Protocol · Live streaming · Axis CGI API · Online slot booking · Apache Cassandra database

1 Introduction

Laboratory experiments are an integral part of engineering education. The main focus of this work is to provide access to these lab experiments over the internet using various integration tools. A remote laboratory (also known as an online laboratory or remote workbench) is the use of telecommunications to remotely conduct real (as opposed to virtual) experiments at the physical location of the operating technology, enabling students to use the equipment from a separate geographical location. Supported by resources based on new information and communication technologies, it is now possible to remotely control a wide variety of real laboratories.


2 Architecture of Guacamole

In a cloud computing environment there are various important issues, including standards, virtualization, resource management, information security, and so on. Among these, desktop computing in a virtualized environment has emerged as one of the most important over the past few years. Users no longer need powerful, over-provisioned hardware; instead they share a powerful remote machine using a lightweight thin client. A thin client is a stateless desktop terminal that has no hard drive. All features typically found on a desktop PC, including applications, sensitive data, memory, etc., are kept on the server when a thin client is used. Thin clients need not be dedicated hardware; ordinary PCs can also serve as thin clients. Thin clients, software services, and backend hardware make up thin-client computing, a remote desktop computing model [1]. Guacamole is not a self-contained web application; it is made up of many parts. The web application is intentionally simple and minimal, with the majority of the work performed by lower-level components. Users connect to a Guacamole server with their web browser. The Guacamole client, written in JavaScript, is served to users by a web server within the Guacamole server. Once loaded, this client connects back to the server over HTTP using the Guacamole protocol. The web application deployed to the Guacamole server reads the Guacamole protocol and forwards it to guacd, the native Guacamole proxy. This proxy interprets the contents of the Guacamole protocol, connecting to any number of remote desktop servers on behalf of the user [2] (Fig. 1).

Fig. 1. Guacamole architecture

2.1 Guacamole Protocol


The web application does not understand any remote desktop protocol at all. It does not
contain support for VNC or RDP or any other protocol supported by the Guacamole
stack. It actually only understands the Guacamole protocol, which is a protocol for
remote display rendering and event transport. While a protocol with those properties
would naturally have the same abilities as a remote desktop protocol, the design
principles behind a remote desktop protocol and the Guacamole protocol are different: the
Guacamole protocol is not intended to implement the features of a specific desktop
environment. As a remote display and interaction protocol, Guacamole implements a
superset of existing remote desktop protocols [1].
Adding support for a particular remote desktop protocol (like RDP) to Guacamole
thus involves writing a middle layer which “translates” between the remote desktop
protocol and the Guacamole protocol. Implementing such a translation is no different
than implementing any native client, except that this particular implementation renders
to a remote display rather than a local one.
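To make the translation layer more concrete, the sketch below illustrates how Guacamole protocol instructions are framed: an instruction is a comma-separated list of length-prefixed elements terminated by a semicolon (for example, `6.select,3.rdp;`). This is a minimal illustration in Python of encoding and decoding that framing, assuming ASCII values; it is not part of the Guacamole codebase.

```python
def encode_instruction(opcode, *args):
    """Frame a Guacamole protocol instruction: LENGTH.VALUE elements,
    comma-separated, terminated by a semicolon (e.g. '6.select,3.rdp;').
    Lengths are counted in characters."""
    elements = [opcode, *map(str, args)]
    return ",".join(f"{len(e)}.{e}" for e in elements) + ";"


def decode_instruction(raw):
    """Parse one framed instruction back into (opcode, argument list)."""
    body = raw.rstrip(";")
    values = []
    while body:
        length, _, rest = body.partition(".")
        n = int(length)
        values.append(rest[:n])
        body = rest[n + 1:]  # skip the value and the trailing comma
    return values[0], values[1:]


if __name__ == "__main__":
    msg = encode_instruction("select", "rdp")
    print(msg)                      # 6.select,3.rdp;
    print(decode_instruction(msg))  # ('select', ['rdp'])
```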

2.2 GUACD
• guacd is the heart of Guacamole; it dynamically loads support for remote desktop
protocols (called “client plug-ins”) and connects them to remote desktops based on
instructions received from the web application.
• guacd is a daemon process which is installed along with Guacamole and runs in the
background, listening for TCP connections from the web application. guacd also does
not understand any specific remote desktop protocol, but rather implements just
enough of the Guacamole protocol to determine which protocol support needs to be
loaded and what arguments must be passed to it. Once a client plug-in is loaded, it
runs independently of guacd and has full control of the communication between itself
and the web application until the client plug-in terminates (Fig. 2).

Fig. 2. Guacamole server
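As an illustration of how a component might talk to guacd over its TCP socket, the hedged sketch below opens a connection and sends the initial "select" instruction using the framing shown earlier. It assumes guacd is running locally on its default port 4822; it only performs the first step of the handshake and is not a complete client.

```python
import socket

GUACD_HOST = "127.0.0.1"   # assumption: guacd runs on the same machine
GUACD_PORT = 4822          # guacd's default TCP port


def frame(opcode, *args):
    # Same length-prefixed framing as in the previous sketch.
    elements = [opcode, *map(str, args)]
    return (",".join(f"{len(e)}.{e}" for e in elements) + ";").encode("utf-8")


with socket.create_connection((GUACD_HOST, GUACD_PORT), timeout=5) as sock:
    # Ask guacd to load the RDP client plug-in; guacd replies with an "args"
    # instruction listing the connection parameters it expects next.
    sock.sendall(frame("select", "rdp"))
    reply = sock.recv(4096).decode("utf-8")
    print("guacd replied:", reply)
```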



2.3 Remote Desktop Gateway


A remote desktop gateway provides access to multiple operating environments using an HTML5-capable browser without the use of any plug-ins. Users connect to a Guacamole server with their web browser. The Guacamole client, written in JavaScript, is served to users by a web server within the Guacamole server. Once loaded, this client connects back to the server over HTTP using the Guacamole protocol. The web application deployed to the Guacamole server reads the Guacamole protocol and forwards it to guacd, the native Guacamole proxy. This proxy interprets the contents of the Guacamole protocol, connecting to any number of remote desktop servers on behalf of the user. The Remote Desktop Protocol (RDP) provides remote login and desktop control capabilities that enable a client to completely control and access a remote server. The protocol is implemented by Microsoft Corporation and is based on the ITU-T T.120 family of protocols. The major advantage distinguishing RDP from other remote desktop schemes, such as the frame-buffer approach, is that the protocol preferably sends graphic device interface (GDI) information from the server instead of full bitmap images [3].

3 Remote Labs Implementation

The implementation of a remote lab involves designing a hardware infrastructure that
supports the remote access feature through the technology infrastructure described
herewith (Fig. 3).

Fig. 3. Remote labs architectural block diagram

This specific remote laboratory setup is made up of motor-generator setups, a PLC
trainer setup and a process control trainer setup.

These setups are designed to be accessed remotely by an authorized user through a
browser interface. Currently, the system has been tested on different browsers, namely
Google Chrome, Microsoft Internet Explorer and Mozilla Firefox, and has been found to
be compatible with these browsers. The architecture is designed to support the most
commonly used browsers.

4 Cassandra Database

Apache Cassandra is a highly scalable, high-performance distributed database designed
to handle large amounts of data across many commodity servers, providing high
availability with no single point of failure. It is a type of NoSQL database.
Cassandra has become popular because of its outstanding technical features. Given
below are some of the features of Cassandra:
– Elastic scalability - Cassandra is highly scalable; it allows you to add more hardware
to accommodate more customers and more data as per requirement.
– Always on architecture - Cassandra has no single point of failure and it is
continuously available for business-critical applications that cannot afford a failure.
– Fast linear-scale performance - Cassandra is linearly scalable, i.e., it increases your
throughput as you increase the number of nodes in the cluster. Therefore it maintains
a quick response time.
– Flexible data storage - Cassandra accommodates all possible data formats including:
structured, semi-structured, and unstructured. It can dynamically accommodate
changes to your data structures according to your need.
– Easy data distribution - Cassandra provides the flexibility to distribute data where
you need by replicating data across multiple data centers.
– Transaction support - Cassandra supports properties like Atomicity, Consistency,
Isolation, and Durability (ACID).
– Fast writes - Cassandra was designed to run on cheap commodity hardware. It
performs blazingly fast writes and can store hundreds of terabytes of data, without
sacrificing the read efficiency.
The design goal of Cassandra is to handle big data workloads across multiple nodes
without any single point of failure [4]. Cassandra has a peer-to-peer distributed
architecture across its nodes, and data is distributed among all the nodes in a cluster.
– All the nodes in a cluster play the same role. Each node is independent and at the
same time interconnected to other nodes.
– Each node in a cluster can accept read and write requests, regardless of where the
data is actually located in the cluster.
– When a node goes down, read/write requests can be served from other nodes in the
network.
In Cassandra, one or more of the nodes in a cluster act as replicas for a given piece
of data. If it is detected that some of the nodes responded with an out-of-date value,
Cassandra will return the most recent value to the client. After returning the most recent
value, Cassandra performs a read repair in the background to update the stale values.
The following figure shows a schematic view of how Cassandra uses data replication
among the nodes in a cluster to ensure no single point of failure (Fig. 4).

Fig. 4. Structure of Cassandra database
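As a hedged sketch of how user details might be stored from Python, the snippet below uses the DataStax cassandra-driver to create a keyspace replicated across three nodes and to insert one user record. The keyspace, table and column names are illustrative assumptions, not the schema actually used in the deployment.

```python
from cassandra.cluster import Cluster

# Contact points are illustrative; a real cluster would list several node addresses.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Replicate every row to three nodes so that no single node is a point of failure.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS remote_lab
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("remote_lab")

session.execute("""
    CREATE TABLE IF NOT EXISTS users (
        username text PRIMARY KEY,
        full_name text,
        email text
    )
""")

# Any node can accept this write; Cassandra forwards it to the replica nodes.
session.execute(
    "INSERT INTO users (username, full_name, email) VALUES (%s, %s, %s)",
    ("student01", "Test Student", "student01@example.com"),
)

row = session.execute("SELECT * FROM users WHERE username = %s", ("student01",)).one()
print(row.full_name)
cluster.shutdown()
```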

5 Single Sign-On Application

The first time that a user seeks access to an application, the Login Server:
– Authenticates the user by means of user name and password
– Passes the client’s identity to the various applications
– Marks the authenticated client with an encrypted login cookie
In subsequent user logins, this login cookie provides the Login Server with the user’s
identity, and indicates that authentication has already been performed. If there is no login
cookie, then the Login Server presents the user with a login challenge. To guard against
sniffing, the Login Server can send the login cookie to the client browser over an
encrypted SSL channel. The login cookie expires with the session, either at the end of
a time interval specified by the administrator, or when the user exits the browser. It is
never written to disk. A partner application can expire its session through its own explicit
logout.
1. Single Sign-On Application Programming Interface (API)
(a) The Single Sign-On API enables:
(i) Applications to communicate with the Login Server and to accept a user’s
identity as validated by the Login Server
(ii) Administrators to manage the application’s association to the Login Server
(b) There are two kinds of applications to which Single Sign-On provides access:
(i) Partner Applications
(ii) External Applications

2. Partner Applications
Partner applications are integrated with the Login Server. They contain a Single Sign-
On API that enables them to accept a user’s identity as validated by the Login Server.
3. External Applications
External applications are web-based applications that retain their authentication
logic. They do not delegate authentication to the Login Server and, as such, require a
user name and password to provide access. Currently, these applications are limited to
those which employ an HTML form for accepting the user name and password. The user
name may be different from the SSO user name, and the Login Server provides the
necessary mapping (Fig. 5).

Fig. 5. Single Sign-On
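To make the login-cookie mechanism more concrete, the sketch below shows one way a login server could mint and verify a tamper-evident, time-limited session cookie using an HMAC signature. It is an illustrative assumption built only on the Python standard library, not the actual Login Server implementation described above.

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"   # assumption: kept only on the Login Server
SESSION_LIFETIME = 30 * 60           # cookie validity in seconds


def issue_cookie(username: str) -> str:
    """Create a signed cookie value: base64(username|expiry|signature)."""
    expiry = str(int(time.time()) + SESSION_LIFETIME)
    payload = f"{username}|{expiry}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload + b"|" + sig.encode()).decode()


def verify_cookie(cookie: str):
    """Return the username if the cookie is authentic and unexpired, else None."""
    try:
        username, expiry, sig = base64.urlsafe_b64decode(cookie).decode().split("|")
    except Exception:
        return None
    expected = hmac.new(SECRET_KEY, f"{username}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                   # signature mismatch: cookie was tampered with
    if time.time() > int(expiry):
        return None                   # session expired
    return username


cookie = issue_cookie("student01")
print(verify_cookie(cookie))          # student01
```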

6 Port Forwarding

In computer networking, port forwarding or port mapping is an application of network
address translation (NAT) that redirects a communication request from
one address and port number combination to another while the packets are traversing a
network gateway, such as a router or firewall. This technique is most commonly used
to make services on a host residing on a protected or masqueraded (internal) network
available to hosts on the opposite side of the gateway (external network), by remapping
the destination IP address and port number of the communication to an internal host.
Port forwarding allows remote computers (for example, computers on the Internet) to
connect to a specific computer or service within a private local-area network (LAN). In
a typical residential network, nodes obtain Internet access through a DSL or cable
modem connected to a router or network address translator (NAT/NAPT). Hosts on the
private network are connected to an Ethernet switch or communicate via a wireless LAN.
The NAT device’s external interface is configured with a public IP address. The
computers behind the router, on the other hand, are invisible to hosts on the Internet as
they each communicate only with a private IP address [6]. When configuring port
forwarding, the network administrator sets aside one port number on the gateway for
the exclusive use of communicating with a service in the private network, located on a
specific host. External hosts must know this port number and the address of the gateway
to communicate with the network-internal service. Often, the port numbers of well-
known Internet services, such as port number 80 for web services (HTTP), are used in
port forwarding, so that common Internet services may be implemented on hosts within
private networks.
Typical applications include the following:
– Running a public HTTP server within a private LAN
– Permitting Secure Shell access to a host on the private LAN from the Internet
– Permitting FTP access to a host on a private LAN from the Internet
– Running a publicly available game server within a private LAN
Usually only one of the private hosts can use a specific forwarded port at one time,
but it is sometimes possible to configure the gateway to differentiate access by the
originating host's source address.
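Port forwarding itself is normally configured on the router or firewall, but the idea of remapping one address/port combination to an internal host can be illustrated in code. The sketch below is a minimal TCP relay, under the assumption of an internal web server at 192.168.1.10:80: it listens on a gateway port and copies bytes in both directions. It is a teaching sketch, not a replacement for the gateway's NAT rules.

```python
import socket
import threading

LISTEN_PORT = 8080                                   # public port opened on the gateway
INTERNAL_HOST, INTERNAL_PORT = "192.168.1.10", 80    # assumed internal web server


def pipe(src, dst):
    """Copy bytes from one socket to the other until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()


def handle(client):
    upstream = socket.create_connection((INTERNAL_HOST, INTERNAL_PORT))
    # Relay traffic in both directions so the external host reaches the internal one.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()


with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        handle(conn)
```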

7 A Record

An A record maps a domain name to the IP address (IPv4) of the computer hosting the
domain. Simply put, an A record is used to find the IP address of a computer connected
to the internet from a name. The A in A record stands for Address. Whenever you visit
a web site, send an email, connect to Twitter or Facebook or do almost anything on the
Internet, the address you enter is a series of words connected with dots. For example, to
access a website you enter a URL such as www.google.com. At the name server
there is an A record that points to the IP address 8.8.8.8. This means that a request from
your browser to www.google.com is directed to the server with IP address 8.8.8.8. A
Records are the simplest type of DNS records, yet one of the primary records used in
DNS servers [7]. You can actually do quite a bit more with A records, including using
multiple A records for the same domain in order to provide redundancy. Additionally,
multiple names could point to the same address, in which case each would have its own
A record pointing to that same IP address.
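The lookup described above can be reproduced with the Python standard library; the sketch below resolves a host name to its A records (IPv4 addresses), showing that a single name may map to several addresses.

```python
import socket


def a_records(hostname):
    """Return the list of IPv4 addresses published for a host name."""
    canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    return addresses


# A domain served from several addresses returns multiple A records.
print(a_records("www.google.com"))
```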

8 Video Streaming API

The HTTP-based video interface provides the functionality for requesting single and
multipart images and for getting and setting internal parameter values. The image and
CGI requests are handled by the built-in web server. The mjpg/video.cgi is used to
request a Motion JPEG video stream with specified arguments. The arguments can be
specified explicitly, or a predefined stream profile can be used. Image settings saved in
a stream profile can be overridden by specifying new settings after the stream profile
argument [8].
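As a hedged example of using this interface, the sketch below requests a Motion JPEG stream from the camera's mjpg/video.cgi endpoint with the requests library and extracts individual JPEG frames by scanning for the JPEG start and end markers. The camera address, credentials and the resolution argument are assumptions for illustration; the exact endpoint path and parameter names should be checked against the VAPIX documentation for the camera in use.

```python
import requests
from requests.auth import HTTPDigestAuth

CAMERA = "http://192.168.1.90"        # assumed camera address
URL = f"{CAMERA}/mjpg/video.cgi"      # some firmware exposes /axis-cgi/mjpg/video.cgi
PARAMS = {"resolution": "640x480"}    # illustrative stream argument


def frames(url, params, auth):
    """Yield raw JPEG frames from a Motion JPEG (multipart) HTTP stream."""
    buf = b""
    with requests.get(url, params=params, auth=auth, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=4096):
            buf += chunk
            start, end = buf.find(b"\xff\xd8"), buf.find(b"\xff\xd9")
            if start != -1 and end != -1 and end > start:
                yield buf[start:end + 2]        # one complete JPEG image
                buf = buf[end + 2:]


# The camera may require digest or basic authentication; credentials are placeholders.
for i, frame in enumerate(frames(URL, PARAMS, HTTPDigestAuth("root", "password"))):
    with open(f"frame_{i}.jpg", "wb") as f:
        f.write(frame)
    if i == 9:                                  # stop after ten frames in this sketch
        break
```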

9 Tomcat Web Application Deployment

Deployment is the term used for the process of installing a web application (either a third-party WAR or your own custom web application) into the Tomcat server. Web application deployment may be accomplished in a number of ways within the Tomcat server:
– Statically, where the web application is set up before Tomcat is started
– Dynamically, by directly manipulating already deployed web applications (relying on the auto-deployment feature) or remotely by using the Tomcat Manager web application
The Tomcat Manager is a web application that can be used interactively (via an HTML GUI) or programmatically (via a URL-based API) to deploy and manage web applications. There are a number of ways to perform deployment that rely on the Manager web application. Apache Tomcat provides tasks for the Apache Ant build tool, and the Apache Tomcat Maven Plug-in project provides integration with Apache Maven. The environment should define a JAVA_HOME value pointing to your Java installation. Additionally, you should ensure that the Java javac compiler command can be run from the command shell provided by your operating system.
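For instance, the Manager's URL-based API can be scripted; the hedged sketch below uploads a WAR through the text interface with an HTTP PUT, assuming a user with the manager-script role has been configured in tomcat-users.xml. The host, context path, file name and credentials are placeholders.

```python
import requests

TOMCAT = "http://localhost:8080"
AUTH = ("admin", "secret")          # placeholder credentials with the manager-script role

# Deploy mywebapp.war under the context path /mywebapp via the Manager text API.
with open("mywebapp.war", "rb") as war:
    resp = requests.put(
        f"{TOMCAT}/manager/text/deploy",
        params={"path": "/mywebapp", "update": "true"},
        data=war,
        auth=AUTH,
    )
print(resp.text)                    # e.g. "OK - Deployed application at context path /mywebapp"

# List the currently deployed applications.
print(requests.get(f"{TOMCAT}/manager/text/list", auth=AUTH).text)
```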

10 Network Architecture

The network architecture consists of the ISP connection, firewall, load balancer, switch,
server system and thin clients.

10.1 Firewall

A firewall is a network security system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, in software, or as a combination of both. Network firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

10.2 Load-Balancer
A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks. Load balancers are generally grouped into two categories: Layer 4 and Layer 7. Layer 4 load balancers act upon data found in network and transport layer protocols (IP, TCP, FTP, UDP). Layer 7 load balancers distribute requests based upon data found in application layer protocols such as HTTP. Requests are received by both types of load balancers and are distributed to a particular server based on a configured algorithm (Fig. 6).

Fig. 6. Network architecture

Some industry-standard algorithms are:
– Round robin
– Weighted round robin
– Least connections
– Least response time
Layer 7 load balancers can further distribute requests based on application specific
data such as HTTP headers, cookies, or data within the application message itself, such
as the value of a specific parameter. Load balancers ensure reliability and availability
by monitoring the “health” of applications and only sending requests to servers and
applications that can respond in a timely manner.
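To illustrate two of the algorithms listed above, the sketch below implements simple round-robin and least-connections selection over a pool of backend servers. It is a teaching sketch under assumed server addresses, not the behaviour of any particular load-balancer product.

```python
from itertools import cycle


class Pool:
    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = cycle(self.servers)               # round-robin iterator
        self.active = {s: 0 for s in self.servers}   # open connections per server

    def round_robin(self):
        """Hand out servers in a fixed rotating order."""
        return next(self._rr)

    def least_connections(self):
        """Pick the server currently handling the fewest connections."""
        return min(self.active, key=self.active.get)

    def open_conn(self, server):
        self.active[server] += 1

    def close_conn(self, server):
        self.active[server] -= 1


pool = Pool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([pool.round_robin() for _ in range(4)])    # cycles through the pool, then wraps
pool.open_conn("10.0.0.1")
pool.open_conn("10.0.0.1")
pool.open_conn("10.0.0.2")
print(pool.least_connections())                  # 10.0.0.3 has no open connections yet
```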

10.3 Managed Switches


Switches are crucial network devices, so being able to manipulate them is sometimes
important in dealing with information flow. Traffic may need to be controlled so that
information is transmitted according to its level of importance, urgency and any
operational requirements. This is the key reason for including managed switches alongside
unmanaged switches. Whereas an unmanaged switch is sufficient to deal with normal
networking, where traffic is managed solely by servers, a managed switch becomes
useful when it becomes important to filter traffic more precisely.

10.4 Remote Lab Server

The server machine runs Windows Server 2012 and makes use of the remote desktop
service to configure and host the software developed to control the hardware systems
from the server machine.

10.5 Thin Clients

A thin client is a lightweight computer that is purpose-built for remote access to a server (typically cloud or desktop virtualization environments). It depends heavily on another computer (its server) to fulfill its computational roles. The specific roles assumed by the server may vary, from hosting a shared set of virtualized applications, a shared desktop stack or virtual desktop, to data processing and file storage on the client's or user's behalf. This is different from the desktop PC (fat client), which is a computer designed to take on these roles by itself.
Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server or server farm. The server-side infrastructure makes use of cloud computing software such as application virtualization, hosted shared desktop (HSD) or desktop virtualization (VDI). This combination forms what is known today as a cloud-based system, where desktop resources are centralized into one or more data centers. The benefits of centralization are hardware resource optimization, reduced software maintenance, and improved security.

10.6 Heartbeat/Health Information System with SMS Alert


The status of the systems is unknown to the system administrator unless he monitors them physically. We therefore developed a service that sends packets to each system and receives an acknowledgement that the packet was received, similar to a two-way handshake. If packets cannot be sent from any of the systems, or none of the systems receives these packets, an SMS alert is sent to the system administrator's phone.
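A hedged sketch of such a heartbeat service is shown below: it probes each lab system over TCP and calls a placeholder send_sms function when a system stops answering. The host list, the probed ports and the SMS call are all assumptions for illustration; a real deployment would plug in the SMS gateway's actual API.

```python
import socket
import time

LAB_SYSTEMS = {                                  # assumed host/port pairs
    "rdp-server": ("192.168.1.20", 3389),
    "camera":     ("192.168.1.90", 80),
}
ADMIN_PHONE = "+910000000000"                    # placeholder number


def is_alive(host, port, timeout=3):
    """Heartbeat probe: a successful TCP connect counts as an acknowledgement."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def send_sms(number, text):
    # Placeholder: a real system would call the SMS gateway's HTTP API here.
    print(f"SMS to {number}: {text}")


while True:
    for name, (host, port) in LAB_SYSTEMS.items():
        if not is_alive(host, port):
            send_sms(ADMIN_PHONE, f"Heartbeat lost for {name} ({host}:{port})")
    time.sleep(60)                               # probe every minute
```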

11 User Statistics of Remote Labs

Statistics is the study of numerical information, which is called data. People use statistics
as tools to understand information. Learning to understand statistics helps a person react
intelligently to statistical claims. Statistics are used in the fields of business, math,
economics, accounting, banking, government, astronomy, and the natural and social
sciences. Overall session statistics are presented in the admin portal, where the admin has
the privilege to check the overall user sessions and how many sessions were booked and
cancelled. The scheduler helps users book a slot at the required time as per their needs,
and the lab can then be accessed during the particular time slot booked by the user (Fig. 7).

Fig. 7. Scheduler

Recently, many educational institutions have acknowledged the importance of making laboratories available online, allowing their students to run experiments from a remote computer. While usage of virtual laboratories scales well, remote experiments based on scarce and expensive rigs, i.e. physical resources, do not, and typically can only be used by one person or cooperating group at a time. It is therefore necessary to administer access to rigs, where we distinguish between three different roles: content providers, teachers and students [10]. A scheduler is a software product that allows an enterprise to schedule and track computer batch tasks. These units of work include running a security program or updating software [11]. A scheduler starts and handles jobs automatically by manipulating a prepared job control language algorithm or through communication with a human user.
Based on the scheduler design, the time at which the user starts to access the lab and the duration for which the session is used are recorded in the Cassandra database. Using the scheduler we are also able to track the overall sessions booked, and from these data statistical graphs are plotted as shown below (Figs. 8 and 9).

Fig. 8. Session portal

Fig. 9. Session statistics

The graphical representation in the admin portal consists of:
– new sessions per week
– new session per week
– average session per day
– cancelled session this week
System-based usage statistics are also recorded using the scheduler, capturing the number of times the system has been accessed between a particular start date and time and a particular end date and time, as can be seen in the image below. These statistics are very useful for user monitoring, and the system usage can also be recorded.
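A sketch of how such statistics could be derived from the booking records is shown below; it groups session records by calendar week and computes sessions per week, cancellations and the average sessions per day. The record layout and the sample entries are assumptions for illustration, not the actual Cassandra schema or real usage data.

```python
from collections import Counter
from datetime import date

# Illustrative booking records: (session start date, status).
sessions = [
    (date(2017, 3, 6), "completed"),
    (date(2017, 3, 7), "completed"),
    (date(2017, 3, 9), "cancelled"),
    (date(2017, 3, 14), "completed"),
]

per_week = Counter(d.isocalendar()[1] for d, _ in sessions)   # sessions per ISO week
cancelled = sum(1 for _, status in sessions if status == "cancelled")
days_covered = len({d for d, _ in sessions})
avg_per_day = len(sessions) / days_covered

print("sessions per week:", dict(per_week))
print("cancelled sessions:", cancelled)
print("average sessions per day:", avg_per_day)
```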

12 Resource Allocation and Utilization

In computing, resource allocation is necessary for any application to be run on the system. When the user opens any program it is counted as a process, and the computer therefore needs to allocate certain resources for it to be able to run. Such resources could include access to a section of the computer's memory, data in a device interface buffer, one or more files, or the required amount of processing power. A computer with a single processor can only perform one process at a time, regardless of the number of programs loaded by the user (or initiated on start-up). Computers using single processors appear to be running multiple programs at once because the processor quickly alternates between programs, processing what is needed in very small amounts of time. This process is known as multitasking or time slicing. The time allocation is automatic; however, higher or lower priority may be given to certain processes, essentially giving high-priority programs more or bigger slices of the processor's time. On a computer with multiple processors, different processes can be allocated to different processors so that the computer can truly multitask. We should allocate system resources in such a way that these conflicts do not occur, as they might affect the performance of the software. Proper maintenance of each of the systems must also be ensured to provide appropriate uptime for the overall system (Fig. 10).

Fig. 10. Resource utilization
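As a small sketch of how resource utilization on the lab servers could be watched, the snippet below samples CPU and memory usage with the third-party psutil library. The threshold values are illustrative assumptions, and psutil must be installed for the sketch to run.

```python
import psutil

CPU_LIMIT = 85.0     # per cent; illustrative thresholds
MEM_LIMIT = 90.0


def check_resources():
    cpu = psutil.cpu_percent(interval=1)       # average CPU use over one second
    mem = psutil.virtual_memory().percent      # fraction of RAM currently in use
    if cpu > CPU_LIMIT or mem > MEM_LIMIT:
        print(f"WARNING: high utilization (cpu={cpu}%, mem={mem}%)")
    return cpu, mem


print(check_resources())
```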

13 Conclusion

Remote labs are the natural choice for accessing physical laboratories online to enhance
the accessibility of both software and hardware infrastructure in engineering colleges
[12]. In the context of India, the data show that the utilization of laboratory resources
is very low and the accessibility of laboratory resources to students is sparse [13].
The topics presented in this paper address the technological architecture and the tools
needed to implement an effective Remote Lab Infrastructure from the perspective of an
OS-independent, browser-independent and application-independent solution.

References

1. Wang, S.-T., Chang, H.-Y.: Development of web-based remote desktop to provide adaptive
user interfaces in cloud platform. World Acad. Sci. Eng. Technol. Int. J. Comput. Electr.
Autom. Control Inf. Eng. 8(8), 1572–1577 (2014)
2. http://guacamole.incubator.apache.org
3. Tsai, C.-Y., Huang, W.-L.: Design and performance modeling of an efficient remote
collaboration system. Int. J. Grid Distrib. Comput. 8(4) (2015)
4. Cassandra. https://www.tutorialspoint.com/cassandra/cassandra_introduction.htm
5. SSO. https://docs.oracle.com/cd/A97337_01/ias102_otn/portal.12/a86782/concepts.htm
6. Port Forwarding. https://en.wikipedia.org/wiki/Port_forwarding
7. Introduction to A-record. https://support.dnsimple.com/articles/a-record/
8. VideoAPI. http://www.axis.com/files/manuals/vapix_video_streaming5237_en_1307.pdf
9. Apache Tomcat. http://tomcat.apache.org/
10. Gallardo, A., Richter, T., Debicki, P., et al.: A rig booking system for on-line laboratories.
In: IEEE EDUCON Education Engineering – Learning Environments and Ecosystems in
Engineering Education Session T1A, p. 6 (2011)
11. Scheduler. https://www.techopedia.com/definition/25078/scheduler
12. Kalyan Ram, B., Arun Kumar, S., Mallikarjuna Sarma, B., Bhaskar, M., Chetan Kulkarni,
S.: Remote software laboratories: facilitating access to engineering softwares online. In: 13th
International Conference on Remote Engineering and Virtual Instrumentation (REV), p. 394
(2016)
13. Kalyan Ram, B., Hegde, S.R., Pruthvi, P., Hiremath, P.S., Jackson, D., Arun Kumar, S.: A
distinctive approach to enhance the utility of laboratories in Indian academia. In: 12th
International Conference on Remote Engineering and Virtual Instrumentation (REV), p. 235
(2015)
