Lab Manual Cloud
Cloud Computing – Types of Cloud
Cloud computing is usually described in one of two ways: either based on the deployment
model, or based on the service that the cloud is offering.
Based on a deployment model, we can classify cloud as
Public,
Private
Hybrid
Community cloud
Based on the service that the cloud is offering, we can classify it as
IaaS (Infrastructure-as-a-Service)
PaaS (Platform-as-a-Service)
SaaS (Software-as-a-Service)
or, Storage, Database, Information, Process, Application, Integration, Security,
Management, Testing-as-a-service
Basically, the programs needed to run a certain application are now more commonly located
on a remote machine owned by another company. This is done so as not to lose quality of
performance due to the processing power of your own computer, to save money on IT support, and
yet remain competitive in the market. The computers that run the applications, store the data,
and operate as a server system are basically what we call "the cloud".
Public Cloud
When we talk about the public cloud, we mean that the whole computing infrastructure is located
on the premises of the cloud computing company that offers the cloud service. The location thus
remains separate from the customer, who has no physical control over the infrastructure.
As public clouds use shared resources, they excel mostly in performance, but they are also the
most vulnerable to various attacks.
GlobalDots offers worldwide Public Cloud service in leading data centers. Our experts will assist
you in choosing the right solution for you.
Private Cloud
Private Cloud provides the same benefits as Public Cloud, but uses dedicated, private hardware.
A private cloud means a cloud infrastructure (network) used solely by one customer/organization.
It is not shared with others, yet it is remotely located. Companies also have the option of
choosing an on-premise private cloud, which is more expensive but gives them physical
control over the infrastructure.
The level of security and control is highest while using a private network. Yet the cost reduction
can be minimal if the company needs to invest in an on-premise cloud infrastructure.
GlobalDots offers worldwide private cloud service in leading data centers.
With our Private Cloud you'll get:
Increased redundancy
Hybrid Cloud
Hybrid cloud, of course, means using both private and public clouds, depending on their
purpose. For example, the public cloud can be used to interact with customers, while their
data is kept secured through a private cloud. Most people associate traditional public cloud
services with elastic scalability and the ability to handle constant shifts in demand. However,
performance issues can arise for certain data-intensive or high-availability workloads.
GlobalDots' offer combines hybrid cloud with bare-metal and virtualized clouds into a unified
environment, allowing your business to optimize for scale, performance and cost simultaneously.
Community cloud
It implies an infrastructure that is shared between organizations, usually with the shared data and
data management concerns. For example, a community cloud can belong to a government of a
single country. Community clouds can be located both on and off the premises.
The most popular services of the cloud are that of either infrastructure, platform, software,
or storage.
As explained before, the most common cloud service is the one offering data storage disks and
virtual servers, i.e. infrastructure. Examples of Infrastructure-as-a-Service (IaaS) companies are
Amazon, Rackspace and Flexiscale.
If the cloud offers a development platform, including an operating system, a programming
language execution environment, a database and a web server, the model is known as
Platform-as-a-Service (PaaS); examples are Google App Engine, Microsoft Azure and Salesforce.
With PaaS, the operating system can be frequently upgraded and developed, services can be
obtained from diverse sources, and programming work can be done by geographically distributed teams.
Software-as-a-Service (SaaS), finally, means that users can access various software applications
on a pay-per-use basis, as opposed to buying licensed programs, which are often very expensive.
Examples of such services include the widely used Gmail and Google Docs.
Advantages of Cloud Computing:
1. Lower Costs
The services are free from capital expenditure: there are no huge hardware costs in cloud
computing. You just pay as you operate, based on your subscription plan.
2. 24 x 7 Availability
Most cloud providers are truly reliable in offering their services, with many of them
maintaining an uptime of 99.9%. Workers can get onto the applications they need from
basically anywhere, and some applications even function offline.
3. Flexibility in Capacity
Cloud computing offers flexible capacity that can be turned off, up or down according to the
circumstances of the user. For instance, if a sales promotion is very popular, capacity can be
added immediately and quickly to avoid losing sales and crashing servers. When the sale is over,
the capacity can be shrunk again to reduce costs.
4. Functioning from Anywhere
Cloud computing offers yet another advantage: working from anywhere across the globe, as
long as you have an internet connection. Even when using critical cloud services that offer
mobile apps, there is no limitation on the device used.
5. Automated Software Updates
In cloud computing, the service providers regularly update your software, including security
updates, so that you do not need to waste crucial time maintaining the system. This leaves you
extra time to focus on the important things, like how to grow your business.
6. Security
Cloud computing offers good protection when a device holding sensitive data is lost. As the
data is stored in the cloud, it can still be accessed even if something happens to your computer.
You can even remotely wipe data from the lost machines to avoid it getting into the wrong
hands.
7. Carbon Footprint
Cloud computing helps organizations reduce their carbon footprint. Organizations
utilize only the amount of resources they need, which helps them avoid over-provisioning.
Hence, resources, and thus energy, are not wasted.
8. Enhanced Collaboration
Cloud applications enhance collaboration by allowing diverse groups of people to meet virtually
and exchange information with the help of shared storage. This capability helps improve
customer service and product development, and also reduces time to market.
9. Control over Documents
Before the cloud came into being, workers had to send files back and forth as email attachments,
to be worked on by a single user at a time, ultimately ending up with a mess of conflicting
titles, formats and file contents. Moving to cloud computing has facilitated central file storage.
10. Easily Manageable
Cloud computing offers simplified and enhanced IT maintenance and management capabilities
through SLA-backed agreements, central resource administration and managed infrastructure. You
get to enjoy a basic user interface without any requirement for installation, and you are assured
guaranteed, timely management, maintenance and delivery of the IT services.
Conclusion:
EXPERIMENT:-2
Title:
Implementation of Virtualization in Cloud Computing to Learn Virtualization
Basics, Benefits of Virtualization in Cloud using Open Source Operating
System.
Theory:
What is Virtualization in Cloud Computing?
Virtualization is the "creation of a virtual (rather than actual) version of something, such
as a server, a desktop, a storage device, an operating system or network resources". In other
words, Virtualization is a technique, which allows sharing a single physical instance of a
resource or an application among multiple customers and organizations. It does by assigning a
logical name to a physical storage and providing a pointer to that physical resource when
demanded.
Types of Virtualization:
1. Hardware Virtualization
2. Operating System Virtualization
3. Server Virtualization
4. Storage Virtualization
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on
the hardware system, it is known as hardware virtualization. The main job of the hypervisor is to
control and monitor the processor, memory and other hardware resources. After virtualization of
the hardware system, we can install different operating systems on it and run different
applications on those OSes.
Usage: Hardware virtualization is mainly done for server platforms, because controlling
virtual machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.
Usage: Operating system virtualization is mainly used for testing applications on different
platforms or versions of an operating system.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on
the server system, it is known as server virtualization.
Usage: Server virtualization is done because a single physical server can be divided into multiple
servers on demand and for balancing the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network
storage devices so that it looks like a single storage device. Storage virtualization is also
implemented by using software applications.
Usage: Storage virtualization is mainly done for back-up and recovery purposes.
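As a practical sketch of hardware/server virtualization on an open-source operating system, the
KVM hypervisor can be used. The commands below are a hedged example for an Ubuntu/Debian host;
the package names, the ISO path and the VM sizing are assumptions to adapt to your environment.
$ egrep -c '(vmx|svm)' /proc/cpuinfo      # a non-zero count means the CPU supports hardware virtualization
$ sudo apt-get install qemu-kvm libvirt-daemon-system virtinst bridge-utils
$ sudo virt-install --name testvm --memory 1024 --vcpus 1 \
      --disk size=8 --cdrom ~/ubuntu-server.iso --os-variant generic
$ sudo virsh list --all                   # lists the virtual machines managed by the hypervisor
Each guest created this way gets its own virtual CPU, memory and disk carved out of the physical
server, which is exactly the hardware/server virtualization described above.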
Levels of Virtualization:
1. Instruction Set Architecture (ISA) Level
At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the help of
ISA emulation. With this approach, it is possible to run a large amount of legacy binary code
written for various processors on any given new hardware host machine. Instruction set
emulation leads to virtual ISAs created on any hardware machine.
2. Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware. On the one hand,
this approach generates a virtual hardware environment for a VM. On the other hand, the process
manages the underlying hardware through virtualization. The idea is to virtualize a computer’s
resources, such as its processors, memory, and I/O devices. The intention is to upgrade the
hardware utilization rate by multiple users concurrently. The idea was implemented in the IBM
VM/370 in the 1960s. More recently, the Xen hypervisor has been applied to virtualize x86-
based machines to run Linux or other guest OS applications.
3. Operating System Level
This refers to an abstraction layer between the traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server and allows the OS instances
to utilize the hardware and software in data centers. The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users.
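OS-level virtualization is what container runtimes such as Docker or LXC provide today. As a
hedged illustration (Docker is not mentioned in the original text; package and image names may
differ by distribution):
$ sudo apt-get install docker.io
$ sudo docker run -it ubuntu /bin/bash    # launch an isolated Ubuntu userspace (a container)
$ sudo docker ps -a                       # list containers, all sharing this host's kernel
Unlike the hardware-level approach, every container here runs on the same operating system
kernel, which is why this level is lighter-weight but offers weaker isolation.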
4. Library Support Level
Most applications use APIs exported by user-level libraries rather than using lengthy system
calls by the OS. Since most systems provide well-documented APIs, such an interface becomes
another candidate for virtualization. Virtualization with library interfaces is possible by
controlling the communication link between applications and the rest of a system through API
hooks. The software tool WINE has implemented this approach to support Windows applications
on top of UNIX hosts. Another example is the vCUDA which allows applications executing
within VMs to leverage GPU hardware acceleration.
5. User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an
application often runs as a process. Therefore, application-level virtualization is also known
as process-level virtualization. The most popular approach is to deploy high-level language
(HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the
operating system, and the layer exports an abstraction of a VM that can run programs written and
compiled to a particular abstract machine definition. Any program written in the HLL and
compiled for this VM will be able to run on it. The Microsoft .NET CLR and Java Virtual
Machine (JVM) are two good examples of this class of VM.
Advantages of Virtualization:
1. Resource optimization
2. Save resource and money
3. Enhance security
4. Easy disaster recovery
Conclusion:
EXPERIMENT:-3
Title:
Study and implementation of Infrastructure as a Service using
OpenStack.
OpenStack:
OpenStack is a free and open source cloud computing software platform that is widely
used in the deployment of Infrastructure-as-a-Service (IaaS) solutions. The core technology within
OpenStack comprises a set of interrelated projects that control pools of processing,
storage and networking resources throughout a data center, which users manage through a
web-based dashboard, command-line tools, or a RESTful API.
Currently, OpenStack is maintained by the OpenStack Foundation, which is a non-profit
corporate organization established in September 2012 to promote OpenStack software as well as
its community. Many corporate giants have joined the project, including GoDaddy, Hewlett
Packard, IBM, Intel, Mellanox, Mirantis, NEC, NetApp, Nexenta, Oracle, Red Hat, SUSE Linux,
VMware, Arista Networks, AT&T, AMD, Avaya, Canonical, Cisco, Dell, EMC, Ericsson,
Yahoo!, etc.
OpenStack Computing Components:
OpenStack has a modular architecture that controls large pools of compute, storage and
networking resources.
Compute (Nova):
OpenStack Compute (Nova) is the fabric controller, a major component of Infrastructure
as a Service (IaaS), and has been developed to manage and automate pools of compute
resources. It works in association with a range of virtualization technologies. It is written in
Python and uses many external libraries such as Eventlet, Kombu and SQLAlchemy.
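Once an OpenStack environment is running (for example the DevStack setup described later in
this experiment), Nova instances can be launched from the command line. A hedged sketch, in
which the image and flavour names must be taken from your own deployment (the CirrOS image
and the m1.tiny flavour are DevStack defaults):
$ source openrc admin admin                  # load the credentials created by DevStack
$ openstack image list                       # note the name of the CirrOS test image
$ openstack flavor list                      # note a small flavour such as m1.tiny
$ openstack server create --flavor m1.tiny --image <image-name> test-instance
$ openstack server list                      # the new instance appears with its status and IP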
Networking (Neutron):
Formerly known as Quantum, Neutron is a specialised component of OpenStack for
managing networks as well as network IP addresses. OpenStack networking makes sure that the
network does not face bottlenecks or complexity issues in cloud deployment. It provides
users with continuous self-service capabilities over the network's infrastructure. Floating IP
addresses allow traffic to be dynamically rerouted to any resource in the IT infrastructure,
and therefore users can redirect traffic during maintenance or in case of a failure. Cloud
users can create their own networks and control traffic along with the connection of servers and
devices to one or more networks. With this component, OpenStack delivers the extension
framework that can be implemented for managing additional network services including
intrusion detection systems (IDS), load balancing, firewalls, virtual private networks (VPN) and
many others.
Dashboard (Horizon):
The OpenStack dashboard (Horizon) provides the GUI (Graphical User Interface) for the
access, provisioning and automation of cloud-based resources. It embeds various third-party
products and services, including advanced monitoring, billing and various management tools.
Identity services (Keystone):
Keystone provides a central directory of the users, which is mapped to the OpenStack
services they are allowed to access. It acts as the centralized authentication system
across the cloud operating system and can be integrated with directory services such as LDAP.
Keystone supports various authentication types including classical username and password
credentials, token-based systems and other log-in management systems.
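As a hedged illustration of Keystone in use from the command line (the project, user and role
names below are made up for the example; the default role may be called 'Member' or '_member_'
in older releases):
$ openstack project create demo-project                      # create a tenant/project
$ openstack user create --project demo-project --password-prompt demo-user
$ openstack role add --project demo-project --user demo-user member
$ openstack token issue                                      # request a token for the current credentials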
Image services (Glance):
OpenStack Image Service (Glance) integrates the registration, discovery and delivery services
for disk and server images. These stored images can be used as templates. It can also be used to
store and catalogue an unlimited number of backups. Glance can store disk and server images in
different types and varieties of back-ends, including Object Storage.
Telemetry (Ceilometer):
OpenStack telemetry services (Ceilometer) provide a single point of contact for the billing systems.
They provide all the counters needed to integrate customer billing across all current and
future OpenStack components.
Orchestration (Heat):
Heat orchestrates a number of cloud applications using templates, with the help of the OpenStack-
native REST API and a CloudFormation-compatible Query API.
Database (Trove): Trove provides database-as-a-service (DBaaS) functionality, integrating and
provisioning relational and non-relational database engines.
Elastic MapReduce (Sahara):
Sahara provides users with a simple means to provision data-processing (Hadoop) clusters on top
of OpenStack.
Installing OpenStack with DevStack:
Here are the steps that need to be followed for the installation.
1. Install Git
$ sudo apt-get install git
2. Clone the DevStack repository and change the directory. The code will set up the cloud
infrastructure.
$ git clone http://github.com/openstack-dev/devstack
$ cd devstack/
3. List the contents of the devstack directory; among the scripts and configuration files is localrc.
/devstack$ ls
localrc is the file in which all the local configurations (local machine parameters) are
maintained. After the first successful stack.sh run, you will see that a localrc file gets created
with the configuration values you specified while running that script. For example:
HOST_IP=xxx.xxx.xxx.xxx
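In addition to HOST_IP, a minimal localrc usually sets the passwords that DevStack will use for
its services. A hedged sketch, in which every value is a placeholder to replace with your own:
ADMIN_PASSWORD=secret                  # password for the admin and demo users
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
The installation itself is then started from the devstack directory:
$ ./stack.sh                           # runs for a while and deploys the OpenStack services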
Cinder on DevStack
Cinder is a block storage service for OpenStack that is designed to allow the use of a reference
implementation (LVM) to present storage resources to end users that can be consumed by the
OpenStack Compute Project (Nova). Cinder is used to virtualise the pools of block storage
devices. It provides end users with a self-service API to request and use the resources, without
requiring any specific, complex knowledge of the location and configuration of the storage where
it is actually deployed.
All the Cinder operations can be performed via any of the following:
1. CLI (Cinder's python-cinderclient command line module)
2. GUI (using OpenStack's GUI project, Horizon)
3. Direct calling of Cinder APIs
Creation and deletion of volumes: To create a 1 GB Cinder volume with no name, run the
following command:
$ cinder create 1
To see more information about the command, just type cinder help <command>
To create a Cinder volume of size 1 GB with a name, run cinder create --display-name
myvolume 1, and then list the volumes:
$ cinder list
ID  | Status    | Display Name | Size | Volume Type | Bootable | Attached To
id1 | Available | myvolume     | 1    | None        | False    |
id2 | Available | None         | 1    | None        | False    |
To delete the first volume (the one without a name), use the cinder delete
<volume_id> command. If we run cinder list quickly enough, the status of that volume can be seen
changing to 'deleting', and after some time the volume will be deleted:
$ cinder list
ID  | Status    | Display Name | Size | Volume Type | Bootable | Attached To
id1 | Available | myvolume     | 1    | None        | False    |
id2 | Deleting  | None         | 1    | None        | False    |
Volume snapshots can be created with the cinder snapshot-create <volume_id> command and
listed as follows:
$ cinder snapshot-list
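A volume can also be attached to a running Nova instance. A hedged sketch, using the
illustrative server and volume names from the earlier examples:
$ openstack server add volume test-instance myvolume      # attach the Cinder volume to the instance
$ cinder list                                             # the 'Attached To' column now shows the instance ID
$ openstack server remove volume test-instance myvolume   # detach it again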
EXPERIMENT:-4
Theory:
Web Application and Cloud Computing
The services are accessible anywhere in the world, with the cloud appearing as a single point of
access for all the computing needs of consumers. New advances in processors, virtualization
technology, disk storage, broadband internet access and fast, inexpensive servers have all
combined to make cloud computing a compelling paradigm. Cloud computing allows users and
companies to pay for and use the software and storage that they need, when they need them and,
as wireless broadband connection options grow, where they need them. This type of software
deployment is called Software as a Service (SaaS).
In the cloud computing paradigm, all of the above components are treated as services
and are in the “cloud”; users do not have to invest in or pay huge licensing fees to own any of the
above resources. Infrastructure resources are storage, computing power and so forth, which can
take advantage of already existing technologies such as grid computing. The software resources
include application servers, database servers, IDE and so on. The application resources include
applications deployed as SaaS for example Google docs. The business process resources can be
standard set of common business utilities given as services to clients. Example is ERP software
such as SAP and Oracle providing standard business workflows in the cloud. Some of the major
players in cloud computing are Amazon, Google, IBM, Joyent, Microsoft and Salesforce. Current
cloud computing services include storage services, spam filtering, running applications written in
high-level programming languages such as Java, and the use of various kinds of databases. In 2008,
Google released Google App Engine, a cloud-based platform for running applications, for both
individuals and businesses. Microsoft released Windows Azure, a cloud-based operating
system, as a Community Technology Preview.
A web feed (or news feed) is a data format used for providing users with frequently
updated content. Content distributors syndicate a web feed, thereby allowing users to subscribe
to it. Making a collection of web feeds accessible in one spot is known as aggregation, which is
performed by an aggregator. A web feed is also sometimes referred to as a syndicated feed.
A typical scenario of web feed use is: a content provider publishes a feed link on their
site which end users can register with an aggregator program (also called a feed reader or a news
reader) running on their own machines; doing this is usually as simple as dragging the link from
the web browser to the aggregator. When instructed, the aggregator asks all the servers in its feed
list if they have new content; if so, the aggregator either makes a note of the new content or
downloads it. Aggregators can be scheduled to check for new content periodically. Web feeds
are an example of pull technology, although they may appear to push content to the user.
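A minimal sketch of the polling step an aggregator performs, using shell tools (the feed URL is a
placeholder):
$ curl -s https://example.com/feed.xml | grep -o '<title>[^<]*</title>'   # fetch the feed and list the entry titles
Running this on a schedule, comparing the result with the previously seen entries, and fetching
anything new is essentially what a feed reader does.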
The kinds of content delivered by a web feed are typically HTML (webpage content) or
links to webpages and other kinds of digital media. Often when websites provide web feeds to
notify users of content updates, they only include summaries in the web feed rather than the full
content itself. Web feeds are operated by many news websites, weblogs, schools, and podcasters.
Benefits
1. Users do not disclose their email address when subscribing to a feed and so are not
increasing their exposure to threats associated with email: spam, viruses, phishing, and
identity theft.
2. Users do not have to send an unsubscribe request to stop receiving news. They simply
remove the feed from their aggregator.
3. The feed items are automatically sorted in that each feed URL has its own sets of entries
(unlike an email box where messages must be sorted by user-defined rules and pattern
matching).
Conclusion:
EXPERIMENT:-5
Title:
Write a Program to Create and Manage User Accounts and Groups in ownCloud
by Installing Administrative Features.
Theory:
What is OwnCloud?
OwnCloud is a suite of client–server software for creating and using file hosting services.
OwnCloud is functionally very similar to the widely used Dropbox, with the primary functional
difference being that the Server Edition of ownCloud is free and open source, thereby
allowing anyone to install and operate it without charge on a private server. It also supports
extensions that allow it to work like Google Drive, with online document editing, calendar and
contact synchronization, and more. Its openness avoids enforced quotas on storage space or the
number of connected clients, instead having hard limits (like on storage space or number of
users) defined only by the physical capabilities of the server.
The development of ownCloud was announced in January 2010, in order to provide a free
software replacement to proprietary storage service providers. The company was founded in
2011 and forked the code away from KDE to GitHub.
Overview:
Design:
For desktop machines to synchronize files with their ownCloud server, desktop clients
are available for PCs running Windows, macOS, FreeBSD or Linux. Mobile clients exist
for iOS and Android devices.
Files and other data (such as calendars, contacts or bookmarks) can also be accessed, managed,
and uploaded using a web browser without any additional software.
Any updates to the file system are pushed to all computers and mobile devices connected to a
user's account.Encryption of files may be enforced by the server administrator.
The ownCloud server is written in the PHP and JavaScript scripting languages. For remote
access, it employs sabre/dav, an open-source WebDAV server.
OwnCloud is designed to work with several database management systems,
including SQLite, MariaDB, MySQL, Oracle Database, and PostgreSQL.
Features:-
ownCloud files are stored in conventional directory structures, and can be accessed
via WebDAV if necessary. User files are encrypted both at rest and during transit. ownCloud can
synchronise with local clients running Windows (Windows XP, Vista, 7 and 8), macOS (10.6 or
later), or various Linux distributions.
ownCloud users can manage calendars (CalDAV), contacts (CardDAV), scheduled tasks and
streaming media (Ampache) from within the platform.
From the administration perspective, ownCloud permits user and group administration
(via OpenID or LDAP). Content can be shared by defining granular read/write permissions
between users and/or groups. Alternatively, ownCloud users can create public URLs when
sharing files. Logging of file-related actions is available in the Enterprise and Education service
offerings.
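For the assignment itself, user accounts and groups can be created on the server with ownCloud's
occ command-line tool. A hedged sketch for a standard Apache installation (the installation path,
web-server user, and the account and group names are examples; the command options are those of
recent ownCloud 10 releases):
$ cd /var/www/owncloud
$ sudo -u www-data php occ group:add students                    # create a group
$ export OC_PASS=ChangeMe123                                     # password read by --password-from-env
$ sudo -u www-data php occ user:add --password-from-env \
      --display-name="Student One" --group="students" student1   # create a user inside the group
$ sudo -u www-data php occ user:list                             # verify the account exists
$ sudo -u www-data php occ group:list                            # verify the group and its membership
The same operations are also available through the Users page of the web interface and through
ownCloud's Provisioning API.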
Furthermore, users can interact with the browser-based ODF-format word
processor, bookmarking service, URL shortening suite, gallery, RSS feed reader and document
viewer tools from within ownCloud. For additional extensibility, ownCloud can be augmented
with "one-click" applications and connection to Dropbox, Google Drive and Amazon S3.
All ownCloud clients (Desktop, iOS, Android) support the OAuth 2 standard for Client
Authentication.
Enterprise Features:-
For Enterprise customers, ownCloud GmbH offers apps with additional functionality. They are
mainly useful for large organizations with more than 500 users. An Enterprise subscription includes
support services.
Commercial features include end-to-end encryption, ransomware and antivirus protection,
branding, document classification, single sign-on via Shibboleth/SAML, and more.
Distribution:-
ownCloud server and clients may be downloaded from the ownCloud website and from third-
party repositories, such as Google Play and Apple iTunes, and repositories maintained by Linux
distributions.
In 2014, a dispute arose between ownCloud and Ubuntu regarding the latter allegedly neglecting
maintenance of packages, resulting in the temporary removal of ownCloud from the Ubuntu
repository.
ownCloud has been integrated with the GNOME desktop. Additional projects that use or link to
ownCloud include a Raspberry Pi project to create a cloud storage system using the Raspberry
Pi's small, low-energy form-factor.
Conclusion:
EXPERIMENT:-6
Title:
Design and develop custom Application (Mini Project) using Salesforce Cloud.
Theory:
Introduction
Salesforce.com Inc. is an American cloud-based software company headquartered
in San Francisco, California. Though the bulk of its revenue comes from a customer relationship
management (CRM) product, Salesforce also sells a complementary suite of enterprise
applications focused on customer service, marketing automation, analytics and application
development.
Salesforce is the primary enterprise offering within the Salesforce platform. It provides
companies with an interface for case management and task management, and a system for
automatically routing and escalating important events. The Salesforce customer portal provides
customers the ability to track their own cases, includes a social networking plug-in that enables
the user to join the conversation about their company on social networking websites, provides
analytical tools and other services including email alert, Google search, and access to customers'
entitlement and contracts.
Community Cloud
Community Cloud provides Salesforce customers the ability to create online web
properties for external collaboration, customer service, channel sales, and other custom portals in
their instance of Salesforce. Tightly integrated to Sales Cloud, Service Cloud, and App Cloud,
Community Cloud can be quickly customized to provide a wide variety of web properties.
What is the difference between custom application and console application in sales force?
A custom application is a collection of tabs, objects, etc., that function together to solve a
particular problem.
A console application uses a specific Salesforce UI - the console. Console applications
are intended to enhance productivity by allowing everything to be done from a single, tabbed,
screen.
Conclusion:
EXPERIMENT:-7
Title:
Assignment to install and configure Google App Engine.
Theory:
Introduction
Google App Engine is a web application hosting service. By “web application,” we mean
an application or service accessed over the Web, usually with a web browser: storefronts with
shopping carts, social networking sites, multiplayer games, mobile applications, survey
applications, project management, collaboration, publishing, and all the other things we’re
discovering are good uses for the Web. App Engine can serve traditional website content too,
such as documents and images, but the environment is especially designed for real-time dynamic
applications. Of course, a web browser is merely one kind of client: web application
infrastructure is well suited to mobile applications, as well.
Google App Engine:
The standard features of App Engine are covered by the deprecation policy and the service-level
agreement of App Engine. Any changes made to such a feature are backward-compatible, and the
implementation of such a feature is usually stable. These features include data storage, retrieval
and search; communications; process management; computation; and app configuration and management.
Data storage, retrieval, and search include features such as HRD migration tool, Google
Cloud SQL, logs, datastore, dedicated Memcache, blobstore, Memcache and search.
Communications include features such as XMPP, channel, URL fetch, mail, and Google
Cloud Endpoints.
Process management includes features like scheduled tasks and task queue
Computation includes images.
App management and configuration cover app identity, users, capabilities, traffic splitting,
modules, SSL for custom domains, remote access, and multitenancy.
Advantages of Google App Engine:
Scalability
For any app's success, this is among the deciding factors. Google builds its own apps
using GFS, Bigtable and other such technologies, which are available to you when you
utilize the Google App Engine to create apps. You only have to write the code for the app,
and Google looks after the testing, on account of the automatic scaling feature that the App
Engine has. Regardless of the amount of data your app stores or the number of users it serves,
the App Engine can meet your needs by scaling up or down as required.
Cost Savings
You don’t have to hire engineers to manage your servers or to do that yourself. You can
invest the money saved into other parts of your business.
Platform Independence
You can move all your data to another environment without any difficulty, as there are not
many dependencies on the App Engine platform.
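As a hedged sketch of the installation and first deployment using the Google Cloud SDK on an
Ubuntu machine (the installation method, region, runtime and file listing are assumptions to adapt;
the application code itself must already exist in the current directory):
$ sudo snap install google-cloud-cli --classic      # one way to install the Google Cloud SDK
$ gcloud init                                       # sign in and select or create a Cloud project
$ gcloud app create --region=us-central             # enable App Engine for the project (one time only)
$ ls                                                # an App Engine standard app needs at least these files
app.yaml  main.py  requirements.txt
$ cat app.yaml
runtime: python39
$ gcloud app deploy                                 # upload the application and start serving it
$ gcloud app browse                                 # open https://<project-id>.appspot.com in a browser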
Conclusion:
EXPERIMENT:-8
Title:
Design an Assignment to retrieve, verify, and store user credentials using
Firebase Authentication, the Google App Engine standard environment, and
Google Cloud Datastore.
Theory:
Firebase Authentication
Most apps need to know the identity of a user. Knowing a user's identity allows an app to
securely save user data in the cloud and provide the same personalized experience across all of
the user's devices.
Firebase Authentication integrates tightly with other Firebase services, and it leverages
industry standards like OAuth 2.0 and OpenID Connect, so it can be easily integrated with your
custom backend.
To sign a user into your app, you first get authentication credentials from the user. These
credentials can be the user's email address and password, or an OAuth token from a federated
identity provider. Then, you pass these credentials to the Firebase Authentication SDK. Our
backend services will then verify those credentials and return a response to the client.
After a successful sign in, you can access the user's basic profile information, and you
can control the user's access to data stored in other Firebase products. You can also use the
provided authentication token to verify the identity of users in your own backend services.
Authenticating Users on App Engine Using Firebase:
Now we show how to retrieve, verify, and store user credentials using Firebase Authentication,
the Google App Engine standard environment, and Google Cloud Datastore.
The document walks you through a simple note-taking application called Firenotes that stores
users' notes in their own personal notebooks. Notebooks are stored per user, and identified by
each user's unique Firebase Authentication ID. The application has the following components:
1. The frontend configures the sign-in user interface and retrieves the Firebase
Authentication ID. It also handles authentication state changes and lets users see their
notes.
2. FirebaseUI is an open-source, drop-in solution that handles user login, linking multiple
providers to one account, recovering passwords, and more. It implements authentication
best practices for a smooth and secure sign-in experience.
3. The backend verifies the user's authentication state and returns user profile information as
well as the user's notes.
The application stores user credentials in Cloud Datastore by using the NDB client
library, but you can store the credentials in a database of your choice.
The following diagram shows how the frontend and backend communicate with each other and
how user credentials travel from Firebase to the database.
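The backend must treat the Firebase ID token sent by the frontend as untrusted until it has been
verified. As a quick, hedged way to see what a token resolves to, the Firebase Authentication REST
API can be called directly; API_KEY stands for your Firebase project's web API key and ID_TOKEN
for a token obtained by the frontend after sign-in (in the application itself, verification is done
server-side rather than with curl):
$ curl -s -X POST \
    "https://identitytoolkit.googleapis.com/v1/accounts:lookup?key=${API_KEY}" \
    -H 'Content-Type: application/json' \
    -d "{\"idToken\": \"${ID_TOKEN}\"}"
If the token is valid, the response contains the account's localId (the Firebase Authentication ID
used to key the notebooks), email and sign-in provider; otherwise an error is returned.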
Objectives:
Costs:
This tutorial uses billable components of Cloud Platform, including:
Conclusion:
EXPERIMENT:-9
Title:
Theory:
What is Apex?
Apex is a proprietary language developed by Salesforce.com. As per the official definition,
Apex is a strongly typed, object-oriented programming language that allows developers to
execute the flow and transaction control statements on the Force.com platform server in
conjunction with calls to the Force.com API.
It has a Java-like syntax and acts like database stored procedures. It enables the developers to
add business logic to most system events, including button clicks, related record updates, and
Visual force pages. Apex code can be initiated by Web service requests and from triggers on
objects. Apex is included in Performance Edition, Unlimited Edition, Enterprise Edition, and
Developer Edition.
Strongly Typed
Apex is a strongly typed language. It uses direct references to schema objects such as sObjects,
and any invalid reference fails quickly if the object is deleted or is of the wrong data type.
Multitenant Environment
Apex runs in a multitenant environment. Consequently, the Apex runtime engine is
designed to guard closely against runaway code, preventing it from monopolizing shared
resources. Any code that violates limits fails with easy-to-understand error messages.
Upgrades Automatically
Apex is upgraded as part of Salesforce releases. We don't have to upgrade it manually.
Easy Testing
Apex provides built-in support for unit test creation and execution, including test results
that indicate how much code is covered, and which parts of your code can be more
efficient.
Apex Applications
We can use Apex when we want to −
Perform complex validation over multiple objects at the same time and also custom
validation implementation.
Create complex business processes that are not supported by existing workflow
functionality or flows.
Create custom transactional logic (logic that occurs over the entire transaction, not just
with a single record or object) like using the Database methods for updating the records.
Perform some logic when a record is modified or modify the related object's record when
there is some event which has caused the trigger to fire.
Flow of Actions
There are two sequences of actions: one when the developer saves the code, and one when an end
user performs an action that invokes the Apex code, as described below.
Developer Action
When a developer writes and saves Apex code to the platform, the platform application server
first compiles the code into a set of instructions that can be understood by the Apex runtime
interpreter, and then saves those instructions as metadata.
Since Apex is the proprietary language of Salesforce.com, it does not support some features
that a general-purpose programming language does. The following are a few features that Apex
does not support −
You cannot change the standard SFDC provided functionality and also it is not possible
to prevent the standard functionality execution.
Variable Declaration
Being a strongly typed language, Apex requires every variable to be declared with a data type. For
example, a variable such as lstAcc can be declared with the data type List of Accounts (List<Account>).
SOQL Query
A SOQL query is used to fetch data from the Salesforce database. For example, a query on the
Account object returns the matching Account records, which can be stored in the list declared above.
Loop Statement
The loop statement is used for iterating over a list or iterating over a piece of code a
specified number of times. When looping over the list returned by the query above, the number of
iterations is the same as the number of records fetched.
Conclusion:
Reference: https://www.tutorialspoint.com/apex/apex_overview.html
EXPERIMENT:-10
Title:
Design an Assignment based on Working with Manjrasoft Aneka
Software
Theory:
A company named Manjrasoft is focused on the creation of innovative software
technologies for simplifying the development and deployment of applications on private or
public clouds. Its product Aneka plays the role of an Application Platform-as-a-Service for cloud
computing. Aneka supports various programming models, including Task Programming, Thread
Programming and MapReduce Programming, and provides tools for the rapid creation of
applications and their seamless deployment on private or public clouds.
Aneka technology primarily consists of two key components: an SDK (Software Development Kit)
containing APIs and tools for the rapid development of applications using these programming
models, and a Runtime Engine and Platform for managing the deployment and execution of
applications on private or public clouds.
Business Value
Improved reliability
Simplicity
Faster time to value
Operational Agility
Definite application performance enhancement
Optimizing the capital expenditure and operational expenditure.
All these features make Aneka a winning solution for enterprise customers in
the Platform-as-a-Service scenario.
Build
Manage
Aneka management includes a Graphical User Interface (GUI) and APIs to set up, monitor,
manage and maintain remote and global Aneka compute clouds. Aneka also has an accounting
mechanism and manages priorities and scalability based on SLA/QoS, which enables dynamic
provisioning.
Conclusion:
Reference: http://www.manjrasoft.com/products.html