IoT Notes
Ans: Smart Home: a smart home is one in which devices can communicate with each other as well as with their
environment. A smart home gives the owner the ability to customize and control the home environment for
increased security.
Wearables: Wearable IoT tech is a very large domain and consists of an array of devices. These devices broadly
cover fitness, health and entertainment requirements. The prerequisite for IoT technology in wearable
applications is to be highly energy efficient (ultra-low power) and small in size.
In Retail: Imagine the scenario in which your home appliances are able to notify you about a shortage of supplies
or even order them on their own. Applications for tracking goods, real-time exchange of inventory information
between suppliers and retailers, and automated delivery all exist already.
Smart Cities: Smart surveillance, safer and automated transportation, smarter energy management systems and
environmental monitoring are all examples of Internet of Things applications for smart cities.
Agriculture: With the continuous increase in the world's population, the demand for food has risen sharply.
Governments are helping farmers to use advanced techniques and research to increase food production.
Automotive/Transportation: Google's self-driving cars are well known, and IoT is slowly making connected cars a
reality. Application: a connected car is a vehicle that is able to optimize its own operation and
maintenance, as well as the comfort of its passengers, using on-board sensors and internet connectivity.
For Energy Management: Power grids of the future will not only be smart but also highly reliable. The
smart grid concept is becoming very popular. The basic idea behind smart grids is to collect data in an
automated fashion and analyze the behavior of consumers and suppliers in order to improve the efficiency and reliability of power distribution.
4) What is Lightweight IP?
Ans: lwIP (lightweight IP) is a widely used open-source TCP/IP stack designed for embedded systems. The focus of
the lwIP TCP/IP implementation is to reduce resource usage while still having a full-scale TCP. This makes
lwIP suitable for use in embedded systems. lwIP has spurred a lot of interest and is today being used in many
commercial products. It has been ported to multiple platforms and operating systems and can run with or
without an OS. Main features include:
- Protocols: IP, IPv6, ICMP, UDP, TCP, IGMP
- APIs: specialized APIs for enhanced performance, optional Berkeley-alike socket API
- Extended features: IP forwarding over multiple network interfaces, TCP congestion control, RTT estimation
and fast recovery/fast retransmit
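To give a feel for how lwIP is used in practice, here is a minimal sketch of a UDP echo service written against lwIP's raw (callback) API. It assumes a recent lwIP release whose headers are on the include path and a stack that has already been initialised elsewhere; treat it as an illustration rather than production code.

    #include "lwip/udp.h"
    #include "lwip/pbuf.h"

    /* Callback invoked by lwIP whenever a datagram arrives on our PCB. */
    static void echo_recv(void *arg, struct udp_pcb *pcb, struct pbuf *p,
                          const ip_addr_t *addr, u16_t port)
    {
        if (p != NULL) {
            /* Send the received buffer straight back to the sender. */
            udp_sendto(pcb, p, addr, port);
            pbuf_free(p);   /* we own the pbuf and must release it */
        }
    }

    /* Create a UDP PCB, bind it to port 7 (echo) and register the callback. */
    void echo_init(void)
    {
        struct udp_pcb *pcb = udp_new();
        if (pcb != NULL) {
            udp_bind(pcb, IP_ADDR_ANY, 7);
            udp_recv(pcb, echo_recv, NULL);
        }
    }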
5)Define the design principles of networks.
Ans:
Application drives the design requirements. The network is the structure that facilitates the
application. Without understanding the application characteristics and its requirements, the network cannot be
designed.
Network design requires experienced personnel.The network design engineer requires broad practical
experience combined with a theoretical understanding of the technologies.
Networks are designed in a lab rather than on paper. A lab is the single most important design tool. Given the
complexity of the more advanced internetwork designs, a design is not valid until it has been verified in the lab.
Network design usually involves a number of trade-offs. Cost versus performance and availability is usually
the fundamental design trade-off.
Keep it simple. Unnecessary additional complexity is likely to increase the support cost and may make the
network more difficult to manage.
Do not work to a set of rigid and possibly over-generalized design rules or templates.
Only use mature and well-tested software and hardware for all devices on the network.
The fundamental design plan must not be compromised. The design may have to show some degree of
flexibility and evolve with the network. This relates to the requirement for a scalable design.
No changes should be made to the original design without the endorsement of the engineers who formulated
that design.
Predictability and consistency in performance, resilience and scalability are characteristics of a well-designed
network.
Design requires a small capable team. No one person, no matter how skilled or experienced, should be the
single and absolute authority in designing the network.
For network design, there is no one "good network design," and there is certainly no "one size
fits all." A good network design is based on many concepts, some of which are summarized by
the following key general principles:
Examine for single points of failure carefully. There should be redundancy in your
network, so that a single link or hardware failure does not isolate any portion of the
network resulting in those users losing access to network resources. The amount of
redundancy required varies from network to network.
Two aspects of redundancy need to be considered: backup and load sharing. A backup
path should be available as an alternative to the primary path so that if the primary path
fails, traffic will automatically run across the backup path. Load sharing happens when
two or more paths to a destination exist and both can be used to share the network load.
Characterize application and protocol traffic. The application data flow profiles the
client/server communication across your network and this profile is essential to allocate
sufficient resources for your users.
Analyze available bandwidth. There should not be a significant difference in available
bandwidth between the different layers of the hierarchical model. It is important to keep in
mind that the hierarchical model refers to conceptual layers providing functionality in your
network, not an actual physical separation.
Build networks using a hierarchical or modular model. Hierarchy in your network enables
separate segments to be networked together. A hierarchical network design gives you
three conceptual layers in your network (core, distribution, and access), with each layer
providing different functionality.
3. Consumer Privacy
The challenge of securing the personal data of individuals as the consumer goods they use
become increasingly digitized. This is particularly challenging because the information generated
by IoT is key to delivering better services and to the management of such devices. Security will
have to be integrated as part of the IoT infrastructure.
4. Data
The impact of the IoT on storage is two-pronged in types of data to be stored: personal data
(consumer-driven) and big data (enterprise-driven). IT administrators, who are already
tasked with keeping the storage centers running, will also have to figure out how to store,
protect and make accessible all the incoming data.
5. Storage Management
However, even if the capacity is available now, there will be further demands made on
storage, and these will have to be addressed as the need to access this information
becomes more important. Businesses will have to weigh up the economics of storage against
the value of IoT information.
Constrained Application Protocol (CoAP) is an Internet Application Protocol for constrained devices.
It enables those constrained devices to communicate with the wider Internet using similar protocols.
CoAP is designed for use between devices on the same constrained network, between devices and
general nodes on the Internet, and between devices on different constrained networks both joined
by an internet. CoAP is also being used via other mechanisms, such as SMS on mobile
communication networks. CoAP is an application layer protocol that is intended for use in resource-
constrained internet devices, such as WSN nodes. CoAP is designed to easily translate to HTTP for
simplified integration with the web, while also meeting specialized requirements such as multicast
support, very low overhead, and simplicity.
Features
Low header overhead and parsing complexity.
URI and content-type support.
Support for the discovery of resources provided by known CoAP services.
Simple subscription for a resource, and resulting push notifications.
Simple caching based on max-age.
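To make the "very low overhead" point concrete, the sketch below hand-packs the fixed 4-byte CoAP header defined in RFC 7252 (version, type, token length, code, message ID). It is an illustrative fragment, not taken from any particular CoAP library.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Pack the 4-byte fixed CoAP header (RFC 7252) into buf.
     * ver:  protocol version (currently 1)
     * type: 0=CON, 1=NON, 2=ACK, 3=RST
     * tkl:  token length (0..8)
     * code: (class << 5) | detail, e.g. 0.01 = GET
     * mid:  16-bit message ID
     * Returns the number of bytes written (always 4). */
    static size_t coap_pack_header(uint8_t *buf, uint8_t ver, uint8_t type,
                                   uint8_t tkl, uint8_t code, uint16_t mid)
    {
        buf[0] = (uint8_t)((ver & 0x03) << 6) | (uint8_t)((type & 0x03) << 4) | (tkl & 0x0F);
        buf[1] = code;
        buf[2] = (uint8_t)(mid >> 8);      /* message ID, network byte order */
        buf[3] = (uint8_t)(mid & 0xFF);
        return 4;
    }

    int main(void)
    {
        uint8_t hdr[4];
        /* A confirmable GET (code 0.01), no token, message ID 0x1234: prints "40 01 12 34". */
        coap_pack_header(hdr, 1, 0, 0, (0 << 5) | 1, 0x1234);
        printf("%02X %02X %02X %02X\n", hdr[0], hdr[1], hdr[2], hdr[3]);
        return 0;
    }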
11. Give difference between IPv4 and IPv6?
The size of an address in IPv4 is 32 bits, whereas IPv6 addresses are 128 bits.
In IPv4, IP addresses appear as four one-byte decimal numbers separated by
dots (e.g. 192.168.1.1), while in IPv6, addresses appear as hexadecimal numbers
separated by colons (e.g. fe80::d4a8:6435:d2d8:d9f3).
Clients using IPv4 addresses use the Dynamic Host Configuration Protocol (DHCP)
server to establish an address each time they log into a network. This address
assignment process is called stateful auto-configuration. IPv6 supports a revised
DHCPv6 protocol that supports stateful auto-configuration, and supports stateless
auto-configuration of nodes. Stateless auto-configuration does not require a DHCP
server to obtain addresses
IPv4 vs IPv6:
Packet size: IPv4 requires 576 bytes, with fragmentation optional; IPv6 requires 1280 bytes without fragmentation.
Packet fragmentation: performed by routers and sending hosts in IPv4; by sending hosts only in IPv6.
Security: IPv4 lacks security and was never designed to be secure (it was originally designed for an isolated
military network, then adapted for a public educational and research network); IPv6 has built-in strong
security (encryption and authentication).
Header size: the IPv4 header is 20 bytes; the IPv6 header is double that, at 40 bytes.
Header fields: the IPv4 header has many fields (13); the IPv6 header has fewer fields (8).
ISP support: ISPs have IPv4 connectivity, or both IPv4 and IPv6; many ISPs do not have IPv6 connectivity.
Geographical distribution: IPv4 addresses are unevenly distributed geographically (more than 50% in the USA);
IPv6 has no geographic limitation.
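The 32-bit versus 128-bit difference can also be seen directly in code: the POSIX inet_pton() call parses both notations into their binary forms. A minimal sketch, assuming a POSIX system with <arpa/inet.h>:

    #include <arpa/inet.h>
    #include <stdio.h>

    int main(void)
    {
        struct in_addr  v4;   /* 4 bytes  (32 bits)  */
        struct in6_addr v6;   /* 16 bytes (128 bits) */

        /* Dotted-decimal IPv4 notation. */
        if (inet_pton(AF_INET, "192.168.1.1", &v4) == 1)
            printf("IPv4 parsed into %zu bytes\n", sizeof(v4));

        /* Colon-separated hexadecimal IPv6 notation ("::" compresses runs of zero groups). */
        if (inet_pton(AF_INET6, "fe80::d4a8:6435:d2d8:d9f3", &v6) == 1)
            printf("IPv6 parsed into %zu bytes\n", sizeof(v6));

        return 0;
    }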
12.What is Stereolithography
Stereolithography (SLA) is a type of additive manufacturing process used for creating
tangible 3-dimensional objects from CAD designs or 3D-scanned digital files. The process,
invented and named by Chuck Hull in 1986, uses photo-reactive (usually acrylic-based)
polymers that instantly harden when exposed to an ultraviolet beam. The first step in the
process involves an SLA machine being uploaded with a file containing a digital object that
has been “sliced” into hundreds or even thousands of individual layers. The machine creates
the first physical layer of the object by submerging a perforated metal platform into a vat of
the photopolymer to a depth equivalent to the corresponding digital layer in the computer file.
This layer is then hardened by a small ultraviolet laser beam tracing the predetermined
pattern. Once this first layer is completed, the platform descends by the depth of the next layer,
leaving a thin coating of uncured polymer on top of the hardened layer. The laser
repeats the process and seamlessly affixes the two layers together. The SLA machine
continues by sequentially adding each individual layer until the object is finished.
The prototype, therefore, is optimized for ease and speed of development and also the ability
to change and modify it. Many Internet of Things projects start with a prototyping
microcontroller, connected by wires to components on a prototyping board, such as a
“breadboard”, and housed in some kind of container (perhaps an old tin or a laser-cut box).
This prototype is relatively inexpensive, but you will most likely end up with something that
is serviceable rather than polished and that will cost more than someone would be willing to
pay for it in a shop. At the end of this stage, you’ll have an object that works. It may be useful
for you already. It may be a talking point to show your friends. And if you are planning to
move to production, it’s a demonstrable product that you can use to convince yourself, your
business partners, and your investors that your idea has legs and is worth trying to sell.
Finally, the process of manufacture will iron out issues of scaling up and polish. You might
substitute prototyping microcontrollers and wires with smaller chips on a printed circuit
board (PCB), and pieces improvised out of 3D-printed plastic with ones commercially
injection-moulded in their thousands. The final product will be cheaper per unit and more
professional, but will be much more expensive to change.
In the open source model, you release the sources that you use to create the project to the
whole world. You might publish the software code to GitHub (http://github.com), the
electronic schematics using Fritzing (http://fritzing.org) or SolderPad (http://solderpad.com),
and the design of the housing/shell to Thingiverse (http://www.thingiverse.com). If you’re
not used to this practice, it might seem crazy: why would you give away something that you
care about, that you’re working hard to accomplish? There are several reasons to give away
your work:
◾ You may gain positive comments from people who liked it.
◾ It acts as a public showcase of your work, which may affect your reputation and lead to new opportunities.
◾ People who used your work may suggest or implement features or fix bugs.
◾ By generating early interest in your project, you may get support and mindshare of a quality that it would be hard to pay for.
Of course, this is also a gift economy: you can use other people’s free and open source
contributions within your own project. A few words of encouragement from someone who
liked your design and your blog post about it may be invaluable to get you moving when you
have a tricky moment on it. A bug fix from someone who tried using your code in a way you
had never thought of may save you hours of unpleasant debugging later. And if you’re very
lucky, you might become known as “that bubble machine guy” or get invited to conferences
to talk about your LED circuit. If you have a serious work project, you may still find that
open source is the right decision, at least for some of your work.
15.What is 3D printing
Additive manufacturing, or 3D printing as it’s often called, is fast becoming one of the most
popular forms in rapid prototyping—largely down to the ever-increasing number of personal
3D printers, available at ever-falling costs. The term additive manufacturing is used because
all the various processes which can be used to produce the output start with nothing and add
material to build up the resulting model. This is in contrast to subtractive manufacturing
techniques. Various processes are used for building up the physical model, which affect which
materials the printer can use, among other things. However, all of them take a three-
dimensional computer model as the input. The software slices the computer model into many
layers, each a fraction of a millimetre thick, and the physical version is built up layer by
layer. Another common trick with 3D printing is to print pieces which include moving parts: it
is possible to print all the parts at the same time and print them ready-assembled. This effect
is achieved with the use of what is called “support material”.
22.Explain Raspberry Pi
The Raspberry Pi is a series of small single-board computers developed in the United
Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in
schools and in developing countries. The original model became far more popular than
anticipated, selling outside of its target market for uses such as robotics. Peripherals
(including keyboards, mice and cases) are not included with the Raspberry Pi. Some accessories
however have been included in several official and unofficial bundles. The Raspberry Pi is
contained on a single circuit board and features ports for:
HDMI
USB 2.0
Composite video
Analog audio
Power
Ethernet
SD Card
The computer runs entirely on open-source software and gives students the ability to mix and
match software according to the work they wish to do. The Raspberry Pi is believed to be an
ideal learning tool, in that it is cheap to make, easy to replace and needs only a keyboard and a
TV to run. These same strengths also make it an ideal product to jumpstart computing in the
developing world.
23.Explain CNC Milling
CNC milling is a specific form of computer numerical controlled (CNC) machining. Milling itself
is a machining process similar to both drilling and cutting, and able to achieve many of the
operations performed by cutting and drilling machines. Like drilling, milling uses a rotating
cylindrical cutting tool. However, the cutter in a milling machine is able to move along multiple
axes, and can create a variety of shapes, slots and holes. In addition, the work-piece is often
moved across the milling tool in different directions, unlike the single axis motion of a drill.
CNC milling devices are the most widely used type of CNC machine. Typically, they are
grouped by the number of axes on which they operate, which are labeled with various letters.
X and Y designate horizontal movement of the work-piece, Z represents vertical, or up-and-
down, movement, while W represents diagonal movement across a vertical plane. Most
machines offer from 3 to 5 axes, providing performance along at least the X, Y and Z axes.
These devices are extremely useful because they are able to produce shapes that would be
nearly impossible using manual tooling methods. Most CNC milling machines also integrate a
device for pumping cutting fluid to the cutting tool during machining.
24.Explain MQ Telemetry Transport Protocol
MQTT (Message Queuing Telemetry Transport) is a lightweight
messaging protocol that provides resource-constrained network clients with a simple
way to distribute telemetry information. The protocol, which uses a publish/subscribe
communication pattern, is used for machine-to-machine (M2M) communication and
plays an important role in the Internet of Things (IoT). MQTT allows devices to send
(publish) information about a given topic to a server that functions as an
MQTT message broker. The broker then pushes the information out to those clients
that have previously subscribed to that topic. To a human, a topic looks like a
hierarchical file path. Clients can subscribe to a specific level of a topic's hierarchy or
use a wildcard character to subscribe to multiple levels. MQTT is a good choice for
wireless networks that experience varying levels of latency due to
occasional bandwidth constraints or unreliable connections. Should the connection
from a subscribing client to the broker get broken, the broker will buffer messages and
push them out to the subscriber when it is back online. Should the connection from
the publishing client to the broker be disconnected without notice, the broker can
close the connection and send subscribers a cached message with instructions from the
publisher.
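As an illustration of the publish/subscribe pattern described above, here is a minimal sketch using the Eclipse Paho MQTT C client's synchronous API (the library choice, broker address, client ID and topic names are assumptions, not prescribed by the notes; check the option fields against the Paho documentation).

    #include <string.h>
    #include "MQTTClient.h"

    int main(void)
    {
        MQTTClient client;
        MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
        MQTTClient_message msg = MQTTClient_message_initializer;
        MQTTClient_deliveryToken token;

        /* Create a client and connect to a (placeholder) broker. */
        MQTTClient_create(&client, "tcp://broker.example.com:1883", "sensor-42",
                          MQTTCLIENT_PERSISTENCE_NONE, NULL);
        opts.keepAliveInterval = 20;
        opts.cleansession = 1;
        if (MQTTClient_connect(client, &opts) != MQTTCLIENT_SUCCESS)
            return 1;

        /* Publish a telemetry reading to a hierarchical topic. */
        msg.payload = "21.5";
        msg.payloadlen = (int)strlen("21.5");
        msg.qos = 1;
        msg.retained = 0;
        MQTTClient_publishMessage(client, "home/livingroom/temperature", &msg, &token);
        MQTTClient_waitForCompletion(client, token, 10000L);

        /* Subscribe to every sensor under "home/" using a multi-level wildcard. */
        MQTTClient_subscribe(client, "home/#", 1);

        MQTTClient_disconnect(client, 10000);
        MQTTClient_destroy(&client);
        return 0;
    }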
Connection-oriented protocols (COPs) guarantee sequential data delivery but are classed as an unreliable network service
because there is no process to ensure that total data received is the same as what was sent.
COPs provide circuit-switched connections or virtual circuit connections in packet-switched
networks (PSN).
27.How does the use of both segmentation and paging reduce memory usage
requirements.
Ans: Segmentation - use segments that occur naturally in a program (code, stack, data, . . .) or
artificially introduce the segments. Segments are of different sizes. Each segment is
contiguous in memory, but the separate segments need not be contiguous with each other.
Segments cleanly separate different areas of memory
e.g., Code, data, heap, stack, shared memory regions, etc.
Use different segment registers to refer to each portion of the address space
Pages are all the same size. Program code is divided into pages and placed into memory in page
frames of the same size as the pages. A program can be placed into page frames that are not
contiguous in memory. There can be internal fragmentation in this case: internal
fragmentation occurs because of the leftover space in the last page frame, which the last program
page is put into and does not completely fill.
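A small worked example (with made-up numbers): if a code segment is 10,250 bytes and page frames are 4,096 bytes, the segment needs three frames. The last frame holds only 10,250 - 2 × 4,096 = 2,058 bytes, so 4,096 - 2,058 = 2,038 bytes of that frame are wasted as internal fragmentation; the three frames themselves, however, can be placed anywhere in physical memory.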
29. Certification
A certificate, or secure certificate, is a file installed on a secure Web server that identifies a
website. This digital certificate establishes the identity and authenticity of the company or
merchant so that online shoppers can trust that the website is secure and reliable. In order to
verify that these sites are legitimate the companies and their websites are verified by a third
party.
Once the verification company establishes the legitimacy of an organization and the
associated website, they will issue an SSL certificate. This digital certificate is installed on
the Web server and will be viewable when a user enters a secure area of the website.
30. Repurposing/Recycling
For the waste management and recycling industry, machine-to-machine is very common right
now. M2M is a broad label and a subset of the IoT that describes how wireless devices can
capture data, exchange information, and perform actions without human assistance.
IoT is a development that gives networked connections to objects so that they can send and
receive data. For your business or corporation, that means you can manage, monetize,
operate, and/or extend the value of your operation. And guess what? That includes recycling
and waste management. Some of the possibilities:
It's possible now to use IoT technology to gather real-time data on your recyclables –
valuable company assets – to know how to best manage and monetize them. Gather real-time
data on your waste stream to reduce fees. This is happening now. For example, through the
use of IoT technology:
Residential waste volume in Cincinnati fell 17% and recycling volume grew by 49%
Finland saw a 40% reduction in waste fees
Ans: A dynamic IP address is an IP address that is assigned automatically by the system to a device,
account or user when it is connected to the network; that is, it is assigned as needed rather than in advance.
Dynamic IP addresses are assigned by the dynamic host configuration protocol (DHCP), which is one of
the key protocols in the TCP/IP protocol suite. Dynamic IP addresses contrast with static IP addresses,
which are assigned manually and semi-permanently to a device, account or user. With dynamic addressing,
a computer, account, etc. will typically have a different IP address every time it connects to the network. In
some systems, the device's IP address can change even while it is still connected to the network. The main
advantage of dynamically assigning IP addresses is that it allows them to be reused, thereby greatly
increasing the total number of computers and other devices that can use the Internet or other network.
Another advantage is enhanced security for individual users, because their IP address is different every time they connect.
In the traditional HTTP request/response model, there was no mechanism for the server to independently send, or push, data to the
client without the client first making a request. To overcome this deficiency, Web
app developers can implement a technique called HTTP long polling, where the
client polls the server requesting new information. The server holds the request
open until new data is available. Once available, the server responds and sends the
new information. When the client receives the new information, it immediately sends
another request, and the operation is repeated. This effectively emulates a server
push feature.
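A minimal sketch of the client side of long polling, written with libcurl (an assumption; the URL is a placeholder). Each request blocks until the server replies with new data or the timeout expires, and the client then immediately issues the next request.

    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl == NULL)
            return 1;

        /* Placeholder endpoint that holds the request open until data is ready. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/updates");
        /* Give up and retry if the server holds the connection longer than 60 s. */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L);

        for (;;) {
            /* Blocks until the server responds; the body goes to stdout by default. */
            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK && res != CURLE_OPERATION_TIMEDOUT)
                break;                 /* real error: stop polling */
            /* On success or timeout, loop round and poll again immediately. */
        }

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }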
SENSORS
Pushbuttons and switches, which are probably the simplest sensors, allow some user input.
Potentiometers (both rotary and linear) and rotary encoders enable you to measure movement.
Sensing the environment is another easy option. Light-dependent resistors (LDRs) allow
measurement of ambient light levels, thermistors and other temperature sensors allow you to know
how warm it is, and sensors to measure humidity or moisture levels are easy to build. Microphones
obviously let you monitor sounds and audio, but piezo elements (used in certain types of
microphones) can also be used to respond to vibration. Distance-sensing modules, which work by
bouncing either an infrared or ultrasonic signal off objects, are readily available and as easy to
interface to as a potentiometer.
ACTUATORS
One of the simplest and yet most useful actuators is light, because it is easy to create
electronically and gives an obvious output. Light-emitting diodes (LEDs) typically come in red and
green but also white and other colours. RGB LEDs have a more complicated setup but allow you to
mix the levels of red, green, and blue to make whatever colour of light you want. More complicated
visual outputs also are available, such as LCD screens to display text or even simple graphics.
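A minimal Arduino-style sketch (Arduino sketches use the platform's C/C++ dialect; the pin choices here are assumptions) tying a sensor and an actuator together: it reads an LDR on an analog pin and drives an LED brighter as the ambient light falls.

    /* LDR in a voltage divider on A0, LED on PWM-capable pin 9 (assumed wiring). */
    const int LDR_PIN = A0;
    const int LED_PIN = 9;

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      Serial.begin(9600);          /* report readings over the serial port */
    }

    void loop() {
      int light = analogRead(LDR_PIN);              /* 0..1023 */
      Serial.println(light);
      /* The darker it gets, the brighter the LED: map 0..1023 onto 255..0. */
      analogWrite(LED_PIN, map(light, 0, 1023, 255, 0));
      delay(100);
    }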
MICROCONTROLLERS
Internet of Things devices take advantage of more tightly integrated and
miniaturised solutions—from the most basic level of microcontrollers to more powerful system-on-
chip (SoC) modules. These systems combine the processor, RAM, and storage onto a single chip,
which means they are much more specialised, smaller than their PC equivalents, and also easier to
build into a custom design. These microcontrollers are the engines of countless sensors and
automated factory machinery. They are the last bastions of 8-bit computing in a world that’s long
since moved to 32-bit and beyond. Microcontrollers are very limited in their capabilities—which is
why 8-bit microcontrollers are still in use, although the price of 32-bit microcontrollers is now
dropping to the level where they’re starting to be edged out. Usually, they offer RAM measured in kilobytes rather than megabytes.
SYSTEM-ON-CHIPS
In between the low-end microcontroller and a full-blown PC sits the SoC (for
example, the BeagleBone or the Raspberry Pi). Like the microcontroller, these SoCs combine a
processor and a number of peripherals onto a single chip but usually have more capabilities. The
processors usually range from a few hundred megahertz, nudging into the gigahertz for top-end
solutions, and include RAM measured in megabytes rather than kilobytes. Storage for SoC modules
tends not to be included on the chip, with SD cards being a popular solution.
RAM
RAM provides the working memory for the system. If you have more RAM, you may be able to do
more things or have more flexibility over your choice of coding algorithm. If you’re handling large
datasets on the device, that could govern how much space you need. You can often find ways to
work around memory limitations, either in code or by handing off processing to an online service.
Networking
How your device connects to the rest of the world is a key consideration for Internet of Things
products. Wired Ethernet is often the simplest for the user—generally plug and play—and cheapest,
but it requires a physical cable. Wireless solutions obviously avoid that requirement but introduce a
more complicated configuration. WiFi is the most widely deployed to provide an existing
infrastructure for connections, but it can be more expensive and less optimized for power
consumption than some of its competitors.
USB
If your device can rely on a more powerful computer being nearby, tethering to it via USB can be an
easy way to provide both power and networking. You can buy some of the microcontrollers in
versions which include support for USB, so choosing one of them reduces the need for an extra chip
in your circuit. Instead of the microcontroller presenting itself as a device, some can also act as the
USB “host”. This configuration lets you connect items that would normally expect to be connected to
a computer—devices such as phones, for example, using the Android ADK, additional storage
capacity, or WiFi dongles.
The Electric Imp has great potential to find a niche that we weren’t even aware existed: a hosted,
walled garden, which is convenient to develop for, and where a commercial entity is responsible for
making the right decisions on behalf of its users, who, in return for this capability, happily sacrifice
control over some of the finer details. At the time of writing, the platform doesn’t feel quite ready
yet, but it is one of the more exciting developments, and it, or something like it, may well become
the face of a swathe of Internet of Things development in the future.
The Arduino is the established and de facto choice, and for good reason. It is unrivalled in the wealth
of support and documentation, and its open nature makes it easy to extend and incorporate into a
finished product. The only downside to the Arduino is in its more limited capabilities. This simplicity
is a boon in most physical computing scenarios, but with many Internet of Things applications
requiring good security, the base Arduino Uno platform is looking a little stretched.
The Linux-based systems of the Raspberry Pi and BeagleBone will give you all the processing power
and connectivity that you will need, but at the cost of additional complexity and, should you want to
take the system into mass production, an additional cost per unit. There isn’t much to choose
between the Pi and the BeagleBone Black when it comes to cost for a prototype, so your choice is
more likely to be down to other factors.
The Raspberry Pi has a much higher profile and a better community around it, but the BeagleBone
Black's greater capability to interface to electronics and its easier route to manufacture mean that,
in our opinion, the BeagleBone Black has the edge.
To make it easier to refer to and refresh your memory, revisit the main points here:
◾ Move as much data and so forth as possible into flash memory or ROM rather than RAM, because the latter tends to be in shorter supply.
◾ If items aren’t going to change, make them constant. This makes it easier to move them into flash/ROM and lets the compiler optimise the code better (a small sketch follows this list).
◾ If you have only tiny amounts of memory, favour use of the stack over the heap.
◾ Choose your algorithm carefully. A single-pass algorithm enables you to process much more data than reading it all into memory, and iterative rather than recursive options make memory use deterministic.
◾ For best power usage, spend as much time as possible asleep.
◾ If you aren’t using it (whatever “it” is), turn as much of it off as you can. This advice applies to the processor (drop into low-power mode) as much as to other subsystems of the hardware.
◾ Optimisations can live on the server side as well as in the device. A nonpolling or reduced amount of data transferred improves both sides of the solution.
◾ Avoid premature optimisation. If you hit performance problems, profile to work out where the issues lie.
◾ Copying memory is expensive, so try to do as little of it as you can.
◾ Work with the compiler, rather than against it. Order your code to help the likely execution path and use constants to help it optimise.
◾ Choose libraries carefully. One from a standard operating system might not be a good choice for a more embedded environment.
◾ Tools such as gdb and JTAG are useful when debugging, but you can get a long way with just outputting text to a serial terminal or flashing an LED.
◾ Careful observation of the environment surrounding the device can help you sniff out problems, particularly when it’s connected to the Internet and so interacting with the wider world.
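As a small illustration of the first three points, here is an AVR-specific sketch (assuming avr-gcc and <avr/pgmspace.h>, which are assumptions, not something the notes prescribe). It keeps a constant string in flash and copies it into a short-lived stack buffer only when needed, so neither a permanent RAM copy nor a heap allocation is required.

    #include <avr/pgmspace.h>
    #include <string.h>

    /* Stored in flash (program memory), not in scarce RAM. */
    static const char greeting[] PROGMEM = "hello, world";

    static void copy_greeting(char *buf, size_t buflen)
    {
        /* memcpy_P reads out of flash; only the caller's buffer uses RAM. */
        size_t n = sizeof(greeting) < buflen ? sizeof(greeting) : buflen;
        memcpy_P(buf, greeting, n);
        buf[n - 1] = '\0';
    }

    static void use_greeting(void)
    {
        char buf[16];   /* stack allocation: released automatically, no heap fragmentation */
        copy_greeting(buf, sizeof(buf));
        /* ... use buf, e.g. write it to a serial port ... */
    }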