
Libyan Academy of Graduate Studies

School of Engineering and Applied Science


Department of Electrical and Computer Engineering
Emerging network-ITE620

Chapter 7: Other emerging technologies

 
 
Student Name: ABDULHADI ELHABROUSH
Student Registration No.: 210100414
Under the supervision of Prof. ALBAHLUL AL-FGEE
Chapter 7: Other emerging technologies

01 Nanotechnology.

02 Biotechnology

03 Blockchain technology.
04 Cloud and quantum computing.

05 Autonomic computing (AC)

06 Computer vision.

07 Embedded systems

08 Cybersecurity
09 Additive manufacturing (3D Printing)
1. Nanotechnology

Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers. Nanoscience and nanotechnology are the study and application of extremely small things and can be used across all the other science fields, such as chemistry, biology, physics, materials science, and engineering.
History

The ideas and concepts behind nanoscience and nanotechnology were first introduced by physicist Richard Feynman at an American Physical Society meeting at the California Institute of Technology (CalTech) on December 29, 1959.

The invention of the scanning tunneling microscope in 1981 and the discovery of fullerene (C60) in 1985 led to the emergence of nanotechnology.

Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules.

Physicist Richard Feynman


Applications of nanotechnology

Medicine: customized nanoparticles the size of molecules can deliver drugs directly to diseased cells in your body.

Electronics: nanotechnology offers some answers for how we might increase the capabilities of electronic devices while reducing their weight and power consumption.

Food: nanotechnology has an impact on several aspects of food science, from how food is grown to how it is packaged. Companies are developing nanomaterials that will make a difference not only in the taste of food but also in food safety and the health benefits that food delivers.

Agriculture: nanotechnology could change the entire agricultural sector and food industry chain, from production to preservation, processing, packaging, transportation, and even waste treatment.

Vehicle manufacturing: much like aviation, lighter and stronger materials will be valuable for making vehicles that are both faster and safer. Combustion engines will likewise benefit from parts that are more hard-wearing and resistant to higher temperatures.
2. Biotechnology

Biotechnology is the broad area of biology that uses living systems and organisms to develop or make products, or "any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use".

One example of modern biotechnology is genetic engineering. Genetic engineering is the process of transferring individual genes between organisms or modifying the genes in an organism to remove or add a desired trait or characteristic.
History

 When he coined the term in 1919, the agriculturalist Karl Ereky described 'biotechnology' as "all lines of work by which products are produced from raw materials with the aid of living things."

 In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans.
Application of biotechnology

1- Agriculture (Green Biotechnology): Biotechnology has contributed a lot to modifying the genes of organisms, producing what are known as Genetically Modified Organisms (GMOs), such as crops, animals, plants, fungi, bacteria, etc. Genetically modified crops are formed by the manipulation of DNA to introduce a new trait into the crops. These manipulations are done to introduce traits such as pest resistance, insect resistance, weed resistance, etc.

2- Medicine (Medicinal Biotechnology): This helps in the formation of genetically modified insulin known as Humulin, which helps in the treatment of a large number of diabetes patients. It has also given rise to a technique known as gene therapy.
Gene therapy is a technique to correct a genetic defect in an embryo or child. This technique involves the transfer of a normal gene that works over the non-functional gene.

3- Aquaculture and Fisheries: Biotechnology helps in improving the quality and quantity of fish. Through biotechnology, fish are induced to breed via gonadotropin-releasing hormone.
Application of biotechnology

4- Environment (Environmental Biotechnology): Environmental biotechnology is used in waste treatment and pollution prevention. It can clean up many wastes more efficiently than conventional methods and greatly reduce our dependence on methods for land-based disposal. Every organism ingests nutrients to live and produces by-products as a result.
3. Blockchain technology

Originally, a blockchain is a growing list of records, called blocks, that are linked using cryptography.

Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree).

A blockchain is, in the simplest of terms, a time-stamped series of immutable records of data that is managed by a cluster of computers not owned by any single entity.

Each of these blocks of data (i.e. block) is secured and bound to each other using cryptographic principles (i.e. chain). "Blocks" on the blockchain are made up of digital pieces of information. Specifically, they have three parts:
Parts of blockchain

• Blocks store information about transactions, like the date, time, and dollar amount of your most recent purchase.

• Blocks store information about who is participating in transactions.

• Blocks store information that distinguishes them from other blocks (see the short sketch below).
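As a minimal, hedged sketch (not part of the original slides), the three parts above can be captured in a small Python structure. The field names and the use of SHA-256 are illustrative assumptions, not a description of any specific blockchain implementation.

import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Block:
    # Part 1: transaction details (date, time, amount)
    timestamp: str
    amount: float
    # Part 2: who is participating in the transaction
    sender: str
    receiver: str
    # Part 3: data that distinguishes this block from others
    prev_hash: str          # cryptographic hash of the previous block

    def hash(self) -> str:
        """Return the SHA-256 hash of this block's contents."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Example: two blocks linked by their hashes
genesis = Block("2008-10-31T00:00", 0.0, "-", "-", prev_hash="0" * 64)
second = Block("2009-01-09T12:00", 10.0, "Alice", "Bob", prev_hash=genesis.hash())
print(second.prev_hash == genesis.hash())  # True: the chain link holds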


History of blockchain

 The first work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta. They wanted to implement a system in which document timestamps could not be tampered with.

 In 1992, Bayer, Haber, and Stornetta incorporated Merkle trees into the design, which improved its efficiency by allowing several document certificates to be collected into one block.

 The first blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008.

 Nakamoto improved the design in an important way, using a Hashcash-like method to add blocks to the chain without requiring them to be signed by a trusted party. The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network.
Blockchain Explained

 A blockchain carries no transaction cost. (An infrastructure cost yes, but no transaction cost.) The blockchain is a
simple yet ingenious way of passing information from A to B in a fully automated and safe manner. One party to a
transaction initiates the process by creating a block. This block is verified by thousands, perhaps millions of
computers distributed around the net. The verified block is added to a chain, which is stored across the net, creating
not just a unique record, but a unique record with a unique history. Falsifying a single record would mean falsifying
the entire chain in millions of instances. That is virtually impossible. Bitcoin uses this model for monetary
transactions, but it can be deployed in many other ways.
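The falsification argument above can be made concrete with a short, hedged sketch: if any record in the chain is altered, every stored prev_hash downstream stops matching. The record contents and hashing scheme below are assumptions for demonstration only.

import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's full contents, including its link to the previous block
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def verify_chain(chain: list) -> bool:
    """Check that each block's prev_hash matches the actual hash of its predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

chain = [{"data": "A pays B 5", "prev_hash": "0" * 64}]
chain.append({"data": "B pays C 2", "prev_hash": block_hash(chain[-1])})
chain.append({"data": "C pays D 1", "prev_hash": block_hash(chain[-1])})

print(verify_chain(chain))          # True
chain[0]["data"] = "A pays B 500"   # falsify one record
print(verify_chain(chain))          # False: the tampering is detected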
4. Cloud and quantum computing

Cloud computing is a means of networking remote servers that are hosted on the Internet. Rather than storing and processing data on a local server, or a PC's hard drive, one of three types of cloud infrastructure (public, private, or hybrid cloud) is used.

Quantum computers truly do represent the next generation of computing. Unlike classic computers, they derive their computing power by harnessing the power of quantum physics. Because of the rather nebulous science behind it, a practical, working quantum computer still remains a flight of fancy.
Advantages of cloud computing

 Cost efficiency: because a cloud provider's hardware and software are shared, there's no need for the initial costly capital investment.

 No hardware required.

 The cloud allows you and multiple users to access your data from any location.

 Maintenance is much cheaper.
Advantages of quantum computing

 Their gargantuan computing power would allow them to crunch very long
numbers.
 They would be able to make complex calculations that would only
overwhelm classic computers.
 Accessing a cloud-based quantum computer combines the benefits of
both technologies exponentially.
 Quantum computing could help in the discovery of new drugs, by
unlocking the complex structure of chemical molecules.
 Other uses include financial trading, risk management, and supply chain
optimization
5. Autonomic computing (AC)

Autonomic computing (AC) is an approach to address the complexity and evolution problems in software systems. It is a self-managing computing model named after, and patterned on, the human body's autonomic nervous system. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.
Characteristics of Autonomic Systems

 Self-Awareness: An autonomic application/system "knows itself" and is aware of its state and its behaviors.

 Self-Configuring: An autonomic application/system should be able to configure and reconfigure itself under varying and unpredictable conditions.

 Self-Optimizing: An autonomic application/system should be able to detect suboptimal behaviors and optimize itself to improve its execution.

 Self-Healing: An autonomic application/system should be able to detect and recover from potential problems and continue to function smoothly.

 Self-Protecting: An autonomic application/system should be capable of detecting and protecting its resources from both internal and external attacks and maintaining overall system security and integrity.
Characteristics of Autonomic Systems

 Context-Aware: An autonomic application/system should be aware of its execution environment and be able to react to changes in the environment.

 Open: An autonomic application/system must function in a heterogeneous world and should be portable across multiple hardware and software architectures. Consequently, it must be built on standard and open protocols and interfaces.

 Anticipatory: An autonomic application/system should be able to anticipate, to the extent possible, its needs and behaviors and those of its context, and be able to manage itself proactively (a small illustrative sketch of such a self-managing loop follows).
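As a hedged illustration only (not from the slides), the self-monitoring and self-healing characteristics above are often realized as a monitor-analyze-act control loop. The metric source, threshold, and recovery action below are invented for the sketch.

import random
import time

def read_cpu_load() -> float:
    # Stand-in for a real metric source; returns a simulated CPU load in [0, 1]
    return random.random()

def restart_worker() -> None:
    # Placeholder recovery action (self-healing); a real system might restart a service
    print("recovery: restarting overloaded worker")

def autonomic_loop(cycles: int = 5, threshold: float = 0.8) -> None:
    """Minimal monitor-analyze-act loop: observe state, detect a problem, heal."""
    for _ in range(cycles):
        load = read_cpu_load()          # Monitor (self-awareness)
        if load > threshold:            # Analyze (detect suboptimal behavior)
            restart_worker()            # Act (self-healing / self-optimizing)
        else:
            print(f"ok: load={load:.2f}")
        time.sleep(0.1)                 # next control cycle

if __name__ == "__main__":
    autonomic_loop()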
6. Computer vision

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain a high-level understanding of digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions.

Computer vision is about building algorithms that can understand the content of images and use it for other applications.
History

The origins of computer vision go back to an MIT undergraduate summer project in 1966. It was believed at the time that computer vision could be solved in one summer, but we now have a 50-year-old scientific field that is still far from being solved.
Early experiments in computer vision took place in the 1950s, using some of the first neural networks
to detect the edges of an object and to sort simple objects into categories like circles and squares. In
the 1970s, the first commercial use of computer vision interpreted typed or handwritten text using
optical character recognition. This advancement was used to interpret written text for the blind. As
the internet matured in the 1990s, making large sets of images available online for analysis, facial
recognition programs flourished. These growing data sets helped make it possible for machines to
identify specific people in photos and videos.
How computer vision works

1. Acquiring an image: Images, even large sets, can be acquired in real time through video, photos or 3D technology for analysis.

2. Processing the image: Deep learning models automate much of this process, but the models are often trained by first being fed thousands of labeled or pre-identified images.

3. Understanding the image: The final step is the interpretative step, where an object is identified or classified (a short illustrative sketch of this three-step pipeline follows).
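A minimal sketch of the acquire, process, and understand steps, assuming OpenCV (cv2) is installed. The file name and the simple threshold-and-count "understanding" step are illustrative assumptions, not a production vision system.

import cv2  # OpenCV: pip install opencv-python

# 1. Acquiring an image (here from disk; could also come from video or a 3D sensor)
image = cv2.imread("sample.jpg")          # hypothetical input file
if image is None:
    raise FileNotFoundError("sample.jpg not found")

# 2. Processing the image: convert to grayscale and smooth out noise
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Understanding the image: a very crude interpretation step that
#    thresholds the image and counts distinct bright regions ("objects")
_, mask = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} candidate objects")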
Types of computer vision that are used in different ways:

 Image segmentation partitions an image into multiple regions or pieces to be examined separately.

 Object detection identifies a specific object in an image. Advanced object detection recognizes many objects in a single image: a football field, an offensive player, a defensive player, a ball and so on. These models use an X, Y coordinate to create a bounding box and identify everything inside the box.

 Facial recognition is an advanced type of object detection that not only recognizes a human face in an image but identifies a specific individual.
Types of computer vision that are used in different ways:

 Edge detection is a technique used to identify the outside edge of an object or landscape to better identify what is in the image (see the short sketch after this list).

 Pattern detection is a process of recognizing repeated shapes, colors and other visual indicators in images.

 Image classification groups images into different categories.

 Feature matching is a type of pattern detection that matches similarities in images to help classify them.
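A hedged example of edge detection using OpenCV's Canny detector; the file name and the two threshold values are assumptions chosen only for illustration.

import cv2  # OpenCV: pip install opencv-python

# Load an image in grayscale (hypothetical input file)
gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
if gray is None:
    raise FileNotFoundError("scene.jpg not found")

# Canny edge detection: the two thresholds control which gradients count as edges
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Save the resulting edge map so it can be inspected
cv2.imwrite("scene_edges.png", edges)
print("edge pixels:", int((edges > 0).sum()))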
Applications of computer vision

 Optical character recognition (OCR): reading handwritten postal codes on letters (Figure 7.5a) and automatic number plate recognition (ANPR).

 Machine inspection: rapid parts inspection for quality assurance, using stereo vision with specialized illumination to measure tolerances on aircraft wings or auto body parts.

 Retail: object recognition for automated checkout lanes.

 Medical imaging: registering pre-operative and intra-operative imagery; performing long-term studies of people's brain morphology as they age.
Applications of computer vision

 Automotive safety: detecting unexpected obstacles such as pedestrians on the street, under conditions where active vision techniques such as radar or lidar do not work well.

 Surveillance: monitoring for intruders, analyzing highway traffic.

 Fingerprint recognition and biometrics: for automatic access authentication as well as forensic applications.
7. Embedded systems

An embedded system is a controller with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today. Ninety-eight percent of all microprocessors manufactured are used in embedded systems.
Advantages and disadvantages of embedded systems

Advantages of embedded systems
➢ Easily customizable
➢ Low power consumption
➢ Low cost
➢ Enhanced performance

Disadvantages of embedded systems
➢ High development effort
➢ Longer time to market
Basic Structure of an Embedded System

Sensor − It measures the physical quantity and converts it to an electrical signal which can be read by an observer or by any electronic instrument such as an A-D converter. A sensor stores the measured quantity to the memory.

A-D Converter − An analog-to-digital converter converts the analog signal sent by the sensor into a digital signal.

Processor & ASICs − Processors process the data to measure the output and store it to the memory.

D-A Converter − A digital-to-analog converter converts the digital data fed by the processor to analog data.

Actuator − An actuator compares the output given by the D-A converter to the actual (expected) output stored in it and stores the approved output (a small sketch of this sensor-to-actuator flow follows).
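Embedded firmware is normally written in C, but as a language-consistent, hedged sketch the sensor, A-D converter, processor, D-A converter, and actuator stages above can be simulated in Python. The voltage range, 10-bit resolution, and control rule are illustrative assumptions.

import random

V_REF = 3.3          # assumed reference voltage of the A-D converter
ADC_BITS = 10        # assumed 10-bit converter (codes 0..1023)

def read_sensor() -> float:
    # Sensor: stand-in for a physical measurement, e.g. a temperature-dependent voltage
    return random.uniform(0.0, V_REF)

def adc(voltage: float) -> int:
    # A-D converter: quantize the analog voltage into a digital code
    return round(voltage / V_REF * (2 ** ADC_BITS - 1))

def process(code: int) -> int:
    # Processor: a trivial control law; drive full scale if the reading is above half range
    return (2 ** ADC_BITS - 1) if code > 512 else 0

def dac(code: int) -> float:
    # D-A converter: turn the digital output back into an analog drive voltage
    return code / (2 ** ADC_BITS - 1) * V_REF

def actuate(voltage: float) -> None:
    # Actuator: apply the drive voltage (here we just report it)
    print(f"actuator drive: {voltage:.2f} V")

for _ in range(3):                       # one pass per control cycle
    actuate(dac(process(adc(read_sensor()))))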
8. Cybersecurity

Cybersecurity is the protection of computer systems from the theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide.

Cybersecurity is often confused with information security, but it focuses on protecting computer systems from unauthorized access or being otherwise damaged or made inaccessible. Information security is a broader category that looks to protect all information assets, whether in hard copy or in digital form.
Cybersecurity measures

Staff awareness training: Human error is the leading cause of data breaches, so you need to equip staff with the knowledge to deal with the threats they face. Training courses will show staff how security threats affect them and help them apply best-practice advice to real-world situations.

Application security: Web application vulnerabilities are a common point of intrusion for cybercriminals. As applications play an increasingly critical role in business, it is vital to focus on web application security.

Network security: Network security is the process of protecting the usability and integrity of your network and data. This is achieved by conducting a network penetration test, which scans your network for vulnerabilities and security issues.

Leadership commitment: Leadership commitment is the key to cyber resilience. Without it, it is very difficult to establish or enforce effective processes. Top management must be prepared to invest in appropriate cybersecurity resources, such as awareness training.

Password management: Almost half of the UK population uses 'password', '123456' or 'qwerty' as their password. You should implement a password management policy that provides guidance to ensure staff create strong passwords and keep them secure (a small illustrative policy check follows).
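As a hedged sketch of such a policy in code: the minimum length, the character-class rule, and the short blocklist below are assumptions chosen for illustration, not a recommended standard.

import re

# Illustrative blocklist of the weak passwords named above
COMMON_PASSWORDS = {"password", "123456", "qwerty"}

def is_acceptable(password: str, min_length: int = 12) -> bool:
    """Return True if the password meets this sketch's illustrative policy."""
    if password.lower() in COMMON_PASSWORDS:
        return False
    if len(password) < min_length:
        return False
    # Require at least one lowercase letter, one uppercase letter, and one digit
    return all(re.search(p, password) for p in (r"[a-z]", r"[A-Z]", r"[0-9]"))

print(is_acceptable("qwerty"))              # False: on the blocklist
print(is_acceptable("Correct-Horse-42!"))   # True under this policy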
Types of cybersecurity threats

Ransomware: It is a type of malicious software. It is designed to extort money by blocking access to files or the computer system until the ransom is paid. Paying the ransom does not guarantee that the files will be recovered or the system restored.

Malware: It is a type of software designed to gain unauthorized access to or cause damage to a computer.

Social engineering: It is a tactic that adversaries use to trick you into revealing sensitive information. They can solicit a monetary payment or gain access to your confidential data. Social engineering can be combined with any of the threats listed above to make you more likely to click on links, download malware, or trust a malicious source.

Phishing: It is the practice of sending fraudulent emails that resemble emails from reputable sources. The aim is to steal sensitive data like credit card numbers and login information. It is the most common type of cyber-attack. You can help protect yourself through education or a technology solution that filters malicious emails.
Benefits of utilizing cybersecurity include:

➢ Business protection against malware, ransomware, phishing, and social engineering.
➢ Protection for data and networks.
➢ Prevention of unauthorized user access.
➢ Improved recovery time after a breach.
➢ Protection for end-users.
➢ Improved confidence in the product for both developers and customers.

Cybersecurity vendors
Vendors in cybersecurity fields will typically use endpoint, network, and advanced threat protection security as
well as data loss prevention. Three commonly known cybersecurity vendors include Cisco, McAfee, and Trend
Micro.
9. Additive manufacturing (3D Printing)

Additive manufacturing (AM) describes types of advanced manufacturing that are used to create three-dimensional structures out of plastics, metals, polymers and other materials that can be sprayed through a nozzle or aggregated in a vat. These constructs are added layer by layer in real time, based on a digital design. The simplicity and low cost of AM machines, combined with the scope of their potential creations, could profoundly alter global and local economies and affect international security.
