
Telescopes are a technology that uses light to make faraway objects, such as the Moon and the stars, visible to observers on Earth. Telescopes have been around for centuries and have contributed significantly to astronomy.

Telescopes as Mirror Technology:

There are two types of telescopes. The first uses a lens, called an objective lens, and is a refracting telescope. A refracting telescope uses the process of refraction, the bending of light as it travels from one medium to another. The other is a reflecting telescope, which contains a primary mirror and a flat (plane) secondary mirror. This type of telescope uses the process of reflection, in which light bounces off a smooth or plane surface. Both telescopes use curved surfaces to gather light from the night sky; the light is refracted or reflected according to its angle of incidence, and the shape of the secondary mirror or lens concentrates the light into the image we see when we look into a telescope.

Why are Telescopes considered Mirror Technology?

Although some telescopes use lenses, lenses create many problems. The most common issue with refracting telescopes is the phenomenon of chromatic aberration. A refracting telescope compares unfavourably with a reflecting telescope because it is easier to make mirrors smooth and free of impurities that can affect the light-gathering process, which is essential for science telescopes and observatories that require mirrors as large as a detached house. At that scale, using a mirror rather than a lens is more straightforward and practical, as in the James Webb Space Telescope, whose primary mirror is made of 18 segments.

Mirrors are much easier to work with than lenses. When lenses are used in a telescope, they tend to become heavier, larger and harder to hold in place, making them very vulnerable to damage and cracks. The glass also has to be made much thicker at larger sizes, which affects how the light is refracted and absorbed as it travels through the medium. Mirrors gather more light as they grow in size and do not cause as many issues.
Chromatic aberration is a problem for refracting telescopes, which contain lenses. Because different colours of light are refracted by different amounts, the different wavelengths have different focal lengths. Since the colours focus at different locations, the image becomes blurred, which can also cause magnification problems, and the final image shows colour fringes.
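As a rough illustration of why the fringes appear, here is a minimal sketch using the thin-lens (lensmaker's) relation; the refractive-index and curvature values are typical assumptions for crown glass, not figures from this report.

```python
def focal_length(n, r1, r2):
    """Thin-lens (lensmaker's) focal length for surface radii r1 and r2, in metres."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

radii = (0.5, -0.5)  # a biconvex lens with 0.5 m radius of curvature on each side

for colour, n in [("blue (486 nm)", 1.522),
                  ("yellow (589 nm)", 1.517),
                  ("red (656 nm)", 1.514)]:
    f = focal_length(n, *radii)
    print(f"{colour}: focal length ~ {f:.3f} m")
# Blue light focuses slightly closer to the lens than red, so no single focal
# plane is sharp for every colour -- the origin of the coloured fringes.
```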

How do Reflecting Telescopes work?

A reflecting telescope contains two mirrors: a primary mirror and a secondary mirror. The primary mirror sits near the bottom end of the telescope tube, and the secondary mirror sits near the eyepiece. These mirrors can be concave, convex, flat or a combination.

The image seen through a telescope is formed in an interesting way, by a concentration of light. The light first strikes the primary mirror, which is curved (concave in most designs). The light is reflected back up the tube rather than being brought to a focus in front of the tube opening. That is where the secondary mirror, a small plane mirror mounted near the top of the tube, comes in handy: as the light continues, it is deflected by 90 degrees into the focuser, where it enters the eyepiece concentrated enough to be viewable.
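As a rough illustration of the viewing step, here is a minimal sketch of how a reflector's magnification is usually estimated (objective focal length divided by eyepiece focal length); the focal lengths are assumed example values rather than figures from this report.

```python
# Magnification of a simple reflector: primary mirror focal length divided by
# eyepiece focal length. Both numbers below are illustrative assumptions.
primary_focal_length_mm = 1200.0   # focal length of the primary mirror
eyepiece_focal_length_mm = 25.0    # focal length of the eyepiece lens

magnification = primary_focal_length_mm / eyepiece_focal_length_mm
print(f"Magnification is roughly {magnification:.0f}x")  # about 48x
```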

There are many variations of this design. However, the Newtonian telescope is considered the most common reflector and is often regarded as a classic design that has been modified and adapted to suit various needs.

Discovery of the Technology:

Telescopes have been around for a few centuries. The earliest telescopes date from the early 1600s. In the early days, they were not used for space but for voyages, military tactics and ground-based distance viewing. In 1608, in The Hague, Netherlands, the lens maker Hans Lippershey first presented the design for a device that "aided in seeing faraway things as though nearby" using convex and concave lenses in a tube. The original device had a magnifying capability of only 3 or 4 times.
Galileo Galilei heard about this innovative design and proceeded to work on his own telescope in the summer of 1609. Galileo created telescopes that would later play essential roles in astronomy. His first design was an instrument with three times viewing power, and he gradually built up to a telescope that could magnify up to 9 times. His design was called the Galilean optical tube; the device used a combination of convex and concave lenses and was the first to be used for astronomy. He went on to confirm that the Earth and the other planets circled the Sun and to resolve the stars of the Milky Way. Later, he published many other discoveries in his book "The Starry Messenger."

Many designers would go on to create telescopes that would all contribute to science, astronomy
and technology.

The Newtonian design is essential to mention, as it is still a popular telescope today. In 1668 Isaac Newton created a reflector telescope with a concave primary mirror and a flat secondary mirror. Although he is not credited with first proposing a telescope that used mirrors rather than lenses, he was the first to build a working model of high quality. This design was much more straightforward to make and did not suffer from the distortion caused by chromatic aberration. His invention has allowed us to build much larger telescopes without that obstruction or distortion, some of which are used in major space projects today.

Key People:

Hans Lippershey: Born in 1570 in Middelburg, Netherlands. He was a lens maker and has been credited with inventing the first telescope in 1608. He brought his design to the States General of the Netherlands to have it patented. Although he was not awarded the patent, because the design could be copied too easily, the Dutch government paid him a hefty sum to make copies of his instrument.

Jacob Metius: Born in 1571 in Noord-Holland, Netherlands. He was a Dutch mathematician and a rival of Hans Lippershey. Only a few weeks after Lippershey filed his application for the telescope, Metius submitted his own patent for the device, which was likewise denied because of the simplicity of the design and how easily it could be copied.

Galileo Galilei: Born in Pisa, Italy, on February 15, 1564. He was an Italian astronomer and mathematician who made significant discoveries in astronomy. He is important because he is credited with creating the Galilean telescope, with which he was among the first to study outer space. He gathered evidence that the Earth and the other planets moved around the Sun, and he observed and described the moons of Jupiter, the rings of Saturn, the phases of Venus, sunspots, and the Moon's rough, uneven surface.

Johannes Kepler: Born on December 27, 1571, in Weil der Stadt in the Holy Roman Empire (modern Germany). He was a mathematician and astronomer who made significant improvements to telescope design. A few years after hearing about the inventions of Lippershey and Galileo, he began improving the telescope's design, producing the Keplerian telescope around 1611. His design used a convex eyepiece lens that gave viewers a much larger field of view and provided much higher magnification than the Galilean telescope.

Isaac Newton: Born in Lincolnshire, England, on December 25, 1642. He was a physicist and mathematician who played a significant role in optics and is widely known for creating the Newtonian telescope. He is responsible for building the first working reflector telescope, which used mirrors rather than lenses. This invention was revolutionary because it avoided the problem of colour dispersion, otherwise known as chromatic aberration. The design is still in use today and allowed for the building of larger telescopes that have made significant contributions to space exploration, astronomy and the study of physics.

John Hadley: Born April 16, 1682, in Hertfordshire, England. He was widely known for successfully improving reflecting telescopes, specifically the Newtonian telescope. His improvements allowed for stronger magnification with sufficient accuracy and power for astronomy.
Current Uses:

The telescope is used by astronomers for observing outer space, and it allows for viewing celestial objects such as stars, constellations and galaxies, to name a few. It can be used to study space, or amateurs can use it to view the natural wonders of the sky.

Astrophotography: Telescopes can be used by professionals and amateurs for astrophotography, the process of photographing objects in space using a telescope with a camera. An astrophotographer can capture beautiful images of stars, galaxies and other deep-sky celestial objects. It is possible to take pictures of faraway galaxies and nebulas.

Telescopes in Space: Telescopes can be helpful for astronomy and studying various cosmic objects. They can help give scientists a deeper understanding of regions of deep space and distant galaxies across the universe. The main advantage is that these telescopes sit above the Earth's atmosphere and are not affected by light distortion and other blocking effects caused by the Earth's atmosphere and surface. The atmosphere contains shifting pockets of air that show up as motion blur in ground-based telescopes and also cause the apparent twinkling of the stars.

By exploring these cosmic objects, astronomers and scientists can gain a deeper understanding of the beginnings of the Earth and answer questions about how and when the Earth was formed. They can also gain insight into how certain stars and galaxies first appeared.

The Earth's atmosphere absorbs or reflects many wavelengths of the electromagnetic spectrum. Space telescopes are a great tool for exploring and viewing visible, ultraviolet and infrared wavelengths that cannot reach the Earth's surface because of atmospheric absorption.

Here are a few different types of telescopes being used for space study. Combined with other modern technologies invented along the way, they can be used to understand the cosmic universe across the entire spectrum of light, including sources of invisible light.
Hubble Space Telescope: Launched in 1990 and named after astronomer Edwin Hubble, it is a sizeable space-based telescope. It sits in orbit around the Earth, far from the issues that affect an Earth-based telescope, such as obstruction by rain clouds, light pollution and atmospheric distortion. Images produced by the Hubble Space Telescope can be much clearer, brighter and more detailed. This optical observatory can reach some of the most distant areas of the universe to give us more information about unknown galaxies and stars.

The telescope is a large reflecting telescope that gathers light from objects in space. The light strikes the primary mirror and is reflected onward to a secondary mirror, but since this telescope is operated remotely in space, it would not make sense to have an eyepiece. Instead, it carries instruments: a faint-object camera, a wide-field planetary camera, and two spectrographs. The wide-field camera can take wide-field, high-resolution images of celestial objects; it is said that this camera can resolve detail roughly ten times finer than any Earth-based telescope. The faint-object camera can detect objects 50 times fainter than anything an Earth-based telescope could view. The spectrographs break light into its component wavelengths, much as a prism spreads light into a rainbow. The faint-object spectrograph gathers information about an object's chemical composition, while the high-resolution spectrograph can study distant ultraviolet light that cannot reach the Earth's surface.

The Hubble Telescope has made significant contributions to the field of astronomy. Its accomplishments include the discovery of nearly 1,500 galaxies and of Nix and Hydra, two moons of Pluto.

Chandra X-Ray Observatory: This is an X-ray telescope operated by NASA. It is currently the best of its kind because it has eight times better image resolution and can detect X-ray sources close to twenty times fainter than any other X-ray telescope.

Launched in 1999 aboard the Space Shuttle, this X-ray observatory was named after Nobel prize winner Subrahmanyan Chandrasekhar. The Chandra X-Ray Observatory is a telescope that can detect an invisible form of light, X-rays, produced in the cosmos.
The telescope travels up to about 133,000 kilometres from the Earth, above the atmosphere that absorbs these electromagnetic waves. X-rays and other forms of radiation, such as gamma rays, are produced in space when stars burn or explode, forming elements such as sulphur, silicon and iron, and they require X-ray telescopes for viewing. This type of telescope can obtain X-ray images of celestial objects that show a side of space the human eye cannot see, such as massive explosions, black holes and neutron stars. X-ray telescopes add another dimension to objects that also give off visible light.

The Chandra X-Ray Observatory has an observing power about half a billion times greater than the first telescope created by Galileo. It has allowed a deeper understanding of black holes, supernovas and dark matter, and it gives scientists insight into the distribution of radiation and its role in the habitability of planets.

The Spitzer Space Telescope:

On August 25, 2003, the Spitzer observatory was launched into space and was used by NASA to study infrared wavelengths. The telescope was named after Lyman Spitzer, an American theoretical physicist. The spacecraft was decommissioned on January 30, 2020. It is in orbit around the Sun, placed at a distance where it does not pick up interfering infrared light from the Earth. Whereas X-rays are produced by extremely hot objects, cold operating temperatures are ideal for this observatory to measure infrared light. The Spitzer Space Telescope collects light emitted from colder objects and can identify molecules to determine the temperatures of the atmospheres of different planets. Other uses for this type of telescope are to see brown dwarfs (failed stars), extrasolar planets, giant molecular clouds and organic molecules.

Plans for the Future:

James Webb Space Telescope:

It is the largest and most powerful telescope NASA has built to date: a ten-billion-dollar infrared observatory that will pick up where the Hubble Space Telescope left off.

Although it was launched on December 25, 2021, it is still in the early stages of its mission and has yet to accomplish any significant milestones in its early career.

The James Webb Telescope is NASA's largest and most powerful science telescope. It is a large infrared telescope with a primary mirror consisting of 18 segments, around 6.5 meters wide in total. This telescope has a larger aperture, diffraction-limited image quality and an infrared sensitivity not available in any space or ground telescope currently in existence. One of the spacecraft's objectives is to understand the early origins of the universe by studying a distant galaxy's infrared signature to determine its age. This telescope will play an essential role in determining how stars are formed by studying stages of stellar evolution and examining the dense, cold cloud cores where stars first form. Other objectives include understanding how planets are formed and whether there is a possibility of life.

Giant Magellan Telescope:

Currently under construction in the Chilean Andes at the Las Campanas Observatory and planned to be ready around 2029, the Giant Magellan Telescope will be among the largest telescopes in the world when finished. It contains seven mirrors and will stand about 65 meters high; each mirror is set to be 8.4 meters in diameter, providing a light-collecting area of roughly 368 square metres. Chile has ideal weather conditions, with dry, clear skies for most of the year. These weather conditions and geographical advantages will give astronomers ideal conditions for viewing space objects.
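For a sense of scale, here is a back-of-the-envelope sketch of the combined mirror area; it simply sums seven circles and therefore slightly overstates the usable collecting area quoted above.

```python
import math

# Combined area of seven 8.4 m circular mirrors, ignoring the gaps between
# segments and the central obscuration.
n_mirrors, diameter_m = 7, 8.4
total_area_m2 = n_mirrors * math.pi * (diameter_m / 2) ** 2
print(f"Total mirror area is about {total_area_m2:.0f} square metres")  # ~388 m^2
```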

This Earth-based telescope will be paired with a spectrograph, a tool that can take incoming signals and separate them into their component wavelengths. This tool will allow scientists to analyze light and discover the properties of the materials interacting with it, and it will be able to detect visible and invisible forms of light. Harvard and Smithsonian researchers developed this instrument, which will measure the light spectrum with very high precision. According to the Center for Astrophysics, the instrument will help determine the mass of a planet and where liquid water could exist on a planet's surface. Other features include the ability to detect molecules, such as molecular oxygen, in the atmospheres of exoplanets.
Along with a powerful astronomical camera, the telescope will be able to counter the issues caused by the Earth's atmosphere using adaptive optics, which compensate for distortions caused by the air; flexible secondary mirrors change shape so that astronomers can capture clearer and sharper images.

It is expected to create images ten times sharper than the Hubble Space Telescope. One of this telescope's objectives is to study and observe the distant universe and look for interesting things, including signs of life. It will make contributions to astronomy by looking into the formation of planets and the chemical composition of other planets, like the blanketing atmospheres of Venus and Jupiter. With these new capabilities, it may help in the search for extraterrestrial life forms. Other objectives are to look at supernovas, white dwarfs and black holes and their effects on the space environments they inhabit, as well as the origins of galaxies that are unknown to us.

Nancy Grace Roman Space Telescope:

Set to launch in 2026 by NASA, it is a future infrared space observatory. It is predicted to have a panoramic view 100 times greater than Hubble's, creating the first wide-field maps from space. It is set to be located nearly 1.5 million kilometres from our planet. Originally the Wide Field Infrared Survey Telescope, it was renamed in honour of Nancy Grace Roman, NASA's first chief astronomer. It was built to study and understand space topics ranging from the mystery of dark energy to surveying planets beyond our solar system. Other objectives include observing stars, black holes and other features of faraway galaxies.

This telescope is expected to build on the findings of previous space observatories like the James Webb Telescope and the aging Hubble Telescope by using a combination of wide-field imaging and high-resolution spectroscopy. This will allow scientists to view galaxies over long intervals of cosmic time, allowing for a greater understanding of how galaxies are shaped and moulded over time. Through galaxy studies, it will give astronomers and scientists a deeper understanding of the mysteries of dark matter. Just as X-ray and infrared spectrography add more layers to the understanding of astronomy, wide-view imaging will have a similar effect by casting a comprehensive view of space rather than focusing on one specific object.
Issues With Telescopes:

Although the technology has improved significantly since the early days of refracting telescopes, telescopes, particularly common consumer telescopes, still have problems that can affect the image.

These issues remain common today; on larger scientific telescopes they are often corrected. Some issues include:

Spherical aberration

Astigmatism

Coma

Chromatic aberration

All of these cause images to come out blurred or distorted in some way. The failure of the light to focus correctly is due to an issue with the angle of incidence or an incorrect curvature of the convex or concave lens or mirror.

When first launched, the Hubble Space Telescope ran into problems. The telescope suffered from spherical aberration and produced blurry images because the primary mirror had been ground to an incorrect curvature. A team of astronauts and scientists later installed small corrective mirrors to refocus the light coming from the primary mirror.

Parts of a Reflector Telescope:


Telescopes come in different shapes and sizes with many variations, but all share the same fundamental design and parts.

Although there are many different designs, the Newtonian design is the simplest and easiest, and it is the one treated as essential here.

Telescope Tube: This is the outer shell of the Telescope and holds the entire instrument to-
gether.

Primary Mirror: A concave mirror that gathers light from the source.

Secondary Mirror: A plane mirror that redirects the light towards the eyepiece.

Eyepiece lens: Sometimes contains a magnification lens; this is where the image is viewable.

Name of Invention: Camera

How does it work?

Cameras work much like the human eye. The appeal of this lens technology is that it makes it possible to capture what is seen as a picture on a page, like a drawing but more realistic and lifelike. The device controls the amount of light that enters the camera and bends (refracts) it to a single sharp focal point using a combination of convex and concave lenses.

Cameras need lenses because without them the device would record only a wash of unfocused light. Every camera lens has a focal length; these numbers are displayed on the rim of the lens and can be chosen according to the photographer's needs. The focal length is the distance at which the light becomes most focused, centring the subject in the image. The greater the focal length, the higher the magnification: for example, a focal length of 24 mm gives less magnification than a focal length of 200 mm. Since the focal length of a given lens cannot be changed, focusing a camera lens is done by changing the image distance, moving the lens slightly relative to the film or sensor until the subject is sharp.
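To make the focusing idea concrete, here is a minimal sketch of the thin-lens relation 1/f = 1/d_o + 1/d_i; the 50 mm focal length and the subject distances are illustrative assumptions, not values from this report.

```python
def image_distance(focal_length_mm, object_distance_mm):
    """Solve 1/f = 1/d_o + 1/d_i for the image (lens-to-sensor) distance d_i."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

f = 50.0  # a 50 mm lens
for d_o in (1_000.0, 5_000.0, 100_000.0):  # subject at 1 m, 5 m and 100 m
    d_i = image_distance(f, d_o)
    print(f"subject at {d_o / 1000:.0f} m -> lens-to-sensor distance about {d_i:.2f} mm")
# The farther the subject, the closer d_i gets to the focal length itself, which
# is why distant scenes come into focus near the lens's marked focal length.
```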

The light enters through the front of the camera into the aperture, at which point it passes through the lens and forms a real image. Afterwards, this image is recorded on film or a sensor; in modern cameras, it is easy to adjust these properties to bring the image into focus.

Film cameras work by sending light to a film strip, but in cameras such as DSLRs and mirrorless cameras, the lens sends the light to a digital sensor.

What is Focal Length:

Just as with the human eye, objects in the distance stay small until we walk toward them. In a camera, focal length is the distance from the lens's point of convergence to the sensor recording the image. The longer the focal length, the narrower the angle of view.

More about the aperture:

Aperture is what determines a camera's depth of field. It describes the opening in the lens and how big it is: the larger the opening, the shallower the depth of field. Aperture is expressed in f-stops, and the larger the f-stop number, the smaller the opening. For example, f/2.8 lets in more light than f/4 or f/11.
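A short sketch of the relation behind those numbers, N = f / D (focal length divided by aperture diameter); the focal length and opening sizes are assumed example values.

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """f-number N = focal length / aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

focal = 50.0  # mm
for diameter in (17.9, 12.5, 4.5):  # opening diameters in mm
    print(f"opening {diameter} mm -> f/{f_number(focal, diameter):.1f}")
# Larger openings give smaller f-numbers (around f/2.8 here), admit more light,
# and produce a shallower depth of field than f/4 or f/11.
```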
Types of Camera Lenses:

Most camera lenses today fall into two types, prime and zoom lenses; many specialty lenses are variations of one or the other.

Prime Lenses:

These lenses are faster and sharper and produce better image quality. They are very portable because of how lightweight they are. The downside is that the focal length is fixed, so there is no zoom to reframe the subject.

Zoom Lenses:

These camera lenses are more versatile and allow different focal lengths from a single lens. Although flexible, they are slower and tend to be bigger and heavier because of how much glass is contained within them. They are best suited to professionals and photography enthusiasts who do not need to move from place to place.

Variety Of Lenses:

Macro Lenses:

These lenses are great for close-up photographs and can produce sharp images at close range, allowing the camera to capture more detail in small subjects such as bugs, birds and flowers. This type of lens is common in nature photography.

Telephoto Lenses:
These lenses are a type of zoom lens with longer focal lengths that produce a narrower field of view, so they can isolate a subject located far away. A telephoto lens can focus on distant objects without being close to them. Although this type of lens is heavy and expensive, it remains popular for sports and wildlife photography.

Wide Angle Lenses:

This type of lens can fit a large area into the frame. Everything stays in focus unless the subject is too close to the lens. It is popular for landscape, street and architecture photography.

Standard Lenses:

Standard lenses have a focal length between 35 mm and 85 mm. This lens can be used as a general-purpose lens for a variety of photo projects, and standard lenses are great for beginners.

Parts of a Basic Camera:

Convex Lenses:

Convex lenses form real, inverted images on the film or sensor. The image is produced when the object's distance exceeds the focal length.

Real image:

The image physically exists, can be recorded on film, and is inverted by the lens.

Virtual Image:
It is a false image that can be seen but not recorded on film.

Diaphragm or F-Stop:

This controls the amount of light that reaches the final image. The smaller the f-stop number, the brighter the image and the faster the lens. The f-stop is the ratio of the focal length to the diameter of the lens opening; this ratio is written as f/D and marked on an ordinary camera lens.

Shutter Speed:

The length of time the shutter is open affects the amount of light that reaches the film. Faster shutter speeds admit less light and can reduce blur in images; slower speeds are needed when the amount of light is small. Shutter speed is expressed in seconds or fractions of a second: the shorter the opening time (the faster the shutter speed), the less light hits the film.
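A small sketch of that trade-off, assuming example shutter speeds; it only shows that the light gathered scales with the time the shutter stays open.

```python
base_time_s = 1 / 60   # reference shutter speed of 1/60 s
base_light = 1.0       # relative amount of light collected at the reference speed

for time_s in (1 / 60, 1 / 125, 1 / 250):
    relative_light = base_light * (time_s / base_time_s)
    print(f"shutter open {time_s:.4f} s -> relative light about {relative_light:.2f}")
# Halving the open time halves the light reaching the film or sensor, which is
# why fast shutter speeds freeze motion but need brighter scenes or wider apertures.
```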

Film:

Although film is rarely used any more, because modern cameras record images electronically with sensors, film was used to record the real image formed by the camera lens, and these images were later processed to produce pictures viewable on photo paper.

Film cameras are called analog cameras, and electronic cameras are called digital ones.

Discovery of the Technology:


The earliest form of the camera, and what started the photography revolution, dates to around 391 BC and the Chinese scholar Mozi, although it is unclear who actually invented this object, called the camera obscura. The name means dark room or dark chamber in Latin. The concept behind the device is a small box or room with light entering through a small hole; the opposite wall catches the projected image.

There are records of Aristotle and Greek architects using the camera obscura for light research and for some practical applications.

It was not until the 11th century that the Arab physicist Ibn al-Haytham published a book about optics and wrote extensively about a device in which light entered through a tiny hole into a darkened chamber.

Leonardo da Vinci's notebook, the Codex Atlanticus, contains thorough information and explanations of this device and how it works, with detailed diagrams.

This ancient technique existed for centuries, with people of prominent status and learning using, studying and examining the device for work on light.

However, it was not until 1816 that Joseph Nicéphore Niépce and his brother Claude set out to improve the camera obscura.

In 1826, the first working prototype was used to take the first photograph ever. Niépce took a picture of his home at Le Gras in France, called "View from the Window." He positioned a paper-like substance coated with silver chloride inside the camera obscura to create an image. He called these images retinas, and there were many problems with them: the image was not permanent, it took about eight hours to produce, and it began to blacken in daylight.
This French camera innovator worked on various chemical solutions to create a permanent image. He tried to create a positive image by using compounds that were bleached by light rather than blackened by it, working with a chemical solution containing salt and manganese-black iron oxide.

Over the following decade, he tried a variety of chemicals to develop the process he called heliography, or sun writing. Eventually, he created a solution of dissolved light-sensitive bitumen, a tar-like substance, which served as the light-recording layer. The bitumen was dissolved in oil of lavender and applied in a thin coating over a polished pewter plate to preserve as much of the image as possible. Although Niépce was able to create the first permanent image, further improvements were still required.

In 1829 Nicéphore Niépce partnered with a French painter called Louis-Jacques-Mandé Daguerre. At the time, Daguerre was well known for creating a theatre spectacle known as the diorama. Daguerre had used the camera obscura and improved its lens during the production of his art projects.

The duo shared the goal of reducing long exposure times and improving the camera obscura.

Niépce died in 1833, but Daguerre continued the work well after his death. Two years later, the daguerreotype process was created. This type of camera uses a chemical process to produce an image: a solution of silver iodide is formed over a silver-plated sheet of copper, and the plate is placed inside a camera. Afterwards, the exposed plate containing the image is held over warm mercury, whose vapour produces a chemical reaction that develops the image. Finally, the silver plate is washed to prevent further exposure.

In 1839 the first image from a daguerreotype camera was recorded. In that same year, the invention was bought by the French government to be shared with the public. Daguerre received 6,000 francs yearly, and Niépce's son Isidore received 4,000 francs yearly, for the rest of their lives.

Shortly after, the product became available for wealthy households in France.
Different Types of Cameras:

Calotypes:

In the 1830s an inventor named Henry Fox Talbot created and developed an early photographic process called the calotype.

The main benefit was that it took less time to produce images. It was a complex process: making an early version of the film required first soaking writing paper in a solution of water and table salt, and then brushing it lightly with a chemical called silver nitrate to form the finished sheet. Once light hit this paper, it created a chemical reaction that captured the image. The paper with the captured image was often waxed with tallow or beeswax to preserve the result. However, the images came out more blurred and were negative images.

The Mirror Camera:

Created by Alexander S. Wolcott, these cameras created images using a concave mirror in place of a lens, which produced brighter images and allowed much shorter exposures.

Film Camera:

The film camera was the first breakthrough moment for modern photography. In 1888 an American entrepreneur named George Eastman created the first Kodak camera.

This camera used a single roll of paper film and gradually moved towards using celluloid. These films captured a negative image like a calotype but were much sharper, like a daguerreotype, and with further innovation they took only seconds to expose. To create final images, the film with the captured images had to remain in the camera and be sent back to the Kodak Company for processing. The first Kodak could hold 100 pictures.

The Kodak Brownie was the first camera widely available to the middle class. Released in 1900, it was an American box camera that was simple to use and of decent quality, with a price point of only one dollar. The release of this camera helped popularize the use of cameras at birthdays, vacations and family gatherings.

35mm Film Camera:

Called 35mm or 135 film, it was released in 1934 by the Kodak film company. The name comes from the film being 35 mm wide; the frame had an aspect ratio of 1:1.5 with a height of 24 mm within the borders, and the film came in a cassette or roll.

The film was placed in a container or cassette to protect it from light. The photographer would load the film into the device and turn a handle to wind the film onto a spool inside the camera, known as the take-up spool. The handle would be wound again after each picture was taken.

Once the roll was finished, it would be wound back into the cassette. Each cassette contained a roll of 135 film with 36 exposures.

The Leica:

Invented in 1913, the Leica was the first camera with collapsible and detachable lenses. It was created at the firm of Ernst Leitz, a director at the Optical Institute, who was trained as a watchmaker and by 1930 was working with Oskar Barnack to create the perfect camera.
In that period, the company released a camera with a screw-mount attachment that allowed the user to change between three lenses, called the Leica I.

The Leica II contained a rangefinder and a built-in viewfinder. In about 1932, the Leica III was released with a shutter speed of 1/100th of a second. These cameras were considered innovative and revolutionary for their time, and they remained popular until the 1950s, when more manufacturers entered the mass market.

The Movie Camera:

In 1882 the inventor Étienne-Jules Marey created the first cinematographic camera, called the chronophotographic gun, which by 1891 would help trigger the beginning of the movie industry. The device captured only 12 images a second; later movie cameras ran at around 20 to 40 images a second. The images were exposed on a single curved plate that captured all the frames in sequence. Today cameras can record up to thousands of frames per second.

First Single-Lens Reflex Camera (SLR):

In 1861 the inventor Thomas Sutton created a camera, based on the camera obscura, that used a reflex mirror to let users look through the camera lens and see the precise image being recorded on film without significant distortion.

The image recorded on the film plate was still slightly different from the image that could be viewed, which made the camera somewhat inconvenient. The technology was limited in the 19th century and would only become widespread in the 1970s and 1980s. Early SLR cameras were mainly used by professional photographers and special-interest groups because, at that time, the price was not as reasonable as that of cameras available for the mass market.
First Auto-Focus Camera:

This camera was invented in 1978, and its main benefit was a special type of lens assembly that could adjust for the distance between the subject and the device without the photographer having to focus manually.

It built on the principles of earlier rangefinders, which judged the distance between subject and camera. The camera contained lenses and small motors to automate the focusing process. At the time, autofocus was only available on high-end SLR cameras.

Colour Photography:

In 1861, Thomas Sutton helped create an image-capture method that combined red, green and blue to create any visible colour. Previously, cameras using monochrome plates could only produce images in black and white or in a single tone. In 1935 colour photography became widely available when the Kodak Company invented Kodachrome film. The new film used different emulsion layers on the same strip to record colour. It was considered very expensive at its first release and was mainly used by photography professionals. Today, the same three-colour system is still used to record colour.

The three-colour system would also become the basis for how Polaroid colour film operates. Earlier, in 1871, Richard Leach Maddox had created the gelatine dry plate, which made photographs with much shorter exposures possible.

The modern instant camera was not available until 1948, when Edwin Land and his company, the Polaroid Corporation, released an instant-exposure camera under the Polaroid name that would become iconic. Traditional cameras required the film to be developed later; this camera was unique because it could produce a finished photograph inside the device. The process printed the negative film and the positive film together: when the photograph was taken, the user had to peel the two pieces of film apart and throw away the negative part. Later versions of the camera did this automatically and ejected only the positive. The film was about three inches across with a square border. It was trendy in the 1970s and 1980s and has seen a cultural revival in recent years as part of retro culture.
The Digital Camera:

The original concept was theorized by scientists in 1961. However, it did not come to fruition until 1975, when a Kodak engineer named Steven Sasson created a device that produced a monochrome image recorded to cassette tape rather than film. This early device required a screen to view the images rather than printing them, and it had a resolution of only 0.01 megapixels. The first digital camera recorded an image that required 23 seconds of exposure. The technology used a sensor whose electrical signal changed when exposed to light.

Digital cameras became available to the mass market at an affordable price in the 1990s, when Logitech released a device called the Dycam Model 1. This device could record data onto internal memory and had 1 megabyte of RAM. Users had to connect the camera to a computer, at which point the images could be downloaded, viewed and enjoyed, or taken to a photo-processing shop for printing.

Digital Single Lens Reflex Camera (DSLR)

The First Camera Phone:

The first camera phone was released in 1999, when a camera was included with the Kyocera VP-210. At the time, this camera was less useful for daily applications than a standalone camera, and the feature was considered more of a gimmick than the tool it is today. It had a 110,000-pixel camera, and the images could only be viewed on a 2-inch colour screen.

The camera phone became a revolutionary tool once Apple created its series of camera phones. In 2007 founder Steve Jobs introduced the iPhone 1, or iPhone 2G, announced on January 9, 2007. The phone had a 3.5-inch screen and contained a 2-megapixel camera, and it came in two options: the 4 GB and the 8 GB models. Once this phone was released, phone photography was popularized, and its images became comparable to those of digital cameras. Apple is also credited with popularizing the sending and receiving of images over cellular networks, a feature that is commonly used today. Today, the iPhone 13 works like a full camera, with multiple lenses, a video camera and up to a 12-megapixel resolution.

Key People:

Joseph Nicéphore Niépce

Louis-Jacques-Mandé Daguerre

Henry Fox Talbot

George Eastman

Ernst Leitz

Étienne-Jules Marey

Thomas Sutton

Richard Leach Maddox

Edwin Land
Steven Sasson

Steve Jobs

Current Uses:

Photography:

Today the camera is widely available at many different price points and is used for photography by professionals and amateurs alike. Photography creates images for commercial or artistic pursuits and is often used in fashion, wedding, wildlife and landscape photography. Cameras can also be used to preserve memories and tell stories: every time a family photo is shared on Facebook, credit can be given to the original camera.

Videography:

Videography is capturing and creating moving images in the form of video. Often, the content is used for film, television and advertising; other uses include corporate videos, weddings and other private events. When a television show or a YouTube video is watched, it was most likely created with a video camera. Today, many videos are uploaded to YouTube as content, and this new influencer and social media marketing industry has grown significantly because of how accessible the modern camera has become.

Wildlife:
Wildlife photography is documenting animals in their natural habitat using a camera or a video camera, sometimes both. This is usually done without disturbing the animals' natural behaviour. The objective is to capture and document animals in movement or action. This type of observation helps scientists document animal behaviour and migration patterns, and the results sometimes lead to the documentation of species that are endangered or close to extinction. Wildlife photography can help raise awareness and create incentives to protect habitats and respond to man-made disasters.

The American photographer James Balog created a documentary called Chasing Ice, in which he used cameras to photograph glaciers. The documentary helped show the ecological and environmental effects of human impact and ecosystem interference over time.

Wildlife photography can show the changes taking place over a long period and create awareness
of events and natural disasters in inaccessible places.

Cameras in Space:

The Hubble Telescope contains cameras used alongside the telescope to help record and study the cosmos. The cameras contain a component called a charge-coupled device (CCD). They use a tiny microchip rather than film or photographic plates to capture photographs. The microchip is a sensitive detector of photons that uses the light collected by the telescope. The technology contains a large grid of individual light-sensing elements referred to as pixels, similar to those that define camera resolutions. These pixels convert light patterns into numbers: the highest numbers are the brightest parts of the scene, and zero represents darkness.
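As a toy illustration of that idea, here is a sketch of a CCD image as nothing more than a grid of numbers; the values are invented for the example, not real Hubble data.

```python
# Each pixel is just a number recording how much light hit it: larger values
# mean brighter spots and zero means darkness.
image = [
    [ 0,   0,  12,   0,  0],
    [ 0,  40, 180,  40,  0],
    [12, 180, 255, 180, 12],
    [ 0,  40, 180,  40,  0],
    [ 0,   0,  12,   0,  0],
]

brightest = max(value for row in image for value in row)
print(f"Brightest pixel value: {brightest}")  # 255 marks the centre of the star
```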

One of the primary cameras on the Hubble Telescope is called the Advanced Camera for Surveys (ACS). This camera can detect visible, ultraviolet and near-infrared light and was designed mainly for wide-field imagery. Both the telescope and the camera on the Hubble use a spectrograph to break light into its components like a prism, and the spectrograph and the camera work together to form images across a wide range of wavelengths.
Medical Cameras:

The endoscopic camera is a type of camera used for surgery. Similar to space cameras, it is sensitive to the visible and infrared spectra. The optical image is transferred from the body cavity being examined to the camera head, using a ring and a flexible scope attached to the camera head. This camera gives doctors the ability to perform minimally invasive surgeries: rather than opening the patient up, the doctor can make a small incision and illuminate the internal cavity that needs to be observed. The light emitted by the camera is transmitted into the human body through a transmission beam carried by an optic fibre.

Plans for the Future:

Infrared Camera:

Innovative modern cameras will be able not only to capture images in visible light across the full spectrum of colour but also, with the invention of modern space cameras, to capture images in the infrared spectrum. The infrared camera is designed to ignore visible light and seek out electromagnetic radiation in order to see temperatures and the effects they have on our celestial ecosystem. It uses sensors and thermal detectors to determine the level of infrared light, and the sensors convert infrared signals into electrical currents.

3D camera:

The 3D camera was invented by Chris Condon and his company Stereo Vision. The camera is based on stereoscopic imaging, enabling depth to be perceived in images in three dimensions. He originally invented 3D camera lenses, and with the support of his team he earned the patent for 3D motion picture lenses. Early 3D cameras created 3D movie experiences using a system that sends light through an enclosed box and a special lens to record an image on a light-sensitive medium.

The device works similarly to how eyes work; it can see images in multiple dimensions of
height, width and depth through multiple angles and perspectives.

This product is helpful to product designers because it allows them to design products in 3D and provide a better virtual sales experience. Customers will be able to view images of the product from all angles and dimensions to simulate an in-person sales experience.

It will also be helpful for real estate photos because it will make it easier to view homes virtually without visiting individual properties in person.

Name of Invention: Optic Fibres

Discovery Of the Technology:

Two French brothers known as the Chappe brothers created a device called the optical telegraph around the year 1790. The invention used lights installed on the high points of towers; operators passed messages back and forth using these lights. There would be many advancements over the following centuries before we arrived at modern optic fibre.

In 1854 the British physicist John Tyndall demonstrated to the Royal Society that a beam of light could travel through a curved stream of water, like a waterfall, and that the light signal could be bent along with the water. He set up a water tank with a tube that let water pour out of one side. As the water flowed out in a stream, he shone a light into it, and as the water fell in an arc, the light followed the water down. These were the first steps in understanding how light can be guided.

In 1880, the American inventor Alexander Graham Bell patented an optical telephone system called the photophone. The device worked by concentrating sunlight with a mirror; the user then talked into a mechanism that vibrated the mirror. At the receiving end, a detector picked up the vibrating beam of light and decoded it back into a voice, much as a telephone does with an electrical signal.

Although this idea would eventually return as optic fibre technology, which is how current phones and most modern communication work, conventional telephone technology was more realistic and usable at the time. The potential for light loss was high, and there were also problems with interference: a cloudy day, for example, could prevent the light from travelling back and forth.

In 1895 a French engineer, Henry Saint-René, designed an optical system made of glass rods for transferring light. The idea was initially intended as an early attempt at television, which proved unsuccessful.

In 1920 the British inventor John Logie Baird and the American Clarence W. Hansell obtained patents for using arrays of transparent rods to transmit images for television and for facsimile machines.
In the 1930s, the scientist and inventor Heinrich Lamm was the first to transmit an image through a bundle of optical fibres. His device was initially intended for looking at inaccessible parts of the body, but he found more success using the invention to carry images: Lamm assembled a bundle of optical fibres that used light to carry an image from one end to the other. This was an important step for optical fibres, and it can be argued that modern fibres operate on a similar principle. Lamm had several problems with his project. Firstly, the image produced was of poor quality. Secondly, since the device could not be used in a practical setting, it was judged too similar to previous inventions and was denied a patent. Thirdly, Lamm eventually had to abandon the project and flee to America because of the rise of the Nazi regime, so the project was never realized to its full potential.

In 1951, the Danish inventor and scientist Holger Møller created a device that would be important in the early development of fibre-optic imaging. The device worked with a clear, transparent material made from either glass or plastic and was designed to reduce signal interference, or what was known as "crosstalk", between different modes of the fibre. Ultimately he was denied a patent because the design was considered too similar to an earlier one by John Logie Baird.

In 1954, building on the early work of John Tyndall, the UK-based physicist Narinder Singh Kapany invented the first true fibre-optic cable. He is credited with coining the term fibre optics and contributed to the field's development through his teachings and books, including the one he published in the 1960s.
Charles Kuen Kao, working in the 1960s, is considered the father of optical communication and won a Nobel Prize in Physics in 2009 for his work. Through his research into the physical properties of glass, he proved that glass fibres were suitable as a conductor of information: provided the glass was purified and drawn into thin fibres, they could carry vast amounts of information over long distances with minimal signal loss. Dr. Kao's work laid the groundwork for high-speed data communication and is the basis for how optic fibres for broadcast radio and television work.

In the 1970s, the researchers Peter Schultz, Donald Keck and Robert Maurer created an optical fibre made of silica, a highly transparent material. It was the first step towards optical fibre communications, because the scientists had discovered that silica fibre could carry information over greater distances and in greater volume than traditional copper wire: their optical waveguide fibre could carry almost 65,000 times more information than its predecessor. The group received a patent for their new device and work.

In 1973 scientists at Bell Labs developed a way to mass-manufacture optical fibre at low cost, making it more accessible to ordinary consumers. This process remains in use today; the material is made from ultra-transparent glass. Once optic fibre became cheaper and more convenient, many major telephone companies in the 1970s and 1980s began moving from copper wires to optic fibres and changing their communication infrastructure.

In the 1980s, the telephone company Sprint was the first to operate a nationwide, 100 percent digital fibre-optic network.

In 1986, the researcher and professor David Payne, working at the University of Southampton, together with Emmanuel Desurvire at Bell Laboratories, invented a device called the erbium-doped fibre amplifier, which used laser amplification. The invention mattered because it reduced the cost of long-distance fibre systems and removed the need for optical-electrical repeaters; by 1988, transatlantic telephone cable operators in the United States had started using the technology.

By 1991, optical fibre systems began to include built-in optical amplifiers, and a new kind of fibre, photonic crystal fibre, was developed. These all-optical systems could carry 100 times more information than traditional copper cables with electronic amplifiers. Whereas earlier optic fibres guided light by total internal reflection, photonic crystal fibre guides light by diffraction from a periodic structure, allowing power to be carried more efficiently than in older cables.

In 1997 the Fiber Optic Link Around the Globe (FLAG) was introduced: the world's longest single-cable network, laying the foundation and framework for the next generation of internet communications and applications.

By 2000, vast networks of fibre-optic cable carried most of the world's communication traffic from the internet, television and telephones.
How does it work?

Fibre-optic cables are made of clear glass drawn into long, thin strands about the diameter of a human hair. The individual strands are arranged in bundles called fibre-optic cables, which can transmit light signals over long distances in very short times.

Light naturally spreads in all directions, but the optic fibre does not let it escape: it bounces off the walls of the glass repeatedly, a process called total internal reflection.

Total internal reflection is what makes possible the internet, television and telephone links we all know and enjoy today. As a beam of light travels down the clear glass core, it strikes the wall at a glancing angle, beyond the critical angle of about 42 degrees for ordinary glass, and is reflected back into the core instead of escaping. This process, total internal reflection, effectively keeps the light inside the pipe, bouncing along it as if between internal mirrors.
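To show where the "about 42 degrees" figure comes from, here is a minimal sketch of the critical-angle calculation, assuming typical refractive indices of about 1.5 for glass and 1.0 for air.

```python
import math

def critical_angle_deg(n_core, n_outside):
    """Critical angle (measured from the normal) for total internal reflection."""
    return math.degrees(math.asin(n_outside / n_core))

print(f"glass-to-air critical angle is about {critical_angle_deg(1.5, 1.0):.1f} degrees")
# Rays striking the wall at more than this angle from the perpendicular
# (i.e. at a glancing, shallow angle to the surface) are reflected back
# into the core instead of escaping.
```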
A fibre-optic system consists of a transmitter, an optical fibre and a receiver.

First, electrical data enters the fibre-optic system as input. The transmitter then accepts the electrical signals and converts them to optical light signals, sending them by modulating the output of a light source such as an LED or laser.

Inside an Optical Fiber:

A thin strand of glass, about the size of a human hair, acts as the transmission medium. The light travels through the fibre's core from one end to the other by total internal reflection, the light signal bouncing down the core through a series of reflections until it reaches the far end.

The receiver is the light-to-electrical converter at the end of the glass strands. The optical signals are received by photodiodes, which convert the optical light signal back into a digital electrical signal. The electrical data output can then be translated and processed by a router or network switch.
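As a toy illustration of that transmitter-fibre-receiver chain, here is a small simulation sketch; the function names and values are invented for the example and do not describe any real equipment.

```python
def transmit(bits):
    """Transmitter: turn electrical bits into optical pulses (1 = light on)."""
    return ["pulse" if b else "dark" for b in bits]

def receive(pulses):
    """Receiver: a photodiode-like stage turns detected pulses back into bits."""
    return [1 if p == "pulse" else 0 for p in pulses]

data_in = [1, 0, 1, 1, 0, 0, 1]
light_signal = transmit(data_in)   # travels down the fibre core
data_out = receive(light_signal)   # electrical output at the far end
assert data_out == data_in
print(data_out)
```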

The cable is made up of two separate parts:

Core: The centre of the cable, where the strands of glass sit and the light travels.

Cladding: This is the second part of the cable and is the outer layer of glass wrapped around the
core to prevent the light signals from escaping before reaching the final destination.
Two Types of Fiber Optic Cables:

Single Mode Fiber:

The simplest structure, with a small core around 5 to 10 microns in diameter. The light travels straight down the middle without bouncing off the edges, and the small core keeps the light from bouncing off the cladding even when the fibre is bent or curved. It is mainly used for long-distance data transfer because of its minimal signal loss and the lack of interference from adjacent modes; internet and telephone applications use it to send information over vast distances of more than 100 km. This type of fibre is excellent for long distances because there is minimal dispersion (the spreading out of light) and it experiences lower attenuation (loss of optical power).

Multimode Fiber:

Its core is roughly ten times larger than that of single-mode fibre. In this type of fibre, light beams travel through the core along various paths, which makes it well suited to sending data over short distances in applications such as linking or interconnecting computer networks. Multimode fibres can carry more than one frequency of light simultaneously. Because the signal loss is higher than in single-mode fibre, it is generally used for communication over short distances and for less bandwidth-intensive applications, most commonly in local area networks such as buildings, corporate networks or school campuses.
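
One rough way to see why multimode fibre is kept to short links is to estimate its modal dispersion: a ray bouncing along at the critical angle travels a longer path than one going straight down the axis, so a sharp pulse smears out as it travels. The sketch below uses the standard step-index approximation; the refractive indices are assumed for illustration, not datasheet values.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def modal_dispersion_ns_per_km(n_core: float, n_cladding: float) -> float:
    """Pulse spread per kilometre for a step-index multimode fibre:
    delta_t = (L * n_core / c) * (n_core / n_cladding - 1)."""
    length_m = 1_000.0
    delta_t_s = (length_m * n_core / C) * (n_core / n_cladding - 1.0)
    return delta_t_s * 1e9  # nanoseconds

# Assumed indices for illustration (not from a real datasheet).
spread = modal_dispersion_ns_per_km(1.48, 1.46)
print(f"Pulse spread: about {spread:.0f} ns per km")
# A spread of tens of nanoseconds per km limits how quickly pulses can be sent
# before they overlap, which is why multimode links stay short; single-mode
# fibre avoids this by allowing only one path down the core.
```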

Gastroscope:

This is a thicker type of fibre, used mainly in endoscope technology. Endoscopes are a medical tool used to check for illness inside the stomach by passing down the throat. One end of the instrument is inserted into the mouth and guided through the body until it reaches the area the doctor wants to observe; the other end holds a lamp and an eyepiece or screen. The lamp shines light down one part of the cable into the patient's stomach, the light reflects off the stomach lining into a lens at the tip of the device, and it then travels back up another part of the cable to the eyepiece, where the doctor can view the image.

There is also an industrial version of this instrument, called a fiberscope. It is used to examine machinery such as airplane engines, where it is impossible to enter physically and inspection must be minimally invasive.

Key People:

John Tyndall:

He was an Irish experimental physicist, born in County Carlow, Ireland, on August 2, 1820. He is notable for the first practical demonstration of light being guided through a stream of water by internal reflections, a device he called the light pipe. This early work became the basis for modern light technology and optic fibre communications many years later.

Narinder Singh Kapany:

Born in Punjab, India, in 1926, he was a UK-based physicist who coined the term fibre optics. He was among the first to build a fibre optic device, designing and manufacturing a glass wire cable for transporting light based on the early demonstrations by John Tyndall. He made many significant contributions to optic technology through his books and teachings, particularly the book he published in 1967, Optical Fibre Principles and Applications. Many supporters consider him the father of optical fibre technology, though he never produced a practical model and application.

Charles Kuen Kao:

He was born in Shanghai, China, on November 4, 1933, and later studied and worked in the UK, where he showed that fibres made of ultra-pure glass could transmit light over distances of many kilometres without total signal loss. Considered the father of optic fibre technology, he received the Nobel Prize in Physics in 2009, shared with two other scientists, for discovering how light can be transmitted through fibre optic cables. Kao's discoveries and inventions have had a significant impact on the way modern fibre optics operate; most of today's internet, television and telephone services rely on fibre optic technology that can be traced back to his work.

Current Uses:

Before fibre optic technology, copper cables were used. Fibre optics have the advantage because they are lighter, more flexible and can carry large amounts of data at very high speeds. They suffer less attenuation, so there is less signal loss, and information travels roughly ten times further before it needs electronic amplification. Optical cables are also easier and more cost-effective to deploy than traditional copper wires.

Because of fibre optic technology, we now have cloud computing.

Cloud computing means people store and process their data remotely. Because fibre internet speeds are roughly five to ten times faster than traditional DSL broadband networks, it is often easier for computer users to keep information and files online than on the computer itself, and optical fibre technology has made this remote storage, processing and access practical.

Faster connections have also made streaming movies and online gaming possible and have made the internet far more accessible than in previous years. The world is more connected virtually than it has ever been: information, pictures and videos of events around the world, including issues like war and famine, can be shared with the globe within seconds through the power of the internet.

Telephones also adopted fibre optic technology around 1980, ten years after the technology was born, as many telecommunications companies started replacing their old infrastructure with new optic fibre.

In previous years, television and radio networks broadcast from stations that used a single transmitter to send electromagnetic waves through the air to thousands of antennas. To obtain more channels, a user had to have coaxial cables installed. These cables were made of material similar to copper cables but had metal screening to prevent crosstalk interference; the problem was that obtaining more channels required more cables. Today most of these cables have been replaced with modern fibre optic cables; a single cable can carry enough data for several hundred TV channels at once, with less interference and better signal, picture and sound quality. Fibre optic cables can also cover long distances without needing as much amplification to boost the signal as traditional copper wires.

Optical fibres are also used in medical applications. For example, an endoscope uses this type of technology to help doctors peer inside a patient's body without cutting them open, in a procedure called endoscopy. To view the upper digestive tract, a flexible gastroscope tube is placed into the mouth and passed down along the esophagus and stomach until it reaches the small intestine. The other end of the gastroscope tube is connected to a light and a video camera whose images the doctor can view. It can be used to investigate symptoms of indigestion, nausea or difficulty swallowing.
Plans for the future:

Lab-in-a-fiber:

Lab-in-a-fiber is a new approach in which a hair-thin fibre optic cable with a built-in sensor is inserted into a patient's body. It has the same composition as a communication cable but is much thinner. A laser passes through the fibre and travels toward the part of the body the doctor wants to study. The patient's body alters properties of the light, such as its intensity or wavelength, and the doctor can measure how the light changes using interferometry techniques. This technique can measure things like body temperature, blood pressure levels, the pH of specific cells or whether a medicine is working in the bloodstream.
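
As a rough idea of how such a readout could work, the sketch below turns a measured wavelength shift into a temperature estimate using a simple linear model, similar in spirit to how fibre Bragg grating style sensors are often described. The baseline wavelength, calibration temperature and sensitivity are hypothetical values chosen only for illustration; they are not taken from any real lab-in-a-fiber device.

```python
# Hypothetical readout for a wavelength-shift fibre sensor (illustration only).
BASELINE_WAVELENGTH_NM = 1550.000   # assumed reflected wavelength at body temperature
BASELINE_TEMPERATURE_C = 37.0       # assumed calibration point
SENSITIVITY_NM_PER_C = 0.010        # assumed linear sensitivity (nm of shift per degree C)

def temperature_from_shift(measured_wavelength_nm: float) -> float:
    """Turn a measured wavelength into a temperature using a linear model:
    T = T0 + (lambda - lambda0) / sensitivity."""
    shift_nm = measured_wavelength_nm - BASELINE_WAVELENGTH_NM
    return BASELINE_TEMPERATURE_C + shift_nm / SENSITIVITY_NM_PER_C

print(temperature_from_shift(1550.025))  # -> 39.5 degrees C under these assumptions
```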

Pacific Light Cable Network:

This network, begun in 2015, is the first direct submarine cable using optic fibres built and owned by Google, Meta and Pacific Light Data Communications. The network was designed to connect and improve internet speeds in Hong Kong, Taiwan, the Philippines and the US. The cables span a distance of approximately 13,000 km, and the shortest route is between Hong Kong and Los Angeles.

San Francisco:

San Francisco is looking to build a city-wide network using fibre optic technology, designed to improve internet access for residents who do not have high-speed internet at home and who struggle with applying for jobs, obtaining an education and supporting their children's schooling. One in seven public schools in San Francisco needs computers with a high-speed internet connection. The city aims to spend around $1.5 to $1.9 billion to improve public services, and it believes that a strong internet connection can help industries like healthcare, education and energy, leading to job booms and local economic growth.

Facebook Simba:

The Meta company is looking to build an all-optical network using new technology that allows data transmission at high speeds without electrical processing. The project is named Simba, after the lion from The Lion King. It is to be built in Africa to strengthen links in markets where Facebook and WhatsApp are used daily, with the hope of driving down bandwidth costs. The cables will be placed around the shores of each nation involved and will link up at the beachheads of several countries. The goal is to build a dedicated and reliable link in regions with poor internet access; there are still 3.8 billion people worldwide without internet access.
