From Abacus To Analytical Engine
Kevin Career
Computer Language
ASCII Character  Hexadecimal  Binary
NUL 00 00000000
SOH 01 00000001
STX 02 00000010
ETX 03 00000011
EOT 04 00000100
ENQ 05 00000101
ACK 06 00000110
BEL 07 00000111
BS 08 00001000
HT 09 00001001
LF 0A 00001010
VT 0B 00001011
FF 0C 00001100
CR 0D 00001101
SO 0E 00001110
SI 0F 00001111
DLE 10 00010000
DC1 11 00010001
DC2 12 00010010
DC3 13 00010011
DC4 14 00010100
NAK 15 00010101
SYN 16 00010110
ETB 17 00010111
CAN 18 00011000
EM 19 00011001
SUB 1A 00011010
ESC 1B 00011011
FS 1C 00011100
GS 1D 00011101
RS 1E 00011110
US 1F 00011111
Space 20 00100000
! 21 00100001
" 22 00100010
# 23 00100011
$ 24 00100100
% 25 00100101
& 26 00100110
' 27 00100111
( 28 00101000
) 29 00101001
* 2A 00101010
+ 2B 00101011
, 2C 00101100
- 2D 00101101
. 2E 00101110
/ 2F 00101111
0 30 00110000
1 31 00110001
2 32 00110010
3 33 00110011
4 34 00110100
5 35 00110101
6 36 00110110
7 37 00110111
8 38 00111000
9 39 00111001
: 3A 00111010
; 3B 00111011
< 3C 00111100
= 3D 00111101
> 3E 00111110
? 3F 00111111
@ 40 01000000
A 41 01000001
B 42 01000010
C 43 01000011
D 44 01000100
E 45 01000101
F 46 01000110
G 47 01000111
H 48 01001000
I 49 01001001
J 4A 01001010
K 4B 01001011
L 4C 01001100
M 4D 01001101
N 4E 01001110
O 4F 01001111
P 50 01010000
Q 51 01010001
R 52 01010010
S 53 01010011
T 54 01010100
U 55 01010101
V 56 01010110
W 57 01010111
X 58 01011000
Y 59 01011001
Z 5A 01011010
[ 5B 01011011
\ 5C 01011100
] 5D 01011101
^ 5E 01011110
_ 5F 01011111
` 60 01100000
a 61 01100001
b 62 01100010
c 63 01100011
d 64 01100100
e 65 01100101
f 66 01100110
g 67 01100111
h 68 01101000
i 69 01101001
j 6A 01101010
k 6B 01101011
l 6C 01101100
m 6D 01101101
n 6E 01101110
o 6F 01101111
p 70 01110000
q 71 01110001
r 72 01110010
s 73 01110011
t 74 01110100
u 75 01110101
v 76 01110110
w 77 01110111
x 78 01111000
y 79 01111001
z 7A 01111010
{ 7B 01111011
| 7C 01111100
} 7D 01111101
~ 7E 01111110
DEL 7F 01111111
Character Encoding
Remember from the earlier video that a byte is eight bits, and each bit can store only a zero or a one.
That means we can have 2^8, or 256, possible values.
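To double-check that arithmetic, eight bits with two possible values each give 2^8 patterns; a quick sketch in Python, used here only as a calculator:

```python
# Each of the 8 bits in a byte can be 0 or 1,
# so the number of distinct byte values is 2 ** 8.
print(2 ** 8)  # 256
```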
By the end of this video,
you'll learn how we can represent the words, numbers,
emojis and more we see on our screens,
from only these 256 possible values.
It's all thanks to character encoding.
Character encoding is used to assign
our binary values to characters so that we as humans can read them.
We definitely wouldn't want to see all the text in our emails and
Web pages rendered in complex sequences of zeros and ones.
This is where character encodings come in handy.
You can think of character encoding as a dictionary.
It's a way for your computers to look up
which human characters should be represented by a given binary value.
The oldest character encoding standard used is ASCII.
It represents the English alphabet,
digits, and punctuation marks.
The first character in the ASCII-to-binary table, a lowercase a,
maps to 0 1 1 0 0 0 0 1 in binary.
This is done for all the characters you can find in
the English alphabet as well as numbers and some special symbols.
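As a quick way to verify rows of the table above, Python's built-in ord() and format() can reproduce the character-to-binary mapping (a minimal sketch):

```python
# Reproduce rows of the ASCII table: character, hex code, 8-bit binary.
for ch in ["a", "A", "0"]:
    code = ord(ch)  # character -> ASCII code point
    print(ch, format(code, "02X"), format(code, "08b"))
```

This prints the same hex and binary columns shown in the table, for example a is 61 in hex and 01100001 in binary.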
The great thing about ASCII was that it only needed to use
128 values out of our possible 256.
It lasted for a very long time,
but eventually it wasn't enough.
Other character encoding standards were created to represent different languages,
different numbers of characters, and more.
Eventually, they would require more than the 256 values we were allowed to have.
Then came UTF-8,
the most prevalent encoding standard used today.
Along with using the same codes as the ASCII table,
it also lets us use a variable number of bytes.
What do I mean by that? Think of any emoji.
It's not possible to make an emoji with a single byte,
and a byte can only store one character.
Instead, UTF-8 allows us to store a character in more than one byte,
which means endless emoji fun.
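You can see the variable-byte behavior directly: encoding a string as UTF-8 in Python shows how many bytes each character needs. ASCII characters stay at one byte, while characters outside ASCII take more (an emoji typically takes four):

```python
# UTF-8 is variable-width: ASCII characters still fit in one byte,
# while characters outside ASCII need two, three, or four bytes.
for text in ["a", "é", "€"]:
    encoded = text.encode("utf-8")
    print(repr(text), len(encoded), "byte(s):", encoded.hex())
```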
UTF-8 is built off the Unicode Standard.
We won't go into much detail,
but the Unicode Standard helps us represent character encoding in a consistent manner.
Now that we've been able to represent letters, numbers,
punctuation marks and even emojis,
how do we represent color?
Well, there are all kinds of color models.
For now, let's stick to a basic one that's used in a lot of computers.
RGB, or the red, green, and blue model.
Just like the actual colors,
if you mix a combination of any of these,
you'll be able to get the full range of colors.
In computerland, we use three bytes for the RGB model.
Each byte represents a shade of one of the colors,
and together they set the color of the pixel you see on your screen.
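As a sketch of the idea, a pixel's color can be written as three channel values, each a byte from 0 to 255; the to_hex helper below is just an illustration, not a standard library function:

```python
# One byte (0-255) per channel: red, green, blue.
def to_hex(rgb):
    """Pack the three channel bytes into the familiar #RRGGBB notation."""
    return "#{:02X}{:02X}{:02X}".format(*rgb)

red = (255, 0, 0)      # full red, no green, no blue
yellow = (255, 255, 0) # mixing full red and full green gives yellow
print(to_hex(red), to_hex(yellow))  # #FF0000 #FFFF00
```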
With just combinations of eight zeros and ones,
we're able to represent everything that you see on your computer,
from a simple letter a,
to the very video that you're watching right now on the Coursera website.
Very cool. In the next video,
we'll discuss how we actually generate the zeros and ones.
Binary
You might be wondering how our computers get these ones and zeros.
It's a great question. Imagine we have a light bulb and
a switch that turns the state of the light on or off.
If we turn the light on,
we can denote that state as one.
If the light bulb is off,
we can represent that state as zero.
Now imagine eight light bulbs and switches,
that represents eight bits with a state of zero or one.
Let's backtrack to the punched cards that were used in Jacquard's loom.
Remember that the loom used cards with holes in them.
When the loom reached a hole, it would hook the thread underneath,
meaning that the loom was on.
If there wasn't a hole,
it would not hook the thread, so it was off.
This is a foundational binary concept.
By utilizing the two states of on or off,
Jacquard was able to weave intricate patterns in fabric with his looms.
Then the industry started refining the punch cards a little more.
If there was a hole,
the computer would read one.
If there wasn't a hole, it would read zero.
Then, by just translating the combination of zeros and ones,
our computer could calculate any possible amount of numbers.
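Sketching that translation in Python, a row of holes and non-holes is just a binary string that int() can turn into a number:

```python
# hole = 1, no hole = 0: read a row of a punched card as a binary number.
row = "01000001"          # eight positions on the card
value = int(row, 2)       # interpret the pattern as base-2
print(value, chr(value))  # 65 A  (the ASCII table maps 65 to 'A')
```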
Binary in today's computers isn't done by reading holes.
It uses electricity via transistors, allowing electrical signals to pass through.
If there's an electric voltage,
we denote it as one.
If there isn't, we denote it as zero.
But just having transistors isn't enough for our computer to be able to do complex tasks.
Imagine if you had two light switches on opposite ends of a room,
each controlling a light in the room.
What if when you went to turn on the light with one switch,
the other switch wouldn't turn off?
That would be a very poorly designed room.
Both switches should either turn the light on or off depending on the state of the light.
Fortunately, we have something known as logic gates.
Logic gates allow our transistors to do more complex tasks,
like decide where to send electrical signals depending on logical conditions.
There are lots of different types of logic gates,
but we won't discuss them in detail here.
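As a tiny sketch of what gates do, here are three common ones modeled as Python functions over inputs 0 (no voltage) and 1 (voltage):

```python
# Basic logic gates as functions of two one-bit inputs.
def AND(a, b):
    return a & b  # 1 only when both inputs are 1

def OR(a, b):
    return a | b  # 1 when at least one input is 1

def XOR(a, b):
    return a ^ b  # 1 when the inputs differ

# Print the full truth table for each gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```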
If you're curious about the role that
transistors and logic gates play in modern circuitry,
you can read more about it in the supplementary reading.
Now we know how our computer gets its ones and
zeros to calculate into meaningful instructions.
Later in this course, we'll be able to talk about how we're able to turn
human-readable instructions into zeros and
ones that our computer understands, through a compiler.
That's one of the very basic building blocks of
programming that's led to the creation of our favorite social media sites,
video games, and just about everything else.
And I'm super excited to teach you how to count in binary, that's up next.
Abstraction
When we interact with our computers we use our mouse, keyboard or
even a touch screen.
We don't tell it the actual zeros and ones it needs to understand something.
But wait, we actually do.
We just don't ever have to worry about it.
We use the concept of abstraction to take a relatively complex system and
simplify it for our use.
You use abstraction every day in the real world, and you may not even know it.
If you've ever driven a car,
you don't need to know how to operate the transmission or the engine directly.
There's a steering wheel, some pedals, maybe a gear stick.
If you buy a car from a different manufacturer,
you operate it in pretty much the same way
even though the stuff under the hood might be completely different.
This is the essence of abstraction.
Abstraction hides complexity by providing a common interface, the steering wheel,
pedals, gear stick, and gauges in our car example.
The same thing happens in our computer.
We don't need to know how it works underneath the hood.
We have a mouse and a keyboard we can use to interact with it.
Thanks to abstractions,
the average computer user doesn't have to worry about the technical details.
We'll use this under-the-hood metaphor throughout the program to describe
the area that contains the underlying implementation of the technology.
In computing, we use abstraction to make a very complex problem,
like how to make computers work, easier to think about.
We do that by breaking it apart into simpler ideas that describe single
concepts or individual jobs that need to be done, and then stack them in layers.
This concept of abstraction will be used throughout this entire course.
It's a fundamental concept in the computing world.
One other simple example of abstraction in an IT role that you might see a lot
is an error message.
We don't have to dig through someone else's code and find a bug.
This has been abstracted out for us already in the form of an error message.
A simple error message like "file not found" actually tells us a lot of information and
saves us time when figuring out a solution.
Can you imagine if instead of abstracting an error message
our computer did nothing and we had no clue where to start looking for answers?
Abstraction helps us in many ways that we don't even realize.
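As an illustration (the filename below is made up), a "file not found" error in Python condenses everything the operating system did into one readable message:

```python
# Trying to open a file that doesn't exist produces a concise,
# human-readable error instead of raw failure codes.
try:
    open("no_such_file.txt")
except FileNotFoundError as err:
    print(err)  # e.g. [Errno 2] No such file or directory: 'no_such_file.txt'
```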
Module Introduction