ASCII Table and Description
ASCII stands for American Standard Code for Information Interchange. Computers can only
understand numbers, so an ASCII code is the numerical representation of a character such as 'a'
or '@', or of an action of some sort. ASCII was developed a long time ago, and the non-printing
characters are now rarely used for their original purpose. Below is the ASCII character table,
which includes descriptions of the first 32 non-printing characters. ASCII was originally designed
for use with teletypes, so the descriptions are somewhat obscure. If someone says they want your
CV in ASCII format, however, all this means is that they want 'plain' text with no formatting such
as tabs, bold, or underscoring - the raw format that any computer can understand. This is usually
so they can easily import the file into their own applications without issues. Notepad.exe creates
ASCII text, and in MS Word you can save a file as 'text only'.
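Since a character is just a number to the machine, a few lines of C make the point directly (the
characters chosen here are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        char c = 'a';

        /* The same byte can be printed as a character or as a number. */
        printf("'%c' is stored as %d (hex 0x%X)\n", c, c, (unsigned)c);
        printf("'@' is stored as %d\n", '@');

        /* ASCII codes are ordered, so simple arithmetic works on them. */
        printf("'a' + 1 gives '%c'\n", c + 1);
        return 0;
    }

Running this prints 97, 64, and 'b': the letters really are just numbers with an agreed-upon
meaning.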
Components of a Processor
Control unit -- responsible for supervising the operation of the entire computer
system.
Arithmetic/logic unit (ALU) -- provides the computer with logical and computational
capabilities.
Register -- a storage location inside the processor.
A central processing unit (CPU), also referred to as a central processor unit,[1] is the hardware
within a computer that carries out the instructions of a computer program by performing the
basic arithmetical, logical, and input/output operations of the system. The term has been in use in
the computer industry at least since the early 1960s.[2] The form, design, and implementation of
CPUs have changed over the course of their history, but their fundamental operation remains
much the same.
A computer can have more than one CPU; this is called multiprocessing. Some integrated
circuits (ICs) can contain multiple CPUs on a single chip; those ICs are called multi-core
processors.
Two typical components of a CPU are the arithmetic logic unit (ALU), which performs
arithmetic and logical operations, and the control unit (CU), which extracts instructions from
memory and decodes and executes them, calling on the ALU when necessary.
Not all computational systems rely on a central processing unit. An array processor or vector
processor has multiple parallel computing elements, with no one unit considered the "center". In
the distributed computing model, problems are solved by a distributed interconnected set of
processors.
The abbreviation CPU is sometimes used incorrectly by people who are not computer specialists
to refer to the cased main part of a desktop computer containing the motherboard, processor, disk
drives, etc., i.e., not the display monitor or keyboard.
In computing, an arithmetic and logic unit (ALU) is a digital circuit that performs integer
arithmetic and logical operations. The ALU is a fundamental building block of the central
processing unit of a computer, and even the simplest microprocessors contain one for purposes
such as maintaining timers. The processors found inside modern CPUs and graphics processing
units (GPUs) accommodate very powerful and very complex ALUs; a single component may
contain a number of ALUs.
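As a rough sketch of the idea (the operation names and selector values here are invented for
illustration, not taken from any real CPU), an ALU can be pictured as a circuit that applies one
of several integer operations to two inputs, selected by a control signal:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical operation selectors -- real ALUs use control lines
       driving gates, not an enum, but the idea is the same. */
    enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR };

    static uint8_t alu(enum alu_op op, uint8_t a, uint8_t b)
    {
        switch (op) {
        case ALU_ADD: return a + b;   /* integer arithmetic            */
        case ALU_SUB: return a - b;
        case ALU_AND: return a & b;   /* logical (bitwise) operations  */
        case ALU_OR:  return a | b;
        }
        return 0;
    }

    int main(void)
    {
        printf("3 + 5   = %d\n", alu(ALU_ADD, 3, 5));
        printf("9 AND 5 = %d\n", alu(ALU_AND, 9, 5));
        return 0;
    }

A hardware ALU does this with gates rather than a switch statement, but the relationship between
inputs, operation select, and output is the same.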
Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a report
on the foundations for a new computer called the EDVAC. Research into ALUs remains an
important part of computer science, falling under Arithmetic and logic structures in the ACM
Computing Classification System.
In computer architecture, a processor register is a small amount of storage available as part of a
CPU or other digital processor. Such registers are (typically) addressed by mechanisms other
than main memory and can be accessed more quickly. Almost all computers, load-store
architecture or not, load data from a larger memory into registers where it is used for arithmetic,
manipulated, or tested, by some machine instruction. Manipulated data is then often stored back
in main memory, either by the same instruction or a subsequent one. Modern processors use
either static or dynamic RAM as main memory, the latter often being implicitly accessed via one
or more cache levels. A common property of computer programs is locality of reference: the
same values are often accessed repeatedly, and holding frequently used values in registers
improves performance. This is what makes fast registers (and caches) meaningful.
Processor registers are normally at the top of the memory hierarchy, and provide the fastest way
to access data. The term normally refers only to the group of registers that are directly encoded
as part of an instruction, as defined by the instruction set. However, modern high performance
CPUs often have duplicates of these "architectural registers" in order to improve performance via
register renaming, allowing parallel and speculative execution. Modern x86 is perhaps the most
well known example of this technique.[1]
Allocating frequently used variables to registers can be critical to a program's performance. This
register allocation is either performed by a compiler, in the code generation phase, or manually,
by an assembly language programmer.
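In C, the register keyword lets a programmer suggest such an allocation by hand, though modern
compilers usually make this decision themselves and treat the keyword as a hint at most. A
minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* 'register' asks the compiler to keep these variables in CPU
           registers rather than main memory; modern compilers normally
           decide this on their own, so it is only a hint. */
        register long long sum = 0;
        register int i;

        for (i = 0; i < 1000000; i++)
            sum += i;            /* 'sum' is reused on every pass:
                                    locality of reference at work */

        printf("sum = %lld\n", sum);
        return 0;
    }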
Let's look at some examples of basic Assembly Language commands,
and how binary bit patterns are used in contrasting codes:
Probably the most basic commands to consider are "LOAD" vs "STORE" (a short decoding
sketch follows the list):
o We could assume a binary code of 0011 1010 for a "LOAD" command, and
o We could assume a binary code of 0011 0010 for a "STORE" command (note the single
binary bit difference).
o The particular bit that went low would simply be a signal to a gate arrangement that we
are going to "WRITE" to memory, rather than "READ" from memory.
o It is the "0011 x010" that is intercepted and decoded as a basic memory access for
either of those two actions.
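A small C sketch of that decoding (using the opcode values assumed above; the function and
macro names are invented for illustration) might test the fixed "0011 x010" bits first and then
the single READ/WRITE bit:

    #include <stdint.h>
    #include <stdio.h>

    #define MEM_PATTERN 0x32   /* 0011 0010: the fixed "0011 x010" bits   */
    #define READ_BIT    0x08   /* the one bit that differs: LOAD vs STORE */

    static void decode_mem_op(uint8_t opcode)
    {
        /* Clear the 'x' bit and check the remaining fixed pattern. */
        if ((opcode & 0xF7) == MEM_PATTERN) {
            if (opcode & READ_BIT)
                printf("0x%02X: LOAD  (READ from memory)\n", opcode);
            else
                printf("0x%02X: STORE (WRITE to memory)\n", opcode);
        } else {
            printf("0x%02X: not a basic memory access\n", opcode);
        }
    }

    int main(void)
    {
        decode_mem_op(0x3A);   /* 0011 1010 -> LOAD  */
        decode_mem_op(0x32);   /* 0011 0010 -> STORE */
        return 0;
    }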
Now let's consider an "ADD" vs a "SUBTRACT" command (recall that adding a negative number
is subtraction; a sketch follows the list):
o We could assume a binary code of 1100 0110 for our "ADD" command, and
o We could assume a binary code of 1101 0110 for our "SUBTRACT" command.
o Note the one bit difference again, where that bit being high is decoded as adding a
"Negative" number.
If we want to increase or decrease a current value by only "1", then instead of adding or
subtracting, we could "INCREMENT" or "DECREMENT" instead.
o Let's assume a binary value of 0011 1100 for the "INCREMENT", and
o Let's assume a binary value of 0011 1101 for the "DECREMENT".
o Note the one bit that tells the Instruction Decoder that the number is to be
decreased rather than increased (a short sketch of this one-bit relationship follows).
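In C, the relationship between the two assumed codes is a single bit:

    #include <stdio.h>

    int main(void)
    {
        unsigned char inc = 0x3C;         /* 0011 1100: INCREMENT       */
        unsigned char dec = inc | 0x01;   /* set the lone differing bit */

        printf("INCREMENT = 0x%02X, DECREMENT = 0x%02X\n", inc, dec);
        printf("XOR of the two codes = 0x%02X (one bit)\n", inc ^ dec);
        return 0;
    }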
If we have a CPU that has a total of 8 Registers that are capable of receiving or handling data, we
would find that we could use only 3 binary bits to determine which register is being referenced.
As an example, let's use the "INCREMENT" command in reference to selecting one of those
Registers. Let's assume that 00xx x100 is the basic code for "INCREMENT", and that our Registers
are numbered 0-7 (a sketch for extracting the register number follows the list).
o 0000 0100 would be used to "INCREMENT" Register #0
o 0000 1100 would be used to "INCREMENT" Register #1
o 0001 0100 would be used to "INCREMENT" Register #2
o 0001 1100 would be used to "INCREMENT" Register #3
o 0010 0100 would be used to "INCREMENT" Register #4
o 0010 1100 would be used to "INCREMENT" Register #5
o 0011 0100 would be used to "INCREMENT" Register #6
o 0011 1100 would be used to "INCREMENT" Register #7 - Note the binary progression
that has occurred.
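Given that layout, the register number occupies bits 3-5 of the opcode, so shifting right by 3 and
masking with binary 111 recovers it; this sketch walks the eight codes from the list above:

    #include <stdio.h>

    int main(void)
    {
        /* For the assumed "00xx x100" encoding, the register number sits
           in bits 3-5, so (opcode >> 3) & 0x07 recovers it. */
        unsigned char opcodes[] = { 0x04, 0x0C, 0x14, 0x1C,
                                    0x24, 0x2C, 0x34, 0x3C };

        for (int i = 0; i < 8; i++)
            printf("opcode 0x%02X increments register #%d\n",
                   opcodes[i], (opcodes[i] >> 3) & 0x07);
        return 0;
    }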
To do a Register "DECREMENT" we can use a binary code almost identical to the one we used
for "INCREMENT", changing one single binary bit to indicate the difference, i.e. 00xx x101
(a combined decoder sketch follows the list):
o 0000 0101 would be used to "DECREMENT" Register #0
o 0000 1101 would be used to "DECREMENT" Register #1
o 0001 0101 would be used to "DECREMENT" Register #2
o 0001 1101 would be used to "DECREMENT" Register #3
o 0010 0101 would be used to "DECREMENT" Register #4
o 0010 1101 would be used to "DECREMENT" Register #5
o 0011 0101 would be used to "DECREMENT" Register #6
o 0011 1101 would be used to "DECREMENT" Register #7 - Note the binary progression
that has occurred.
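Putting the pieces together, a hypothetical decoder for the whole assumed "00xx x10d" family
can check the fixed bits, extract the register field, and use the last bit to choose between the
two actions:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the assumed "00xx x10d" family: bits 3-5 select one of the
       eight registers, and bit 0 selects INCREMENT (0) or DECREMENT (1). */
    static void decode_inc_dec(uint8_t opcode)
    {
        /* Fixed bits: 7 and 6 must be 0, bit 2 must be 1, bit 1 must be 0. */
        if ((opcode & 0xC6) != 0x04) {
            printf("0x%02X: not an INCREMENT/DECREMENT\n", opcode);
            return;
        }
        printf("0x%02X: %s register #%d\n", opcode,
               (opcode & 0x01) ? "DECREMENT" : "INCREMENT",
               (opcode >> 3) & 0x07);
    }

    int main(void)
    {
        decode_inc_dec(0x3C);   /* 0011 1100 -> INCREMENT register #7 */
        decode_inc_dec(0x05);   /* 0000 0101 -> DECREMENT register #0 */
        decode_inc_dec(0x15);   /* 0001 0101 -> DECREMENT register #2 */
        return 0;
    }

A real Instruction Decoder does this with gate arrangements rather than comparisons, but the
bit fields it examines are exactly the ones shown in the two lists above.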