
CS101 SHORT NOTES BY TUS


TOPIC 109:

1. DEBUGGING: In the context of software engineering, debugging is the process of fixing a bug in the software. In other words, it refers to identifying, analyzing, and removing errors.

2. MACHINE LANGUAGE: Sometimes referred to as machine code or object code, machine language is a collection of binary digits or bits that the computer reads and interprets. Machine language is the only language a computer is capable of understanding.

3. PROGRAMMING LANGUAGES: A programming language is a formal language comprising a set of strings that produce various kinds of machine code output. Programming languages are one kind of computer language, and are used in computer programming to implement algorithms. Most programming languages consist of instructions for computers.

4. IDENTIFIERS: Descriptive names are often called program variables or identifiers. An identifier is a sequence of characters in the code that identifies a variable, function, or property.

TOPIC 110:

1. ASSEMBLERS: An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.

2. ASSEMBLY LANGUAGE: Assembly language is a low-level programming language. It bridges the gap between a high-level programming language and machine code: an assembler converts the assembly code into executable machine code. Assembly language expresses the instructions that are passed on to the machine for further processing. It depends mainly on the architecture of the system, whether that is the operating system or the computer architecture.

3. ADVANTAGES & DISADVANTAGES OF ASSEMBLY LANGUAGE:

Advantages

1. It allows complex jobs to run in a simpler way.

2. It is memory efficient, as it requires less memory.

3. It is faster, as its execution time is less.

4. It is mainly hardware-oriented.

5. It requires fewer instructions to get a result.

6. It is used for critical jobs.

7. Unlike machine language, it does not require the programmer to keep track of memory locations.

8. It is well suited to low-level embedded systems.

Disadvantages

1. It takes a lot of time and effort to write the code.

2. It is very complex and difficult to understand.

3. The syntax is difficult to remember.

4. Programs are not portable between different computer architectures.

5. Long programs written in assembly language need more of the computer's memory to run.

4. MACHINE INDEPENDENT LANGUAGE: A machine independent language is one that can run on any machine. Java is an example: because of the Java Virtual Machine (JVM), the compiled code of any given Java app can run on whatever machine you are attempting to run it on.

5. MACHINE DEPENDENT LANGUAGE: Machine dependent means the program can only work on the type of computer it was designed for, while machine independent means the program can work on any computer system. Machine language is a first-generation language written using 1s and 0s.

6. TRANSLATORS: The most general term for a software code converting tool is “translator.” A translator, in software programming terms, is a generic term that could refer to a compiler, assembler, or interpreter; anything that converts higher-level code into another high-level code (e.g., Basic, C++, Fortran, Java) or into lower-level code (i.e., a language that the processor can understand), such as assembly language or machine code.
7. INTERPRETER: An interpreter is a computer program which converts each high-level program statement into machine code. This includes source code, pre-compiled code, and scripts. Both compilers and interpreters do the same job, which is converting a higher-level programming language to machine code. However, a compiler converts the code into machine code (creating an executable) before the program runs, whereas an interpreter converts code into machine code while the program is running.

8. COMPILERS: A compiler is a computer program that transforms code written in a high-level programming language into machine code. It is a program which translates human-readable code into a language a computer processor understands (binary 1 and 0 bits). The computer processes the machine code to perform the corresponding tasks.

9. NATURAL VS FORMAL LANGUAGES: Natural languages (such as English, German, and Latin) are distinguished from formal languages (such as programming languages).

Interpreter vs Compiler:

• Interpreter: translates the program one statement at a time. Compiler: scans the entire program and translates it as a whole into machine code.

• Interpreter: usually takes less time to analyze the source code; however, the overall execution time is comparatively slower. Compiler: usually takes a large amount of time to analyze the source code; however, the overall execution time is comparatively faster.

• Interpreter: no object code is generated, hence it is memory efficient. Compiler: generates object code which further requires linking, hence requires more memory.

• Interpreter: used by programming languages like JavaScript, Python, and Ruby. Compiler: used by programming languages like C, C++, and Java.

TOPIC 112:

1. PROGRAMMING PARADIGM: A programming paradigm is a style, or “way,” of programming. Some languages make it easy to write in some paradigms but not others.

2. IMPERATIVE PARADIGM: Imperative programming (from Latin imperare = command) is the oldest programming paradigm. A program based on this paradigm is made up of a clearly defined sequence of instructions to a computer.
Therefore, the source code for imperative languages is a series of commands,
which specify what the computer has to do – and when – in order to achieve a
desired result. Values used in variables are changed at program runtime. To control
the commands, control structures such as loops or branches are integrated into
the code.

TOPIC 113:
1. DECLARATIVE PARADIGM: Declarative programming is a
programming paradigm in which the programmer defines what needs to be
accomplished by the program without defining how it needs to be
implemented. In other words, the approach focuses on what needs to be
achieved instead of instructing how to achieve it. It is different from an
imperative program which has the command set to resolve a certain set of
problems by describing the steps required to find the solution. Declarative
programming describes a particular class of problems with language
implementation taking care of finding the solution. The declarative
programming approach helps in simplifying the programming behind some
parallel processing applications.

2. IMPERATIVE VS DECLARATIVE: Declarative programming is a programming paradigm … that expresses the logic of a computation without describing its control flow. Imperative programming is a programming paradigm that uses statements that change a program's state.
Imperative Programming vs Declarative Programming:

• Imperative: programs specify how it is to be done. Declarative: programs specify what is to be done.

• Imperative: simply describes the control flow of computation. Declarative: simply expresses the logic of computation.

• Imperative: its main goal is to describe how to get or accomplish the result. Declarative: its main goal is to describe the desired result without directly dictating how to get it.

• Imperative: its advantages include being easy to learn and read, with a notional model that is simple to understand. Declarative: its advantages include effective, reusable code, easy extension, and a high level of abstraction.

• Imperative: its types include procedural programming, object-oriented programming, and the parallel processing approach. Declarative: its types include logic programming and functional programming.

• Imperative: the user is allowed to make decisions and give commands to the compiler. Declarative: the compiler is allowed to make decisions.

• Imperative: has many side effects and includes mutable variables, as compared to declarative programming. Declarative: has no side effects and does not include any mutable variables, as compared to imperative programming.

• Imperative: gives full control to developers, which is very important in low-level programming. Declarative: may automate repetitive flow along with simplifying code structure.

TOPIC 114:
1. FUNCTIONAL PARADIGM: Functional programming (often abbreviated FP) is the process of building software by composing pure functions, avoiding shared state, mutable data, and side effects. Functional programming is declarative rather than imperative, and application state flows through pure functions. Functional programming means using functions to the best effect for creating clean and maintainable software. More specifically, functional programming is a set of approaches to coding, usually described as a programming paradigm.
TOPIC 115:
1. Object Oriented programming (OOP) is a programming paradigm
that relies on the concept of classes and objects. It is used to structure
a software program into simple, reusable pieces of code blueprints
(usually called classes), which are used to create individual instances
of objects. There are many object-oriented programming languages
including JavaScript, C++, Java, and Python.
2. A class is an abstract blueprint used to create more specific, concrete
objects.
3. Classes can also contain functions, called methods, which are available only to objects of that type. These functions are defined within the class and perform some action helpful to that specific type of object.
TOPIC 116:
1. Variables are used to store information to be referenced and manipulated
in a computer program. They also provide a way of labeling data with a
descriptive name, so our programs can be understood more clearly by the
reader and ourselves. It is helpful to think of variables as containers that
hold information. Their sole purpose is to label and store data in memory.
This data can then be used throughout your program.

2. A data type is a classification of data which tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support various types of data, including integer, real, character or string, and Boolean.
3. The type Boolean refers to data items that can take on only the values true or
false. Operations on data of type Boolean include inquiries as to whether the
current value is true or false.

4. The data types that are included as primitives in a programming language, such as int for integer and char for character, are called primitive data types.
TOPIC 117:
1. Data Structure can be defined as the group of data elements which provides
an efficient way of storing and organizing data in the computer so that it can
be used efficiently.

2. An array is a data structure that contains a group of elements. Typically these elements are all of the same data type, such as an integer or string. Arrays are commonly used in computer programs to organize data so that a related set of values can be easily sorted or searched.
TOPIC 118:

• The most basic imperative statement is the assignment statement, which requests that a value be assigned to a variable (or more precisely, stored in the memory area identified by the variable). An assignment statement gives a value to a variable. For example,

x = 5;

gives x the value 5.

TOPIC 119:

A control statement alters the execution sequence of the program. A control statement is a statement that determines whether other statements will be executed.

• An if statement decides whether to execute another statement, or decides which of two statements to execute.

• A loop decides how many times to execute another statement.

• IF STATEMENT: An if statement is a programming conditional statement that, if proved true, performs a function or displays information.

TOPIC 120:

float CGPA = 3.5;
if (CGPA >= 3.0)
    cout << "Give Scholarship";
else
    cout << "Sorry you do not qualify for the scholarship";

HERE:
float = data type
CGPA = variable
= is the assignment operator
3.5 = value
(CGPA >= 3.0) = condition

TOPIC 121:
There is another type of control structure known as loop. The loop control structure
iterates a set of instructions based on the provided condition.
LOOP: a loop is a programming structure that repeats a sequence of instructions
until a specific condition is met.

TOPIC 122:
Simultaneous execution of multiple activations is called parallel processing or concurrent processing. True parallel processing requires multiple CPU cores, one to execute each activation.

TOPIC 123: Arithmetic Operators Examples

+    Addition
-    Subtraction
*    Multiplication
/    Division
%    Modulus

+, -, and * work the same as in mathematics. However, the “/” has a difference. If one of the operands is a decimal number, then it behaves as in mathematics; for example:
5.0/2.0 results in 2.5.
However, when both operands are integers, the decimal part is truncated, and
5/2 results in 2.
The remaining “1” can be acquired by using the modulus operator (%):
5%2 gives 1.

TOPIC 124: Relational Operators Examples

<     Less than
<=    Less than or equal to
>     Greater than
>=    Greater than or equal to
==    Equal to
!=    Not Equal to
C++ relational operators are used to compare the values of two variables; in the examples here they are used inside an if statement. If the result of comparing the two variables is true, the comparison evaluates to the value 1, and if the result is false, it evaluates to the value 0.

TOPIC 125: Logical Operators Examples

Operator   Name of the Operator   Type
&&         AND Operator           Binary
||         OR Operator            Binary
!          NOT Operator           Unary

Operator   Output
AND        Output is 1 only when the conditions on both sides of the operator are true.
OR         Output is 0 only when the conditions on both sides of the operator are false.
NOT        It gives inverted output.


TOPIC 126:
• Software engineering is defined as a process of analyzing user requirements and then designing, building, and testing a software application which will satisfy those requirements. Software engineering is the detailed application of engineering to the design, development, and maintenance of software. Software engineering was introduced to address the issues of low-quality software projects. Problems arise when software exceeds timelines and budgets, or is delivered with a reduced level of quality. Software engineering ensures that the application is built consistently, correctly, on time, on budget, and within requirements. The demand for software engineering also emerged to cater to the immense rate of change in user requirements and in the environment on which the application is supposed to be working.
• Computer aided software engineering (CASE) is the implementation of
computer facilitated tools and methods in software development. CASE is
used to ensure a high-quality and defect-free software. CASE ensures a
check-pointed and disciplined approach and helps designers, developers,
testers, managers and others to see the project milestones during development.
• An IDE, or Integrated Development Environment, enables programmers
to consolidate the different aspects of writing a computer program. IDEs
increase programmer productivity by combining common activities of
writing software into a single application: editing source code, building
executables, and debugging.
TOPIC 127:
The Software Development Life Cycle (SDLC) refers to a methodology with clearly defined processes for creating high-quality software. In detail, the SDLC methodology focuses on the following phases of software development:

• Requirement analysis
• Planning
• Software design such as architectural design
• Software development
• Testing
• Deployment.

SDLC is a systematic process for building software that ensures the quality and correctness of the software built. The SDLC process aims to produce high-quality software that meets customer expectations. The system development should be completed within the pre-defined time frame and cost. SDLC consists of a detailed plan which explains how to plan, build, and maintain specific software. Every phase of the SDLC life cycle has its own process and deliverables that feed into the next phase. SDLC stands for Software Development Life Cycle and is also referred to as the Application Development Life Cycle.

TOPIC 128:

• Requirement Analysis, also known as Requirement Engineering, is the process of defining user expectations for a new software being built or modified. In software engineering, it is sometimes referred to loosely by names such as requirements gathering or requirements capturing. Requirements analysis encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and analyzing, documenting, validating, and managing software or system requirements.
• The purpose of the Requirements Analysis Phase is to transform the needs
and high-level requirements specified in earlier phases into unambiguous
(measurable and testable), traceable, complete, consistent, and stakeholder-
approved requirements

• “Off the shelf” means the shelf of products in any store, accessible to anyone who walks into the store. Therefore, Commercial Off-the-Shelf Software (COTS) is software that is commercially produced and sold in a retail store or online, ready to use without any form of modification by the user, and accessible to everyone.

TOPIC 129:

• In the design phase, one or more designs are developed, with which the
project result can apparently be achieved. Depending on the subject of
the project, the products of the design phase can include dioramas,
sketches, flow charts, site trees, HTML screen designs, prototypes, photo
impressions and UML schemas. The project supervisors use these
designs to choose the definitive design that will be produced in the
project.
• In the design phase the architecture is established. This phase starts
with the requirement document delivered by the requirement phase and
maps the requirements into an architecture. The architecture defines the
components, their interfaces and behaviors.

TOPIC 130:

• The implementation phase involves putting the project plan into action. It’s here that the project manager will coordinate and direct project resources to meet the objectives of the project plan.
• The implementation phase is where you and your project team actually do
the project work to produce the deliverables
• It is during this phase that the project becomes visible to outsiders, to whom
it may appear that the project has just begun. The implementation phase is
the doing phase, and it is important to maintain the momentum.

TOPIC 131:

• In a software development team, a software analyst is the person who studies the software application domain and prepares the software requirements and specification (Software Requirements Specification) documents. The software analyst is the seam between the software users and the software developers.
• The testing phase of the software development lifecycle (SDLC) is where
you focus on investigation and discovery. During the testing phase,
developers find out whether their code and programming work according to
customer requirements. And while it's not possible to solve all the failures
you might find during the testing phase, it is possible to use the results from
this phase to reduce the number of errors within the software program.
• The testing phases of the software development lifecycle help companies to
identify all the bugs and errors in the software before the implementation
phase begins. If software bugs are not resolved before deployment, they can
adversely affect the client’s business.
TOPIC 132:
• Waterfall model in SDLC: The waterfall is a widely accepted SDLC model. In this approach, the whole process of software development is divided into various phases of the SDLC. In this SDLC model, the outcome of one phase acts as the input for the next phase. This SDLC model is documentation-intensive, with earlier phases documenting what needs to be performed in the subsequent phases.
The waterfall model is a continuous software development model in which
development is seen as flowing steadily downwards (like a waterfall) through the
steps of requirements analysis, design, implementation, testing (validation),
integration, and maintenance.
• Incremental Model in SDLC: The incremental model is not a separate model. It is essentially a series of waterfall cycles. The requirements are divided into groups at the start of the project. For each group, the SDLC model is followed to develop software. The SDLC life cycle process is repeated, with each release adding more functionality until all requirements are met. In this method, every cycle acts as the maintenance phase for the previous software release. A modification to the incremental model allows development cycles to overlap; that is, a subsequent cycle may begin before the previous cycle is complete.
The development process based on the Incremental model is split into several
iterations (“Lego-style” modular software design is required!). New software
modules are added in each iteration with no or little change in earlier added
modules. The development process can go either sequentially or in parallel.
Parallel development adds to the speed of delivery, while many repeated cycles of
sequential development can make the project long and costly.
• With Iterative development software changes on each iteration, evolves
and grows. As each iteration builds on the previous one, software design
remains consistent.

• Whereas the incremental model carries the notion of extending each preliminary version of a product into a larger version, the iterative model encompasses the concept of refining each version. In reality, the incremental model involves an underlying iterative process, and the iterative model may incrementally add features.
TOPIC 133:
• Prototyping is an experimental process where design teams implement
ideas into tangible forms from paper to digital. Teams build prototypes of
varying degrees of fidelity to capture design concepts and test on users. With
prototypes, you can refine and validate your designs so your brand can
release the right products.
• The most basic definition of “prototype” is, “A simulation or sample version of a final product, which is used for testing prior to launch.” The goal of a prototype is to test products (and ideas) and to share them with stakeholders before sinking lots of time and money into the final product.
• Evolutionary prototyping is a software development method where the
developer or development team first constructs a prototype. After
receiving initial feedback from the customer, subsequent prototypes are
produced, each with additional functionality or improvements, until the final
product emerges.
• Throwaway or rapid prototyping refers to the creation of a model that
will eventually be discarded rather than becoming part of the final
delivered software. ... When this goal has been achieved, the prototype
model is 'thrown away', and the system is formally developed based on the
identified requirements.
• The term open source refers to something people can modify and share
because its design is publicly accessible
• Open source software is software with source code that anyone can inspect,
modify, and enhance.
• Open source is a term that originally referred to open source software (OSS).
Open source software is code that is designed to be publicly accessible—
anyone can see, modify, and distribute the code as they see fit.

TOPIC 134 & 135:


• Modularization: Modularization is the process of dividing a software system into multiple independent modules where each module works independently. There are many advantages of modularization in software engineering. Some of these are given below:

• The system is easy to understand.
• System maintenance is easy.
• A module can be reused as many times as required; there is no need to write it again and again.
• Coupling: Coupling is the measure of the degree of interdependence
between the modules. A good software will have low coupling.

In general terms, coupling is defined as a thing that joins two objects together. In software development, the term coupling refers to the connection between two modules, i.e., how tightly the two modules interact with each other.

Hence, the term coupling is defined as follows: “The measure of the degree of the interdependency of two modules on each other is known as coupling.”

It should be noted that a module that has high cohesion and low coupling is
functionally independent.

• Cohesion: Cohesion is a measure of the degree to which the elements of the module are functionally related. It is the degree to which all elements directed towards performing a single task are contained in the component. Basically, cohesion is the internal glue that keeps the module together. A good software design will have high cohesion.
A good software design implies clean decomposition of the problem into modules
and the neat arrangement of these modules in a hierarchy. The primary
characteristics of neat module decomposition are low coupling and high cohesion.
Cohesion is a measure of functional strength of a module. A module having low
coupling and high cohesion is said to be functionally independent of other
modules. Functional independence means that a cohesive module performs a
single function or task. A functionally independent module has very little
interaction with other modules.

TOPIC 141 & 142:


Software Testing can be majorly classified into two categories:

1. Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester.

2. White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
Black Box Testing vs White Box Testing:

• Black box: a way of software testing in which the internal structure, program, or code is hidden and nothing is known about it. White box: a way of testing in which the tester has knowledge about the internal structure, code, or program of the software.

• Black box: mostly done by software testers. White box: mostly done by software developers.

• Black box: no knowledge of implementation is needed. White box: knowledge of implementation is required.

• Black box: can be referred to as outer or external software testing. White box: the inner or internal software testing.

• Black box: a functional test of the software. White box: a structural test of the software.

• Black box: can be initiated on the basis of the requirement specification document. White box: started after the detailed design document.

• Black box: no knowledge of programming is required. White box: knowledge of programming is mandatory.

• Black box: the behavior testing of the software. White box: the logic testing of the software.

• Black box: applicable to the higher levels of software testing. White box: generally applicable to the lower levels of software testing.

• Black box: also called closed testing. White box: also called clear box testing.

• Black box: least time consuming. White box: most time consuming.

• Black box: not suitable or preferred for algorithm testing. White box: suitable for algorithm testing.

• Black box: can be done by trial-and-error ways and methods. White box: data domains along with inner or internal boundaries can be better tested.

• Black box example: search something on Google by using keywords. White box example: checking and verifying loops by input.
Types of Black Box Testing:
• A. Functional Testing
• B. Non-functional Testing
• C. Regression Testing

Types of White Box Testing:
• A. Path Testing
• B. Loop Testing
• C. Condition Testing
Parameter-wise comparison of Black Box and White Box testing:

• Definition: Black box is a testing approach used to test the software without knowledge of the internal structure of the program or application. White box is a testing approach in which the internal structure is known to the tester.

• Alias: Black box testing is also known as data-driven, box testing, data-based, and functional testing. White box testing is also called structural testing, clear box testing, code-based testing, or glass box testing.

• Base of testing: In black box testing, testing is based on external expectations; the internal behavior of the application is unknown. In white box testing, the internal working is known, and the tester can test accordingly.

• Usage: Black box testing is ideal for higher levels of testing like System Testing and Acceptance Testing. White box testing is best suited for a lower level of testing like Unit Testing and Integration Testing.

• Programming knowledge: Not needed to perform black box testing; required to perform white box testing.

• Implementation knowledge: Not required for black box testing; a complete understanding is needed to perform white box testing.

• Automation: In black box testing, the tester and programmer are dependent on each other, so it is tough to automate. White box testing is easy to automate.

• Objective: The main objective of black box testing is to check the functionality of the system under test. The main objective of white box testing is to check the quality of the code.

• Basis for test cases: Black box testing can start after preparing the requirement specification document. White box testing can start after preparing the detailed design document.

• Tested by: Black box testing is performed by the end user, developer, and tester. White box testing is usually done by the tester and developers.

• Granularity: Granularity is low for black box testing and high for white box testing.

• Testing method: Black box testing is based on the trial-and-error method. In white box testing, the data domain and internal boundaries can be tested.

• Time: Black box testing is less exhaustive and less time-consuming. White box testing is an exhaustive and time-consuming method.

• Algorithm test: Black box testing is not the best method for algorithm testing. White box testing is best suited for algorithm testing.

• Code access: Code access is not required for black box testing. White box testing requires code access; thereby, the code could be stolen if testing is outsourced.

• Benefit: Black box testing is well suited and efficient for large code segments. White box testing allows removing the extra lines of code which can bring in hidden defects.

• Skill level: Low-skilled testers can perform black box testing with no knowledge of the implementation, programming language, or operating system. White box testing needs an expert tester with vast experience.

• Techniques: Equivalence partitioning and boundary value analysis are black box testing techniques. Equivalence partitioning divides input values into valid and invalid partitions and selects corresponding values from each partition as test data; boundary value analysis checks the boundaries of input values. Statement coverage, branch coverage, and path coverage are white box testing techniques. Statement coverage validates whether every line of the code is executed at least once; branch coverage checks whether each branch is executed at least once; the path coverage method tests all the paths of the program.

• Drawbacks: In black box testing, updates to the automation test script are essential if you modify the application frequently. In white box testing, automated test cases can become useless if the code base is rapidly changing.
Steps in Quality Control
The process of quality control consists of the following steps:

(i) Determination of quality standards: specification of the desired quality level in terms of weight, specific dimensions, strength, chemical composition, etc.

(ii) The design of a production system which would be compatible with the achievement of the specified quality.

(iii) Control action to ensure that established quality standards are met.

(iv) Inspection of produced products to see if the overall quality of lots satisfies the
specifications.

Thus, quality control involves inspection of raw materials, parts, gauge, tools and
finished products. It operates when materials and parts are purchased, during
manufacturing process and in case of finished products through performance testing.
Scope of quality control
There are three broad areas where quality control should be applied in manufacturing
industry.

1. Supplier quality assurance. Supplier quality assurance (SQA) is a contract with the supplier of raw materials and components. Under this agreement, the manufacturer ensures that incoming materials and parts will be of uniform and acceptable quality. It is also called incoming material control, wherein the quality of bought-out components and materials is continuously monitored and maintained. Unless the materials and parts conform to the established quality standards, the quality of finished products cannot be maintained despite the best efforts in manufacturing. Moreover, poor-quality materials lead to rejections, idle time, wastage of processing time and labour cost, and delays in supplies to customers. Therefore, effective control should be exercised on all incoming materials, components and sub-assemblies.

2. In-process control. During the processing of materials, random samples of the product are taken and their quality is measured against predetermined standards. Such tests may reveal defects in the production process, and corrective steps are taken to ensure that products of the right quality are manufactured. In-process control helps in building the desired quality into the finished product and prevents the production of sub-standard products.

A process is considered satisfactory, or under control, so long as it continues to produce products of the desired quality and specification. In-process control techniques involve evaluation of process standards in terms of rework, scrap, dimensions, rejection, etc. In-process control consists of all the procedures employed to evaluate, maintain and improve quality standards at different stages of manufacture.

3. Post-mortem inspection. This takes place after the products are manufactured or completed. It is a technique for evaluating the quality of a product and for classifying units into acceptable and rejectable categories. Inspection controls are often called quality assurance; controls applied after the products leave the plant are known as reliability controls.
TOPIC 146:
Recall that an array is a “rectangular” block of data whose entries are of the same type. The simplest form of array is the one-dimensional array: a single row of elements with each position identified by an index.
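A minimal sketch of a one-dimensional array, using a Python list of sample values to show index-based access (the contents of `scores` are made up for illustration):

```python
# A one-dimensional array: a single row of same-type elements,
# each position identified by an integer index (0-based in Python).
scores = [72, 85, 91, 60, 78]

first = scores[0]     # the entry at index 0 -> 72
third = scores[2]     # the entry at index 2 -> 91
scores[3] = 65        # overwrite the entry at index 3
length = len(scores)  # number of entries -> 5
```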

TOPIC 147 & 148:

• Another basic data structure is a list, which is a collection whose entries are arranged sequentially. The beginning of a list is called the head of the list; the other end is called the tail.

• A stack is a list in which entries are inserted and removed only at the head. An example is a stack of books, where physical restrictions dictate that all additions and deletions occur at the top. The head of a stack is called the top of the stack; the tail of a stack is called its bottom or base.

• Inserting a new entry at the top of a stack is called pushing an entry; removing an entry from the top of a stack is called popping an entry.

• A stack is therefore a last-in, first-out, or LIFO (pronounced “LIE-foe”), structure.


This LIFO characteristic means that a stack is ideal for storing items that must
be retrieved in the reverse order from which they were stored, and thus a stack
is often used as the underpinning of backtracking activities. (The term
backtracking refers to the process of backing out of a system in the opposite
order from which the system was entered. A classic example is the process of
retracing one’s steps in order to find one’s way out of a forest.)
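The push/pop behaviour and its use for backtracking can be sketched as follows (the `Stack` class and the forest-path step names are illustrative, not a prescribed implementation):

```python
class Stack:
    """A stack: insertions (push) and removals (pop) happen only at the top."""

    def __init__(self):
        self._items = []          # the end of the list acts as the top

    def push(self, item):
        self._items.append(item)  # insert at the top

    def pop(self):
        return self._items.pop()  # remove from the top (LIFO)

    def is_empty(self):
        return not self._items

# Backtracking: push each step on the way into the "forest";
# popping then retraces the path in the reverse order.
path = Stack()
for step in ["gate", "bridge", "clearing"]:
    path.push(step)

way_back = []
while not path.is_empty():
    way_back.append(path.pop())
# way_back == ["clearing", "bridge", "gate"]
```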

• A queue is a list in which the entries are removed only at the head and new
entries are inserted only at the tail. An example is a line, or queue, of people
waiting to buy tickets at a theater (Figure 104c)—the person at the head of
the queue is served while new arrivals step to the rear (or tail) of the queue.

• A queue is a first-in, first-out, or FIFO (pronounced “FIE-foe”) structure,


meaning that the entries are removed from a queue in the order in which
they were stored.
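The FIFO behaviour of a queue can be sketched with Python's standard `collections.deque` (the names in the queue are illustrative):

```python
from collections import deque

queue = deque()  # a FIFO queue: insert at the tail, remove at the head

# New arrivals step to the rear (tail) of the queue.
queue.append("alice")
queue.append("bob")
queue.append("carol")

# The person at the head is served first: entries leave
# the queue in the same order in which they arrived.
served = queue.popleft()  # -> "alice"
```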

• A tree is a collection whose entries have a hierarchical organization similar


to that of an organization chart of a typical company (Figure 105). The
president is represented at the top, with lines branching down to the vice
presidents, who are followed by regional managers, and so on. To this
intuitive definition of a tree structure we impose one additional constraint,
which (in terms of an organization chart) is that no individual in the
company reports to two different superiors. That is, different branches of the
organization do not merge at a lower level. (We have already seen examples
of trees in Chapter 6 where they appeared in the form of parse trees.)

• Each position in a tree is called a node (Figure 106). The node at the top is
called the root node (if we turned the drawing upside down, this node would
represent the base or root of the tree). The nodes at the other extreme are
called terminal nodes (or sometimes leaf nodes). We often refer to the
number of nodes in the longest path from the root to a leaf as the depth of
the tree. In other words, the depth of a tree is the number of horizontal layers
within it.
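A small sketch of a tree and its depth, using an organization-chart example like the one above (the `Node` class and the job titles are illustrative):

```python
class Node:
    """A tree node: one value plus references to its child nodes.
    Each node has at most one parent, so branches never merge."""

    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def depth(node):
    """Number of nodes on the longest path from the root to a leaf."""
    if not node.children:  # a terminal (leaf) node
        return 1
    return 1 + max(depth(child) for child in node.children)

# An organization-chart-style tree: president -> VPs -> regional manager.
tree = Node("president", [
    Node("vp_sales", [Node("regional_manager")]),
    Node("vp_engineering"),
])
# depth(tree) == 3: the layers president, vp_sales, regional_manager
```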

• A pointer is a storage area that contains an encoded memory address. In the case of data structures, pointers are used to record the location where data items are stored.
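Python has no raw pointers, but object references play the same role. As a sketch, a linked list uses a pointer-like `next` field in each node to record where the following data item is stored (the `ListNode` class is illustrative):

```python
class ListNode:
    """A linked-list node: the 'next' field plays the role of a pointer,
    recording where the following data item is stored."""

    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

# Build the list 1 -> 2 -> 3 by pointing each node at its successor.
head = ListNode(1, ListNode(2, ListNode(3)))

# Follow the pointers from the head to the end, collecting the values.
values = []
node = head
while node is not None:
    values.append(node.value)
    node = node.next
# values == [1, 2, 3]
```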

• A database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system (DBMS). The data can then be easily accessed, managed, modified, updated, controlled, and organized.

• A flat-file database is a database stored in a file called a flat file. Records


follow a uniform format, and there are no structures for indexing or
recognizing relationships between records. The file is simple. A flat file can
be a plain text file, or a binary file.

• A database schema is the skeleton structure that represents the logical view of the entire database. It defines how the data is organized and how the relations among the data are associated, and it formulates all the constraints that are to be applied on the data. A database schema defines its entities and the relationships among them. It contains a descriptive detail of the database, which can be depicted by means of schema diagrams. It is the database designers who design the schema to help programmers understand the database and make it useful.
• A Database Management System (DBMS) is software designed to store,
retrieve, define, and manage data in a database.

• DBMS software primarily functions as an interface between the end user and
the database, simultaneously managing the data, the database engine, and the
database schema in order to facilitate the organization and manipulation of
data.

• A relational database is a type of database that stores and provides access to data points that are related to one another. Relational databases are based on the relational model, an intuitive, straightforward way of representing data in tables. In a relational database, each row in a table is a record with a unique ID called the key. The columns of the table hold attributes of the data, and each record usually has a value for each attribute, making it easy to establish the relationships among data points.

• SELECT (σ)
The SELECT operation is used for selecting a subset of the tuples according to a given selection condition. The sigma (σ) symbol denotes it. It is used as an expression to choose tuples which meet the selection condition; the select operator selects tuples that satisfy a given predicate.

• Join Operations
A join operation is essentially a Cartesian product followed by a selection criterion. The join operation is denoted by ⋈. The JOIN operation also allows joining variously related tuples from different relations.

• The project operator is denoted by the ∏ symbol and is used to select desired columns (or attributes) from a table (or relation).

• A join in a DBMS is a binary operation which allows you to combine product and selection in one single statement. The goal of creating a join condition is that it helps you combine the data from two or more DBMS tables. The tables in a DBMS are associated using primary keys and foreign keys.
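The SELECT (σ), PROJECT (∏), and JOIN (⋈) operations can be sketched in SQL via Python's standard `sqlite3` module. The `departments`/`employees` schema and its rows are hypothetical, chosen only to show a primary key/foreign key relationship:

```python
import sqlite3

# An in-memory database with two related tables (hypothetical schema):
# departments.id is a primary key; employees.dept_id is a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
    INSERT INTO departments VALUES (1, 'Sales'), (2, 'Engineering');
    INSERT INTO employees VALUES (1, 'Alice', 2), (2, 'Bob', 1);
""")

# SELECT (σ): choose the tuples that satisfy a predicate.
rows = conn.execute("SELECT * FROM employees WHERE dept_id = 2").fetchall()

# PROJECT (∏): keep only the desired columns.
names = conn.execute("SELECT name FROM employees").fetchall()

# JOIN (⋈): combine related tuples via the primary/foreign keys.
joined = conn.execute("""
    SELECT e.name, d.name FROM employees e
    JOIN departments d ON e.dept_id = d.id
""").fetchall()
conn.close()
```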

• An object-oriented database (OOD) is a database system that can work with complex data objects, that is, objects that mirror those used in object-oriented programming languages. In object-oriented programming, everything is an object, and many objects are quite complex, having different properties and methods.

COMMIT vs. ROLLBACK

• COMMIT permanently saves the changes made by the current transaction; ROLLBACK undoes the changes made by the current transaction.

• A transaction cannot undo changes after COMMIT execution; a transaction reaches its previous state after ROLLBACK.

• COMMIT is applied when a transaction is successful; ROLLBACK occurs when a transaction is aborted.
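Both behaviours can be sketched with Python's standard `sqlite3` module. The `accounts` table and its values are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()    # COMMIT: the insert is now permanent

conn.execute("UPDATE accounts SET balance = 0 WHERE name = 'alice'")
conn.rollback()  # ROLLBACK: undo the uncommitted update

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
# balance == 100: the rolled-back update never took effect
conn.close()
```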
• Lock-based protocols in a DBMS are a mechanism in which a transaction cannot read or write a data item until it acquires an appropriate lock. Lock-based protocols help to eliminate the concurrency problem in a DBMS for simultaneous transactions by locking or isolating a particular transaction to a single user.

• 1. Shared Lock (S): A shared lock is also called a read-only lock. With a shared lock, the data item can be shared between transactions, because a shared lock never grants permission to update the data item.

• 2. Exclusive Lock (X): With an exclusive lock, a data item can be read as well as written. This lock is exclusive and cannot be held concurrently on the same data item. An X-lock is requested using the lock-X instruction. Transactions may unlock the data item after finishing the write operation.

The Two-Phase Locking protocol allows each transaction to make lock and unlock requests in two phases:

• Growing phase: in this phase, a transaction may obtain locks but may not release any locks.
• Shrinking phase: in this phase, a transaction may release locks but may not obtain any new locks.
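The two phases can be sketched with Python's `threading.Lock`. The data items A and B and the `transfer` transaction are hypothetical; the point is only the ordering of acquires and releases:

```python
import threading

lock_a = threading.Lock()  # lock protecting data item A
lock_b = threading.Lock()  # lock protecting data item B

def transfer():
    """A transaction touching A and B, following two-phase locking."""
    # Growing phase: acquire every needed lock; release nothing yet.
    lock_a.acquire()
    lock_b.acquire()
    try:
        pass  # read/write A and B while both locks are held
    finally:
        # Shrinking phase: release locks; no new locks may be acquired.
        lock_b.release()
        lock_a.release()

transfer()
```

Because no lock is released before all locks have been obtained, interleaved transactions that follow this discipline are guaranteed to be serializable.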

• A sequential file contains records organized by the order in which they were entered, and the order of the records is fixed. Records in sequential files can be read or written only sequentially. After you place a record into a sequential file, you cannot shorten, lengthen, or delete it; however, you can update (REWRITE) a record if its length does not change. New records are added at the end of the file. If the order in which you keep records in a file is not important, sequential organization is a good choice whether there are many records or only a few. Sequential output is also useful for printing reports.

• ISAM (an acronym for indexed sequential access method) is a method for creating, maintaining, and manipulating computer files of data so that records can be retrieved sequentially or randomly by one or more keys.

• A hash is a fixed-size numerical representation of data produced by a mathematical algorithm, and it is not easy for a human to interpret. In a hash file, a hash function applied to a record’s key determines where the record is stored, so records can be retrieved directly rather than by a sequential search. Note that hashing is one-way: unlike encryption, a hash cannot be converted back into the original data with a key.
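A toy sketch of hash-file storage: records are scattered into buckets by a hash of their key, so a lookup searches only one bucket. The bucket count, the hash function, and the sample records are all invented for illustration:

```python
# A toy hash file: a hash function on the record's key chooses the
# bucket where the record is stored, so lookups need no sequential scan.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def bucket_for(key):
    """Map a key to a bucket number via a simple (illustrative) hash."""
    return sum(ord(ch) for ch in key) % NUM_BUCKETS

def store(key, record):
    buckets[bucket_for(key)].append((key, record))

def lookup(key):
    # Only the one bucket the key hashes to needs to be searched.
    for k, record in buckets[bucket_for(key)]:
        if k == key:
            return record
    return None

store("alice", {"balance": 100})
store("bob", {"balance": 50})
# lookup("alice") -> {"balance": 100}
```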

• Data mining is a process used to identify trends and patterns between different sets of data in large databases. Selecting the right data from such large amounts of data (called big data) can help show trends and patterns between data sets, which can improve decision making dramatically.

• Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

• An intelligent agent (IA) is an entity that makes decisions, enabling artificial intelligence to be put into action.
• The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions.

• Syntactic analysis, also referred to as syntax analysis or parsing, is the process of analyzing natural language with the rules of a formal grammar. Grammatical rules are applied to categories and groups of words, not individual words. Syntactic analysis basically assigns a syntactic structure to text.

• Semantic analysis is the task of ensuring that the declarations and statements of a program are semantically correct, i.e., that their meaning is clear and consistent with the way in which control structures and data types are supposed to be used.

• Contextual analysis (CA) is a holistic view of a context: the whole environment in which programmes operate. The environment spans all of the policies, institutions and processes, including the private sector, the demographics, and the social, cultural, environmental and economic aspects of life in a specific area.

• CS Impact: read about the impact of CS on society.
