Computer Application
UNIT-1
Define Computer:
A computer is a machine or device that performs processes, calculations,
and operations based on instructions provided by software. It has the
ability to accept data (input), process it, and then produce output.
Computer Components:
1. Scanner: An input device that can convert the contents of a paper document
into a digital image that can be stored in the computer.
2. CPU (Central Processing Unit) or the processor: The "brain" of the computer
where programs are run. It is one of the most expensive parts of the hardware.
Modern CPUs can perform multiple tasks simultaneously.
3. RAM (Random Access Memory): The computer's high-speed, short-term
memory. It temporarily stores data and instructions for programs that run on the
computer.
4. Expansion Cards: Circuit boards that can be inserted to add functionality to a
computer system (for example: network, sound, or video cards).
5. Power Supply: Converts electricity from the wall into the form that the other
computer components use.
6. Optical Drive: An input/output device that reads data from and writes data to
CDs and DVDs.
7. Hard Drive: An input/output device that serves as the long-term storage
memory of the computer. There are two primary kinds: mechanical drives that
use a mechanical arm to read and write data on a rotating disk, and "solid state"
drives that have no moving parts.
8. Motherboard: A circuit board that holds and connects various components of
the computer and allows their communication.
9. Speaker: An output device that outputs sound from the computer.
10. Monitor: An output device that displays information visually.
Touchscreen monitors combine the functions of output and input.
11. Keyboard: An input device on which the user can type to communicate with
the computer.
12. Mouse: An input device that allows the user to interact with visual objects
displayed on the monitor.
13. External Hard Drive: An input/output device that serves as an extra hard drive
used for additional or backup storage.
14. Printer: An output device that can transfer digital data onto paper.
Control Unit: The control unit is the circuitry that uses electrical signals to direct the
computer system to execute stored instructions. It takes instructions from memory, decodes
them, and executes them, and in this way controls and coordinates the functioning of all
parts of the computer. The control unit's main task is to maintain and regulate the flow of
information across the processor. It does not itself take part in processing or storing data.
ALU: The arithmetic logic unit performs arithmetic and logical functions.
Arithmetic functions include addition, subtraction, multiplication, division, and
comparisons. Logical functions mainly include selecting, comparing, and merging
data. A CPU may contain more than one ALU. Furthermore, ALUs can be used for
maintaining the timers that help run the computer.
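As a rough illustration (added to these notes, not part of the original), the operation categories an ALU implements correspond directly to the arithmetic and logical operators of a programming language. A minimal Python sketch:

# Illustrative sketch: the kinds of operations an ALU performs,
# expressed as ordinary Python operations.
a, b = 12, 5

# Arithmetic functions
print(a + b)   # addition        -> 17
print(a - b)   # subtraction     -> 7
print(a * b)   # multiplication  -> 60
print(a / b)   # division        -> 2.4
print(a > b)   # comparison      -> True

# Logical (bitwise) functions operate on the binary form of the data
print(a & b)   # AND: 1100 & 0101 = 0100 -> 4
print(a | b)   # OR:  1100 | 0101 = 1101 -> 13
print(a ^ b)   # XOR: 1100 ^ 0101 = 1001 -> 9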
A storage device is any type of computing hardware that is used for storing,
porting or extracting data files and objects. Storage devices can hold and store
information both temporarily and permanently.
Measurement of Memory:
The memory required to store a word formed of 8 bits of data/information
is called a byte.
Normally, one byte of memory is required to store one character.
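As a worked illustration (a sketch added to these notes, using the conventional binary units where 1 KB = 1024 bytes), the relationship between bits, bytes, and characters can be computed directly:

# Memory unit arithmetic: 8 bits make one byte, and one byte
# stores one character of plain (ASCII) text.
BITS_PER_BYTE = 8

text = "HELLO"
print(len(text))                   # 5 characters -> 5 bytes
print(len(text) * BITS_PER_BYTE)   # -> 40 bits

KB = 1024          # bytes
MB = 1024 * KB
GB = 1024 * MB
print(f"1 GB = {GB:,} bytes")      # 1 GB = 1,073,741,824 bytes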
INPUT DEVICES:
Some commonly used input devices are listed below:
1. Keyboard
2. Mouse
3. Optical Scanner
4. Magnetic Card Reader
Computer Generation:
First Generation Characteristics:
The main electronic component of first-generation computers was the vacuum tube.
They operated in machine language.
Their primary memories were magnetic tapes and magnetic drums.
They employed paper tape and punched cards as input/output devices.
Second Generation Characteristics:
The main electronic component of second-generation computers was the transistor.
Third Generation Characteristics:
The main electronic component of third-generation computers was the integrated circuit.
They operated in high-level languages.
Their primary memories were the large magnetic core and magnetic tape/disk.
Their input/output devices were magnetic tape, monitor, keyboard, printer, etc.
Compilers or interpreters were used here.
Limitations of Computer:
A computer has no intelligence of its own; it can only follow the instructions given to it.
It has no feelings or emotions, and it cannot work without electric power and user input.
Characteristics of Computer:
1. Speed
Speed is one of the most important characteristics of a computer system. Computers are
much faster than humans: a computer can solve a mathematical problem in milliseconds
or even nanoseconds, whereas a human performing the same calculation would take far
longer.
2. Accuracy
A computer is accurate as well as fast. Its accuracy, however, depends on the program
written by humans: if the program is correct, the computer provides results accurately,
without any error.
3. Reliability
If the program given to the computer is accurate, then all the output is accurate and
reliable, since the computer produces consistent results. The computer performs processes
automatically; if the algorithm is set properly, it gives accurate and reliable results.
4. Consistency
Computers are consistent, which means they provide the same result every time for the
same input. They can perform a vast number of operations without errors, returning
results within milliseconds.
5. Versatility
Versatility refers to the capability of a computer to perform many different types of
tasks. The same computer can be used, for example, to prepare payroll, bills, invoices,
and much more.
6. Diligence
Computers can perform trillions of calculations with the same accuracy and the same
consistency. A computer does not get tired or lose concentration, no matter how many
calculations or tasks it has to complete; performing tasks is all it does.
Classification of Computer:
1. Analog Computer:
Analog computers may be used in scientific and industrial applications, for example
to measure electric current, frequency, or the resistance of a capacitor.
Analog computers directly accept data from the measuring device without first
converting it into codes and numbers.
Examples of analog quantities are temperature, pressure, telephone-line signals,
speedometer readings, capacitance, signal frequency, and voltage.
2. Digital Computer:
The digital computer is the most widely used type; it processes data as numbers,
usually using the binary number system.
A digital computer is intended to perform calculations and logical operations at high
speed. It takes raw data as digits or numbers and processes it using programs stored
in its memory to produce output. All modern computers, such as the laptops and
desktops we use at the office or at home, are digital computers.
It works on data such as magnitudes, letters, and symbols, expressed in binary
code, i.e., with just the two digits 1 and 0, by counting, comparing, and manipulating
those digits or their combinations according to a set of instructions stored in its
memory.
New features can also be added to digital systems easily.
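Counting and comparing with just the digits 0 and 1 can be illustrated with a few lines of Python (an example added to these notes for clarity):

# Everything a digital computer stores is a pattern of 0s and 1s.
n = 13
print(bin(n))                   # '0b1101' -- 13 written in binary
print(int("1101", 2))           # 13      -- binary converted back to decimal

# Characters are stored as numeric codes, one byte each (ASCII):
print(ord("A"))                 # 65
print(format(ord("A"), "08b"))  # '01000001' -- the 8-bit pattern for 'A'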
a. Supercomputer:
It has the ability to decrypt passwords, which is used to enhance protection for security
purposes.
It is used for virtual testing of nuclear weapons and for critical medical tests.
It can study and understand climate patterns and forecast weather conditions; NOAA
(the National Oceanic and Atmospheric Administration) runs supercomputers that
process enormous volumes of observational data.
It helps in designing flight simulators for training pilots at the beginner level.
It is also used in smog control systems, where it predicts the level of fog and other
pollutants in the atmosphere.
b. Mainframe Computer :
Mainframe computers are designed to support hundreds or thousands of
users simultaneously. They can support multiple programs at the same
time. It means they can execute different processes simultaneously. These
features of mainframe computers make them ideal for big organizations
like banking and telecom sectors, which need to manage and process high
volumes of data.
It has a very long life. It can run smoothly for up to 50 years after proper
installation.
It has the ability to share or distribute its workload among other processors and
input/output terminals.
It has the ability to protect the stored data and other ongoing exchange of
information and data.
c. Mini Computer :
A minicomputer is a mid-sized computer, more powerful than a microcomputer but
less powerful than a mainframe. It can serve multiple users simultaneously and is
typically used by small businesses and departments.
d. Workstation:
It has a larger storage capacity, better graphics, and a more powerful CPU than a
personal computer.
It can handle animation, data analysis, CAD, and audio and video creation and
editing.
e. Microcomputer:
It is designed for personal work and applications. Only one user can work at a
time.
It does not require the user to have special skills or training to use it.
3. Hybrid Computer:
A hybrid computer combines aspects of a digital computer and an analog
computer. It is fast like an analog computer and has memory and precision
like a digital computer. It is intended to incorporate a working analog unit
that is efficient for calculations, together with readily accessible digital
memory.
Hybrid computers are used in hospitals, for example, to measure a patient's
heartbeat.
UNIT-2
Computer Memory
It is used to store data and instructions. Computer memory is the storage space in the computer,
where the data to be processed and the instructions required for processing are stored. The
memory is divided into a large number of small parts called cells. Each location or cell has a
unique address.
1. Primary memory
Primary memory holds only those data and instructions on which the computer is currently working.
It has a limited capacity, and data is lost when power is switched off. It is generally made up of
semiconductor devices. These memories are not as fast as registers. The data and instructions
required to be processed reside in the main memory.
a. RAM:
RAM (Random Access Memory) is the internal memory of the CPU for storing data, programs, and
program results. It is a read/write memory which stores data while the machine is working. As soon
as the machine is switched off, the data is erased.
RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is a power
failure.
RAM is of two types − Static RAM (SRAM) and Dynamic RAM (DRAM).
b. ROM
ROM stands for Read Only Memory. It is memory from which we can only read; we cannot
write to it. This type of memory is non-volatile. The information is stored permanently in such
memories during manufacture. A ROM stores the instructions that are required to start a
computer.
MROM (Masked ROM)
The very first ROMs were hard-wired devices that contained a pre-programmed set of data or
instructions. These kinds of ROMs are known as masked ROMs, which are inexpensive.
2. Secondary memory
This type of memory is also known as external or non-volatile memory. It is slower than the main
memory. It is used for storing data/information permanently. The CPU does not access these
memories directly; instead, they are accessed via input/output routines. The contents of secondary
memories are first transferred to the main memory, and then the CPU can access them. Examples:
disk, CD-ROM, DVD, etc.
3. Cache memory
Cache memory is a very high speed semiconductor memory which can speed up the CPU. It acts
as a buffer between the CPU and the main memory. It is used to hold those parts of data and
program which are most frequently used by the CPU.
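The same buffering idea appears in software. As a loose analogy only (an addition to these notes, not a description of hardware cache circuitry), Python's functools.lru_cache keeps the most frequently used results close at hand so they need not be recomputed:

from functools import lru_cache

@lru_cache(maxsize=128)          # keep up to 128 recent results readily available
def expensive_lookup(n: int) -> int:
    print(f"computing {n} ...")  # printed only on a cache miss
    return n * n

expensive_lookup(7)   # miss: computed and stored
expensive_lookup(7)   # hit: returned instantly from the cache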
Computer bus
A computer bus is an electrically conducting path along which data is transmitted inside a
digital electronic device. It consists of a set of parallel conductors, which may be
conventional wires, copper tracks on a printed circuit board, or microscopic aluminium
trails on the surface of a silicon chip. A bus with eight wires can carry only 8-bit data
words, and hence defines the device as an 8-bit device. A computer bus normally has a
single-word memory circuit called a latch attached to either end, which briefly stores the
word being transmitted and ensures that each bit has settled to its intended state before its
value is transmitted onward.
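The arithmetic behind bus width is simple: an n-line data bus moves n bits per transfer, and an n-line address bus can select 2^n distinct memory locations. A small sketch added to these notes, with illustrative numbers:

# n address lines can select 2**n distinct memory addresses.
def addressable_bytes(address_lines: int) -> int:
    return 2 ** address_lines

print(addressable_bytes(16))   # 65,536 bytes (64 KB), as on classic 8-bit machines
print(addressable_bytes(32))   # 4,294,967,296 bytes (4 GB), as on 32-bit systems

# An 8-wire data bus carries one 8-bit word per transfer,
# so moving a 16-bit value takes two bus cycles.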
a) Data Bus:
i. The data bus allows data to travel back and forth between the microprocessor (CPU)
and memory (RAM).
ii. The data bus carries the data.
iii. The data bus is a bidirectional bus.
iv. The data bus fetches instructions from memory.
v. The data bus is used to store the results of instructions into memory.
vi. The data bus carries commands to an I/O device controller or port.
vii. The data bus carries data from a device controller or port.
viii. The data bus issues data to a device controller or port.
b) Address Bus:
i. The address bus carries information about the location of data in memory.
ii. The address bus carries the memory address while reading from or writing into memory.
iii. The address bus carries the I/O port address or device address of an I/O port.
iv. With a unidirectional address bus, only the CPU could send addresses; other units could
not address the microprocessor.
v. Nowadays computers have a bidirectional address bus.
c) Control Bus :
i. The control bus carries the control signals that make sure everything is flowing
smoothly from place to place.
ii. Memory Read: This signal is issued by the CPU or DMA controller when performing
a read operation on the memory.
iii. Memory Write: This signal is issued by the CPU or DMA controller when performing
a write operation on the memory.
iv. I/O Read: This signal is issued by the CPU when it is reading from an input port.
v. I/O Write: This signal is issued by the CPU when writing into an output port.
vi. Ready: Ready is an input signal to the CPU used to synchronize the slow memory or
I/O ports with the fast CPU.
AGP
AGP (Accelerated Graphics Port) is a high-speed point-to-point channel used to attach a
graphics card to a computer's motherboard, primarily to accelerate 3D graphics.
Advantages:
1. Texture maps of countless sizes, levels of detail, and realism can be used.
2. 3D applications function more quickly when there is no longer a need to pre-fetch
and cache textures in local video memory, giving more frames per second (up to 12.6
times more).
3. AGP assists OEMs in keeping costs under control for new PC designs by reducing the
requirement for video RAM.
4. Video traffic will flow smoothly to the user's screen across the AGP bus.
5. Systems will operate more steadily by removing graphics and video traffic from the
PCI bus.
Characteristics:
1. It offers pipelining and sideband addressing as two ways for the graphics card to
access texture maps stored in system memory directly.
USB
USB, short for Universal Serial Bus, is a common computer port that allows
communication between a computer and peripheral or other devices. It is the most
common interface used in today's computers and can be used to connect printers,
scanners, keyboards, mice, game controllers, digital cameras, external hard drives, and
flash drives.
There are many types of USB connectors, but Type A and Type B are the two major
types.
In modern times, many different USB devices can connect with the computer. Some
common ones are as follows:
1. Keyboard
2. Smartphone
3. Tablet
4. Webcams
5. Keypad
6. Microphone
7. Mouse
8. Joystick
9. Scanner
10. Printer
FSB
FSB (front-side bus) is also known as the processor bus, memory bus, or system bus; it
connects the CPU (via the chipset) with the main memory and L2 cache. FSB speeds range
from 66 MHz, 100 MHz, 133 MHz, and 266 MHz to 400 MHz and beyond. The FSB is an
important consideration when purchasing a computer motherboard or a new computer.
The FSB speed can be set either in the system BIOS or with jumpers on the computer
motherboard. The FSB connects the system memory, input/output (I/O) peripherals, and
other board components to the CPU and acts as the main transport link for data around the
computer hardware.
Wireless Connectivity
This addresses a variety of topics associated with wireless connectivity: everything from Wi-
Fi, Wi-Fi routers, and repeaters through to other forms of wireless connectivity including
Bluetooth, LoRa, NFC, and many more. With the technology for smart homes and cities
becoming more commonplace, these technologies are being used increasingly.
Wireless networking has seen an increase in popularity because it is easy to connect a node to
a network. Many different types of device, such as laptops, tablets, smart phones, interactive
TVs, media centers, games consoles and security cameras, can easily connect to a network
when needed, without having to run a cable to each device.
Wired Connectivity
Although wireless technologies like Wi-Fi are widely used, wired connectivity remains
important. Ethernet is one such example, as it is used for many computer connections.
Wired connectivity also covers areas such as USB, serial communications, and networking
solutions like NFV and SDN.
Wired networking is still widely used in businesses or schools where devices such as desktop
computers are unlikely to need to be relocated very often. Tasks that require large amounts of
data to be accessed from servers - such as commercial video editing - are likely to be quicker
using a wired network as the bandwidth available to each connected device is much larger.
Touch screen
Touch screen technology is a direct-manipulation, gesture-based technology. Direct
manipulation is the ability to manipulate the digital world inside a screen. A touch screen is
an electronic visual display capable of detecting and locating a touch over its display area,
generally made by touching the display of the device with a finger or hand. The technology
is most widely used in computers, user-interactive machines, smartphones, tablets, etc., to
replace most functions of the mouse and keyboard. A touch screen is the assembly of a
touch panel and a display device. Generally, a touch panel is layered over an electronic
visual display within a processing system. Here the display is an LCD or OLED, and the
system is normally a smartphone, tablet, or laptop.
Flash memory
Flash memory, also known as flash storage, is a type of nonvolatile memory that erases data
in units called blocks and rewrites data at the byte level. Flash memory is widely used for
storage and data transfer in consumer devices, enterprise systems and industrial applications.
Flash memory retains data for an extended period of time, regardless of whether a flash-
equipped device is powered on or off.
Flash memory is used in enterprise data center server, storage, and networking technology,
as well as in a wide range of consumer devices, including USB flash drives (also known as
memory sticks), SD cards, mobile phones, digital cameras, tablet computers, and PC cards
in notebook computers and embedded controllers.
DVD
DVD stands for Digital Versatile Disc. It is commonly known as Digital Video Disc. It is a
digital optical disc storage format used to store high-capacity data like high-quality videos
and movies. It is also used to store the operating system. It was invented and developed by
four companies named Philips, Sony, Toshiba, and Panasonic in 1995. DVDs provide higher
storage capacity than CDs (compact discs) and can be played in multiple types of players,
such as DVD players.
UNIT-3
Operating System
An Operating System (OS) is an interface between a computer user and computer
hardware and controls the execution of all kinds of programs. Every computer system
must have at least one operating system to run other programs. Popular
operating systems include Linux and Microsoft Windows.
Batch Operating System:
i. The user submits a job (written on cards or tape) to a computer operator.
ii. The computer operator places a batch of several jobs on an input device.
iii. A special program, the monitor, manages the execution of each program in the
batch.
iv. The monitor is always in the main memory and available for execution.
Disadvantages of Batch Operating System:
• It is very difficult to guess or know the time required for any job to complete, and
hard for the processors of batch systems to know how long a job will take while it is
in the queue.
• It is sometimes costly.
• The other jobs will have to wait for an unknown time if any job fails.
Multiprogramming Operating System:
In a multiprogramming operating system, if a program has to wait for an I/O transfer,
other programs are ready to use the CPU. In this way, multiple jobs can share CPU time.
However, a multiprogramming operating system does not guarantee that jobs will execute
within any predefined time frame.
i. This set of jobs is a subset of the jobs kept in the job pool.
ii. The operating system picks up and begins executing one of the jobs in memory.
iii. The multiprogramming operating system monitors the status of all active programs
and system resources using memory management programs, to ensure that the CPU is
never idle unless there are no jobs to process.
Disadvantages of Multiprogramming Operating System:
• Memory management is required because all the jobs are stored in the main
memory.
• If the system carries a massive load of jobs, then long jobs will face long waiting
times.
Multitasking Operating System:
A multitasking operating system allows several programs to run at the same time. For
example, an editing task can be performed while other programs are executing
concurrently; likewise, a user can open Gmail and PowerPoint at the same time.
Advantages of Multitasking Operating System:
• Time Shareable
• Secured Memory
• Background Processing
• Good Reliability
Disadvantages of Multitasking Operating System:
• Memory Boundation
• Processor Boundation
• CPU Heat up
Advantages of Multiprocessor Operating System:
• Great Reliability
• Improved Throughput
• Cost-Effective System
• Parallel Processing
Disadvantages of Multiprocessor Operating System:
• A multiprocessor system is complicated in nature, in terms of both hardware and
software.
• It is more expensive due to its large architecture.
• Scheduling processes is a daunting task in a multiprocessor operating system
because resources are shared.
• A multiprocessor system needs a large memory, as memory is shared among the
processors and other resources.
• Its speed can degrade if any one processor fails.
• There is a time delay between a processor receiving a message and taking the
appropriate action.
• Skew and determinism are big challenges.
• It needs context switching, which can impact performance.
Real-Time Operating System:
These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small; this interval is called the response time. Real-time
systems are used when the timing requirements are very strict, as in missile systems, air
traffic control systems, robots, etc.
Network Operating System:
A network operating system runs on a server and manages data, users, groups, security,
applications, and other networking functions. Examples: Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.
Time Sharing Operating System:
Each task is given some time to execute so that all tasks work smoothly. Each user gets
a share of CPU time, as they all use a single system. These systems are also known as
multitasking systems. The tasks can come from a single user or from different users. The
time each task gets to execute is called a quantum. After this time interval is over, the
OS switches to the next task.
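A minimal sketch of this round-robin idea in Python (an illustration added to these notes; the task names and the 3-unit quantum are invented):

from collections import deque

def round_robin(tasks, quantum):
    """Each task is a (name, remaining_time) pair; a turn runs at most `quantum` units."""
    queue = deque(tasks)
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        print(f"{name} runs for {ran} unit(s)")
        if remaining > ran:                      # not finished: back of the queue
            queue.append((name, remaining - ran))

round_robin([("user1", 5), ("user2", 3), ("user3", 7)], quantum=3)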
Advantages Of Time Sharing Operating System:
• Each task gets an equal opportunity
• Less chances of duplication of software
• CPU idle time can be reduced
Disadvantages Of Time Sharing Operating System
• Reliability problem.
• One must have to take care of security and integrity of user programs and data.
• Data communication problem.
Functions of an Operating System:
• Security.
• Control over System Performance.
• Job Accounting.
• Error Detecting Aids.
• Coordination between Users and Other Software.
• Memory Management.
• Process Management.
• Device Management.
Advantages of Operating System:
Computing Source − The OS acts as an interface between the user and the hardware. It
allows users to perform different tasks such as inputting data, processing operations, and
accessing output. With the help of an operating system, users can communicate with
computers to perform various functions such as arithmetic calculations.
User-Friendly Interface − When the Windows operating system arrived with its
Graphical User Interface (GUI), computers became far more user-friendly. A GUI helps
users quickly understand, interact, and communicate with computer machines.
Resource Sharing − Operating systems allow resource sharing. They share data and
information with other users with the help of printers, modems, and fax machines. With
the help of networks, we are able to share information and data via mail, and different
apps, images, and media files can be transferred from a PC to other devices with the
help of an operating system.
No Coding Lines − After the invention of the GUI, operating systems allow users to
access the hardware without writing programs.
Safeguard of Data − We are able to store more information on computers and to access
that information with the help of the OS, which manages the data safely and securely.
Software Update − An operating system requires updates so that it can meet users'
day-to-day requirements; the operating system updates its software without much
complexity.
Multitasking − An operating system can handle more than one task simultaneously.
Disadvantages of Operating System:
Expensive − Compared to platforms like Linux, some operating systems are costly.
Users can use a free OS, but free systems are generally a bit more difficult to run than
others. The Microsoft Windows operating system, with its GUI and other built-in
features, carries a high price.
System Failure − If the central operating system fails, the whole system is affected and
the computer will not work. The operating system is the heart of the computer system;
without it, the system cannot function. If the central system crashes, all communication
is halted and there is no further processing of data.
Highly Complex − Operating systems are highly complex, and the language used to
build them is not always clear and well defined.
Virus Threats − Operating systems are open to virus attacks. Many users download
malicious software packages onto their systems, which halts the functioning of the OS
and slows it down.
Fragmentation − Fragmentation is the state in which storage memory breaks into
small pieces. Internal fragmentation occurs when the memory allocated to a process is
larger than the memory the process actually requires. External fragmentation occurs
when free memory is split into small, non-contiguous pieces as processes are removed.
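A simple worked illustration (numbers invented for this example): if memory is allocated in fixed 4 KB blocks and a process needs 10 KB, it receives three blocks (12 KB), and the unused 2 KB inside the last block is internal fragmentation. If 30 KB of memory is free but only in scattered 2 KB pieces, a new 10 KB process cannot be loaded even though enough total memory is free; that is external fragmentation.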
2. Process Management
The process management component manages the multiple processes running
simultaneously on the OS. It keeps track of all running processes, makes sure they run
efficiently, manages the memory allocated to them, and shuts them down when required.
A process executes its instructions in sequence, with at least one instruction executing
on behalf of the process at any point in its run.
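As a small illustration of the OS creating, scheduling, and reclaiming processes (a sketch added to these notes; the worker function and job labels are hypothetical), Python's multiprocessing module asks the operating system to do exactly this:

from multiprocessing import Process
import os

def worker(label):
    # Each worker runs as a separate OS-managed process with its own PID.
    print(f"{label} running in process {os.getpid()}")

if __name__ == "__main__":
    procs = [Process(target=worker, args=(f"job-{i}",)) for i in range(3)]
    for p in procs:
        p.start()    # the OS allocates resources and schedules the new process
    for p in procs:
        p.join()     # the OS reclaims resources when the process exits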
4. Network Management
Network management administers and manages computer networks. Its services include
performance management, fault analysis, network provisioning, and service quality
management. Following are the features of network management:
It offers user access to the various resources that the network shares.
We can access shared resources, which helps speed up computation and offers data
availability and reliability.
We can access different computing resources that vary in size and function like
microprocessors, minicomputers, and many general-purpose computer systems with the
help of distributed systems.
5. Main-Memory Management
Main memory comprises a large array of storage bytes, where each byte has its own
address. Memory management is conducted through a sequence of reads and writes of
specific memory addresses. To execute a program, we map its addresses to absolute
addresses and load it into memory.
6. Secondary-Storage Management
Programs access data in the main memory during execution. The main memory is too
small to store all the data and programs permanently. Thus, secondary storage acts as a
backup to the main memory. Assemblers and compilers are stored on the disk until they
are loaded into memory, and the disk is also used for working storage during processing.
Secondary-storage management involves:
Allocates storage
Manages free space
Disk scheduling
7. Security Management
It is necessary to protect the processes from each other’s activities. Security management
ensures that the operating files, memory, CPU, and other hardware resources have proper
authorization from the OS. No process can do its own I/O; this maintains the integrity
of peripheral devices.
Software:
Software is a set of instructions, data or programs used to operate computers and execute
specific tasks. It is the opposite of hardware, which describes the physical aspects of a
computer. Software is a generic term used to refer to applications, scripts and programs
that run on a device.
The two main categories of software are:
application software
system software.
1. Application software. The most common type of software, application software is a
computer software package that performs a specific function for a user, or in some
cases, for another application. An application can be self-contained, or it can be a
group of programs that run the application for the user. Examples of modern
applications include office suites, graphics software, databases and database
management programs, web browsers, word processors, software development tools,
image editors and communication platforms.
2. System software. These software programs are designed to run a computer's
application programs and hardware. System software coordinates the activities and
functions of the hardware and software. In addition, it controls the operations of the
computer hardware and provides an environment or platform for all the other types of
software to work in. The OS is the best example of system software; it manages all
the other computer programs. Other examples of system software include
the firmware, computer language translators and system utilities.
SDLC
Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop, and test high-quality software. The SDLC aims to produce high-quality
software that meets or exceeds customer expectations and reaches completion within time
and cost estimates.
Stage 4: Development
In this phase of the SDLC, the actual development begins and the code is written,
implementing the design. Developers have to follow the coding guidelines defined by
their management, and programming tools like compilers, interpreters, and debuggers
are used to develop and implement the code.
Stage 5: Testing
After the code is generated, it is tested against the requirements to make sure that the
product solves the needs addressed and gathered during the requirements stage. During
this stage, unit testing, integration testing, system testing, and acceptance testing are done.
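For instance, a unit test checks one small piece of code against its requirement. A minimal sketch using Python's built-in unittest module (the add function is a made-up example, not from any real project):

import unittest

def add(a, b):
    """The unit under test: a deliberately simple example function."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)     # requirement: correct sum

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)    # boundary case

if __name__ == "__main__":
    unittest.main()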
Stage 6: Deployment
Once the software is certified and no bugs or errors are reported, it is deployed. Then,
based on the assessment, the software may be released as it is or with suggested
enhancements.
Stage 7: Maintenance
Once the client starts using the developed system, real issues come up and need to be
solved from time to time. This process, in which care is taken of the developed product,
is known as maintenance.
SDLC Model
Waterfall Model
The waterfall is a universally accepted SDLC model. In this method, the whole process of
software development is divided into various phases.
System Design − The requirement specifications from the first phase are studied in this
phase and the system design is prepared. This system design helps in specifying
hardware and system requirements and helps in defining the overall system
architecture.
Implementation − With inputs from the system design, the system is first developed
in small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality, which is referred to as Unit Testing.
Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is done, the
product is deployed in the customer environment or released into the market.
Maintenance − There are some issues which come up in the client environment. To
fix those issues, patches are released. Also to enhance the product some better
versions are released. Maintenance is done to deliver these changes in the customer
environment.
RAD Model
RAD (Rapid Application Development) is an adaptation of the waterfall model that
targets developing software in a short period. The RAD model is based on the concept
that a better system can be developed in less time by using focus groups to gather
system requirements. The phases of the RAD model are:
a. Business Modeling
b. Data Modeling
c. Process Modeling
d. Application Generation
Iterative model :
In the iterative model, the process starts with a simple implementation of a small
set of the software requirements and iteratively enhances the evolving versions until
the complete system is implemented and ready to be deployed. Scenarios in which
the iterative model is preferred include the following:
d. A new technology is being used and is being learnt by the development team while
working on the project.
e. Resources with needed skill sets are not available and are planned to be used on
contract basis for specific iterations.
f. There are some high-risk features and goals which may change in the future.
Spiral Model
The spiral model is a risk-driven process model. This SDLC model helps the team adopt
elements of one or more process models, such as the waterfall or incremental models.
The spiral technique is a combination of rapid prototyping and concurrency in design
and development activities.
Each cycle in the spiral begins with the identification of objectives for that cycle, the
different alternatives that are possible for achieving the goals, and the constraints that
exist. This is the first quadrant of the cycle (upper-left quadrant). The next step is to
develop strategies that resolve uncertainties and risks. This step may involve activities
such as benchmarking, simulation, and prototyping.
Identification
This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.
Design
The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final design
in the subsequent spirals.
Construct or Build
The Construct phase refers to production of the actual software product at every spiral. In the
baseline spiral, when the product is just thought of and the design is being developed a POC
(Proof of Concept) is developed in this phase to get customer feedback.
Evaluation and Risk Analysis
Risk Analysis includes identifying, estimating and monitoring the technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build, at the
end of first iteration, the customer evaluates the software and provides feedback.
V-Model
The V-model is an SDLC model where execution of processes happens in a sequential
manner in a V-shape. It is also known as Verification and Validation model.
The V-Model is an extension of the waterfall model and is based on the association of a
testing phase for each corresponding development stage. This means that for every single
phase in the development cycle, there is a directly associated testing phase.
Open-Source Software
Open-source software is software whose source code is freely available for anyone to
view, modify, and distribute.
Features:
The results are of quite high quality.
Users can easily change the software according to requirements.
It is more secure.
Long term use.
Transparency.
Affordable.
Help in developing skills.
Examples: the Linux kernel, Mozilla Firefox, and LibreOffice.
Advantages:
Flexibility: Users can make changes in the software as per their needs; a user can add
additional features or delete useless ones.
Stability: Even if the developers of the software stop looking after it, the software will
not disappear, since there are many people in the open-source community to look after
it. Hence, users can use the software for the long term.
Security and Reliability: Since several people are developing and enhancing the
software, it tends to be more secure and reliable.
Easier Evaluation: As the source code is available, users can easily view the code and
so understand the bugs and capabilities of the software.
Better Support: Since many people (developers, companies, and other users) are
dealing with the software, it is quite easy to get any kind of technical support.
Possible Savings: Such software usually has a low cost in comparison to other
software, so it is easily affordable.