What Is Transaction Response Time?
Transaction Response Time represents the time taken for the application to complete a defined transaction
or business process.
The objective of a performance test is to ensure that the application is working perfectly under load.
However, the definition of "perfectly" under load may vary with different systems.
By defining an initial acceptable response time, we can benchmark whether the application is performing as
anticipated.
The importance of Transaction Response Time is that it gives the project team/application team an idea of
how the application is performing, measured in time. With this information, they can tell the
users/customers the expected time for processing a request, or understand how their application
performed.
The Transaction Response Time encompasses the time taken for the request to reach the web server, be
processed by the Web Server and passed on to the Application Server, which in most instances will
make a request to the Database Server. All of this is then repeated in reverse, from the Database
Server through the Application Server and Web Server and back to the user. Take note that the time taken for
the request or data in network transmission is also factored in.
Note: this includes the time taken for the data to return to the client.
How do we measure?
Measurement of the Transaction Response Time begins when the defined transaction makes its request to the
application and stops when the transaction completes, before the next subsequent request (in terms of
transactions) proceeds.
Using Transaction Response Time, the Project Team can better relate to their users, using transactions as a form
of language that their users can comprehend. Users will be able to know whether transactions (or
business processes) are performing at an acceptable level in terms of time.
Users may be unable to understand the meaning of CPU utilization or Memory usage and thus using a
common language of time is ideal to convey performance-related issues.
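In LoadRunner scripts, a transaction is simply the block of requests wrapped between lr_start_transaction and lr_end_transaction, and the elapsed time between the two calls is what gets reported. A minimal sketch, assuming an illustrative transaction name and URL (not taken from any particular application):

    Action()
    {
        /* Start timing the business process we want to report on. */
        lr_start_transaction("login");

        /* The requests that make up the business process (URL is illustrative). */
        web_url("login_page",
                "URL=http://example.com/login",
                LAST);

        /* Stop the timer; LR_AUTO marks the transaction passed or failed
           based on the outcome of the requests above. */
        lr_end_transaction("login", LR_AUTO);

        return 0;
    }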
===========================================================
Before starting a load test we have to make sure of the following checklist :)
1.End users, customers, and project members have been notified in advance of the execution dates and hours
for the capacity test
2.All service level agreements, response time requirements have been agreed upon by all stakeholders.
3.Contact list with names and phone numbers has been drafted for support personnel (onsite and remote)
4.Functional Testing of the application has been completed.
5.Restart the controller machine.
6.Ramp Up / Duration / Ramp Down is configured correctly.
7.All Load Generators are in a "Ready" status.
8.All Load Generators are assigned to appropriate scripts.
9.All scripts have the correct number of Vusers assigned to them.
10.All scripts have the correct number of Iterations assigned to them.
11.Correct pacing has been agreed upon and configured for all appropriate scripts.
12.Logging is set to Send messages only when an error occurs for all scripts.
13.Think Times have been enabled/disabled in the test scripts
14.Generate snapshot on error is enabled for all appropriate scripts.
15.Timeout values have been set to the appropriate values.
16.All content checks have been updated for the appropriate scripts.
17.Rendezvous points have been enabled/disabled for appropriate scripts
18.All necessary data has been prepared/staged and is updated in all scripts.
19.Any scripts with unique data requirements have been verified.
20.All scripts have been refreshed in the controller and reflect the most recent updates.
21.IP Spoofing has been enabled in the controller.
22.IP Spoofing has been configured on all appropriate Load Generators.
23.All LoadRunner Monitors have been identified, configured and tested.
24.Auto Collate Results should be enabled
25.Results directory and file name should be updated.
==========================================
Max. Running Vusers
It defines, at any point of time, the maximum number of vusers running together concurrently (in
Run state). This is the "state" or usually the requirement of a load test to reach "X" number of concurrent
users. If a load test is required to run 100 concurrent users, then the Max. Running Vusers must be
100. This is different from Vuser Quantity, explained in the following.
Vuser Quantity
In the Controller, the Vuser Quantity is the total number of Vusers participating in the load test, but it is different
from Max. Running Vusers, which is explained above. To view the total number of Vusers participating in the
load test (Vuser Quantity), open the Vuser Summary graph. Please take note that the Summary Report in
the Analysis session displays the maximum number of vusers running in the scenario and not the total vusers
participating.
• Constantly monitor the entire system with any monitoring tools available and keep records. This allows
you to get a background usage pattern and also lets you compare the current situation with situations
previously considered stable.
• You should run offline work during off-hours only. This ensures that there is no extra load on the system
when the users are executing online tasks and enhances performance of both online and offline activities.
• If you need to run extra tasks during the day, try to slot them into times with low user activity. Office
activity usually peaks at 9am and 2:30pm and has a lull between noon and 1pm or at shift changeovers.
You should be able to determine the user-activity cycles appropriate to your system by examining the
results of normal monitoring. The reduced conflict for system resources during periods of low activity
improves performance.
• You should specify timeouts for all processes under the control of your application (and others on the
system, if possible) and terminate processes that have passed their timeout value.
• Apply any partitioning available from the system to allocate determinate resources to your application.
For example, you can specify disk partitions, memory segments, and even CPUs to be allocated to
particular processes.
The four main system resources where bottlenecks occur, each covered below, are:
• CPU
• Disk
• Memory
• Network
The above is taken from the publication "Java Performance Tuning" written by Jack Shirazi. I would
recommend reading this book as it provides tuning and bottleneck concepts that are not bounded to Java alone. A
simplified version (which is a summary of the chapter) can be found here [Coming soon].
Basics: CPU Bottlenecks
Java provides a virtual machine runtime system that is just that: an abstraction of a CPU that runs in
software. (Note that this chapter is taken from "Java Performance Tuning" written by Jack Shirazi and
therefore a lot of the discussion is centred around Java technologies.) These virtual machines run on a real CPU,
and in this section the book discusses the performance characteristics of those real CPUs.
CPU Load
The CPU and many other parts of the system can be monitored using system-level utilities. On Windows,
the task manager and performance monitor can be used for monitoring. On UNIX, a performance monitor
(such as perfmeter) is usually available, as well as utilities such as vmstat. Two aspects of the CPU are
worth watching as primary performance points. These are the CPU utilization (usually expressed in
percentage terms) and the run-able queue of processes and threads (often called the load or the task queue).
The first indicator is simply the percentage of the CPU (or CPUs) being used by all the various threads. If
this is up to 100% for significant periods of time, you may have a problem. On the other hand, if it isn't, the
CPU is under-utilized, but that is usually preferable. Low CPU usage can indicate that your application
may be blocked for significant periods on disk or network I/O. High CPU usage can indicate thrashing
(lack of RAM) or CPU contention (indicating that you need to tune the code and reduce the number of
instructions being processed to reduce the impact on the CPU).
A reasonable target is 75% CPU utilization (which, from what I read from different authors, varies from
75% to 85%). This means that the system is being worked toward its optimum, but that you have left some
slack for spikes due to other system or application requirements. However, note that if more than 50% of
the CPU is used by system processes (i.e. administrative and operating system processes), your CPU is probably under-
powered. This can be identified by looking at the load of the system over some period when you are not
running any applications.
The second performance indicator, the run-able queue, indicates the average number of processes or
threads waiting to be scheduled for the CPU by the OS. They are run-able processes, but the CPU has no
time to run them and is keeping them waiting for some significant amount of time. As soon as the run
queue goes above zero, the system may display contention for resources, but there is usually some value
above zero that still gives acceptable performance for any particular system. You need to determine what
that value is in order to use this statistic as a useful warning indicator. A simplistic way to do this is to
create a short program that repeatedly does some simple activity. You can then time each run of that
activity. You can run copies of this process one after the other so that more and more copies are
simultaneously running. Keep increasing the number of copies being run until the run queue starts
increasing. By watching the times recorded for the activity, you can graph that time against the run queue.
This should give you some indication of when the run-able queue becomes too large for useful responses
on your system, so that the administrator can be alerted if the threshold is exceeded. A guideline by Adrian Cockcroft is that
performance starts to degrade if the run queue grows bigger than four times the number of CPUs.
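A minimal sketch of the measurement program described above (the activity, iteration counts, and timing resolution are illustrative assumptions): compile it, start it in increasing numbers of copies, and watch the run queue with vmstat while recording the printed times.

    #include <stdio.h>
    #include <sys/time.h>

    /* A simple CPU-bound activity whose duration we time repeatedly. */
    static long busy_work(void)
    {
        volatile long sum = 0;
        long i;
        for (i = 0; i < 5000000L; i++)
            sum += i % 7;
        return sum;
    }

    static double now_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
    }

    int main(void)
    {
        int run;
        for (run = 0; run < 100; run++) {
            double start = now_ms();
            busy_work();
            /* Wall-clock time per run grows as more copies compete for the CPU;
               graph these values against the run queue reported by vmstat. */
            printf("run %d: %.1f ms\n", run, now_ms() - start);
        }
        return 0;
    }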
If you can upgrade the CPU of the target environment, doubling the CPU speed is usually better than
doubling the number of CPUs. And remember that parallelism in an application doesn't necessarily need
multiple CPUs. If I/O is significant, the CPU will have plenty of time for many threads.
Process Priorities
The OS also has the ability to prioritize processes in terms of providing CPU time by allocating process
priority levels. CPU priorities provide a way to throttle high-demand CPU processes, thus giving other
processes a greater share of the CPU. If there are other processes that need to run on the same machine but
it doesn't matter if they run more slowly, you can give your application processes a (much) higher priority
than those other processes, thus allowing your application the lion's share of CPU time on a congested
system. This is worth keeping in mind: if your application consists of multiple processes, you should also
consider the possibility of giving your various processes different levels of priority.
Being tempted to adjust the priority levels of processes, however, is often a sign that the CPU is
underpowered for the tasks you have given it.
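On POSIX systems, one way to adjust a process's priority from code is the setpriority call. A minimal sketch; the niceness value of 10 is an arbitrary assumption, and on most systems only a privileged user can lower (i.e. improve) a niceness value:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Raise the niceness of the current process to 10, i.e. lower its
           CPU priority so other processes get a greater share of the CPU. */
        if (setpriority(PRIO_PROCESS, 0, 10) != 0) {
            perror("setpriority");
            return 1;
        }
        printf("current niceness: %d\n", getpriority(PRIO_PROCESS, 0));
        return 0;
    }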
The above is taken from the publication "Java Performance Tuning" written by Jack Shirazi. I would
recommend reading this book as it provides tuning and bottleneck concepts that are not bounded to Java alone.
Disks Bottlenecks
In most cases, applications can be tuned so that disk I/O does not cause any serious performance problems.
But if, after application tuning, you find that disk I/O is still causing a performance problem, your best bet may
be to upgrade the system disks. Identifying whether the system has a problem with disk utilization is the
first step. Each system provides its own tools to identify disk usage (Windows has a performance monitor,
and UNIX has the sar, vmstat and iostat utilities). At minimum, you need to identify whether paging is an
issue (look at disk-scan rates) and assess the overall utilization of your disks (e.g. performance monitor on
Windows, output from iostat -D on UNIX). It may be that the system has a problem independent of your
application (e.g. unbalanced disks), and correcting this problem may resolve the performance issue.
If the disk analysis does not identify an obvious system problem that is causing the I/O overhead, you could
try making a disk upgrade or a reconfiguration. This type of tuning can consist of any of the following:
• Upgrading to faster disks
• Adding more swap space to handle larger buffers
• Changing the disk to be striped (where files are striped across several disks, thus providing parallel I/O.
e.g. with a RAID system)
• Running the data on raw partitions when this is shown to be faster.
• Distributing simultaneously accessed files across multiple disks to gain parallel I/O
• Using memory-mapped disks or files
If you have applications that run on many systems and you do not know the specification of the target
system, bear in mind that you can never be sure that any particular disk is local to the user. There is a
significant possibility that the disk being used by the application is a network-mounted disk. This doubles
the variability in response times and throughput. The weakest link will probably not even be constant. A
network disk is a shared resource, as is the network itself, so performance is hugely and unpredictably
affected by other users and network load.
Disk I/O
Do not underestimate the impact of disk writes on the system as a whole. For example, all database
vendors strongly recommend that the system swap files be placed on a separate disk from their databases.
The impact of not doing so can decrease database throughput (and system activity) by an order of
magnitude. This performance decrease comes from not splitting the I/O of two disk-intensive applications (in
this case, OS paging and database I/O).
Identifying that there is an I/O problem is usually fairly easy. The most basic symptom is that things take
longer than expected, while at the same time the CPU is not at all heavily worked. The disk-monitoring
utilities will also tell you that there is a lot of work being done to the disks. At the system level, you should
determine the average peak requirements on the disks. Your disks will have some statistics that are supplied
by the vendor, including:
The average and peak transfer rates, normally in megabytes (MB) per second, e.g. 5MB/sec. From this,
you can calculate how long an 8K page takes to be transferred from disk; for example, 5MB/sec is
about 5K/ms, so an 8K page takes just under 2ms to transfer.
Average seek time, normally in milliseconds (ms). This is the time required for the disk head to move
radially to the correct location on the disk.
Rotational speed, normally in revolutions per minute (rpm), e.g. 7200rpm. From this, you can calculate the
average rotational delay in moving the disk under the disk-head reader, i.e., the time taken for half a
revolution. For example, for 7200rpm, one revolution takes 60,000ms (60 seconds) divided by 7200rpm,
which is about 8.3 ms. So half a revolution takes just over 4ms, which is consequently the average
rotational delay.
This list allows you to calculate the actual time it takes to load a random 8K page from the disk, this being
seek time + rotational delay + transfer time. Using the examples given in the list, you have 10 + 4 + 2 = 16
ms to load a random 8K page (almost an order of magnitude slower than the raw disk throughput). This
calculation gives you a worst-case scenario for the disk-transfer rates for your application, allowing you to
determine if the system is up to the required performance. Note that if you are reading data stored
sequentially on disk (as when reading a large file), the seek time and rotational delay are incurred less than
once per 8K page loaded. Basically, these two times are incurred only at the beginning of opening the file
and whenever the file is fragmented. But this calculation is confounded by other processes also executing
I/O to the disk at the same time. This overhead is part of the reason why swap and other intensive I/O files
should not be put on the same disk.
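The arithmetic above is easy to wrap in a few lines of C so you can plug in your own disk's vendor figures; the numbers below are the same illustrative ones used in the text, not measurements of any real disk.

    #include <stdio.h>

    int main(void)
    {
        /* Vendor-supplied figures (illustrative values from the text). */
        double seek_ms       = 10.0;    /* average seek time      */
        double rpm           = 7200.0;  /* rotational speed       */
        double transfer_mb_s = 5.0;     /* sustained transfer rate */
        double page_kb       = 8.0;     /* page size to load      */

        /* Average rotational delay is the time for half a revolution. */
        double rotation_ms = (60000.0 / rpm) / 2.0;
        /* 5 MB/sec is roughly 5 KB per ms, so divide page size by that rate. */
        double kb_per_ms   = transfer_mb_s;
        double transfer_ms = page_kb / kb_per_ms;

        printf("random %.0fK page load: %.1f ms\n",
               page_kb, seek_ms + rotation_ms + transfer_ms);
        return 0;
    }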
One mechanism for speeding up disk I/O is to stripe disks. Disk striping allows data from a particular file
to be spread over several disks. Striping allows reads and writes to be performed in parallel across the disks
without requiring any application changes. This can speed up disk I/O quite effectively. However, be aware
that the seek and rotational overhead previously listed still applies, and if you are making many small
random reads, there may be no performance gain from striping disks.
Finally, note again that using remote disks adversely affects I/O performance. You should not be using
remote disks mounted from the network with any I/O-intensive operations if you need good performance.
Clustering Files
Reading many files sequentially is faster if the files are clustered together on the disk, allowing the disk-
head reader to flow from one file to the next. This clustering is best done in conjunction with
defragmenting the disks. The overhead in finding the location of a file on the disk (detailed in the previous
section) is also minimized for sequential reads if the files are clustered.
If you cannot specify clustering files at the disk level, you can still provide similar functionality by putting
all the files together into one large file (as is done with the ZIP file systems). This is fine if all the files are
read-only files or if there is just one file that is writable (you place that at the end). However, when there is
more than one writable file, you need to manage the location of the internal files in your system as one or
more grow. This becomes a problem and is not usually worth the effort. (If the files have a known bounded
size, you can pad the files internally, thus regaining the single file efficiency.)
Cached File Systems (RAM Disks, tmpfs, cachefs)
Most OSes provide the ability to map a file system into the system memory. This ability can speed up reads
and writes to certain files where you control your target environment. Typically, this technique has been
used to speed up the reading and writing of temporary files. For example, some compilers (of languages in
general, not specifically Java) generate many temporary files during compilation. If these files are created
and written directly to the system memory, the speed of compilation is greatly increased. Similarly, if you
have a set of external files that are needed by your application, it is possible to map these directly into
the system memory, thus allowing their reads and writes to be speeded up greatly.
But note that these types of file systems are not persistent. In the same way the system memory of the
machine gets cleared when it is rebooted, so these file systems are removed on reboot. If the system
crashes, anything in a memory-mapped file system is lost. For this reason, these types of file systems are
usually suitable only for temporary files or read-only versions of disk-based files (such as mapping a CD-
ROM into a memory-resident file system).
Remember that you do not have the same degree of fine control over these file systems that you have over
your application. A memory-mapped file system does not use memory resources as efficiently as working
directly from your application. If you have direct control over the files you are reading and writing, it is
usually better to optimize this within your application rather than outside it. A memory-mapped file system
takes space directly from system memory. You should consider whether it would be better to let your
application grow in memory instead of letting the file system take up that system memory. For multi-user
applications, it is usually more efficient for the system to map shared files directly into memory, as a
particular file then takes up just one memory location rather than being duplicated in each process. Note that from
SDK 1.4, memory-mapped files are directly supported from the java.nio package. Memory-mapped files
are slightly different from memory-mapped file systems. A memory-mapped file uses system resources to
read the file into system memory, and that data can then be accessed from Java through the appropriate
java.nio buffer. A memory-mapped file system does not require the java.nio package and, as far as Java is
concerned, files in that file system are simply files like any others. The OS transparently handles the
memory mapping.
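For readers outside Java, the same idea at the operating-system level can be sketched with the POSIX mmap call. A minimal read-only example; the file name is an illustrative assumption and error handling is kept short:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* Open an existing file and map it read-only into this process's memory. */
        int fd = open("data.txt", O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) != 0) {
            perror("open/fstat");
            return 1;
        }
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* The file contents can now be read as ordinary memory. */
        printf("first byte of the mapped file: %c\n", data[0]);
        munmap(data, st.st_size);
        close(fd);
        return 0;
    }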
The creation of memory-mapped file systems is completely system-dependent, and there is no guarantee
that it is available on any particular system (though most modern OSes do support this feature). On UNIX
systems, the administrator needs to look at the documentation of the mount command and its subsections on
cachefs and tmpfs. Under Windows, you should find details by looking at the documentation on how to
set up a RAM disk, a portion of memory mapped to a logical disk drive.
In a similar way, there are products available that pre-cache shared libraries (DLLs) and even executables in
memory. This usually means only that an application starts or loads quicker, and so may not be
much help in speeding up a running system.
But you can apply the technique of memory-mapping file systems directly and quite usefully for
applications in which processes are frequently started. Copy the Java distribution and all class files (all
JDK, application, and third-party class files) onto a memory-mapped file system and ensure that all
executions and classloads take place from the file system. Since everything (executables, DLLs, class files,
resources, etc.) is already in memory, the startup time is much faster. Because only the startup (and class
loading) time is affected, this technique gives only a small boost to applications that are not frequently
starting processes, but can be usefully applied if startup time is a problem.
Disk Fragmentation
When files are stored on disk, the bytes in the files are not necessarily stored contiguously: their storage
depends on file size and the contiguous space available on the disk. This non-contiguous storage is called
fragmentation. Any particular file may have some chunks in one place, and a pointer to the next chunk that
may be quite a distance away on the disk.
Hard disks tend to get fragmented over time. This fragmentation delays both reads from files (including
loading applications into computer memory on startup) and writes to files. This delay occurs because the
disk head must wind on to the next chunk with each fragmentation, and this takes time.
For optimum performance on any system, it is a good idea to periodically defragment the disk. This
reunites files that have been split up so that disk heads do not spend so much time searching for data once
the file-header locations have been identified, thus speeding up data access. Defragmenting may not be
effective on all systems, however.
Disk Sweet Spots
Most disks have a location from which data is transferred faster than from other locations. Usually, the
closer the data is to the outside edge of the disk, the faster it can be read from the disk. Most hard disks
rotate at constant angular speed. This means that the linear speed of the disk under a point is faster the
farther away the point is from the center of the disk. Thus, data at the edge of the disk can be read from
(and written to) at the fastest possible rate commensurate with the maximum density of data storable on
disk.
This location with faster transfer rates is usually termed the disk sweet spot. Some
(commercial) utilities provide mapped access to the underlying disk and allow you to reorganize files to
optimize access. On most server systems, the administrator has control over how logical partitions of the
disk apply to the physical layout, and how to position files to the disk sweet spots. Experts for high-
performance database systems sometimes try to position the index tables of the database as close as possible
to the disk sweet spot. These tables consist of relatively small amounts of data that affect the performance
of the system in a disproportionately large way, so that any speed improvement in manipulating these tables
is significant.
Note that some of the latest OS are beginning to include "awareness" of disk sweet spots, and attempt to
move executables to sweet spots when defragmenting the disk. You may need to ensure that the
defragmentation procedure does not disrupt your own use of the disk sweet spot.
The above is taken from the publication "Java Performance Tuning" written by Jack Shirazi. I would
recommend reading this book as it provides tuning and bottleneck concepts that are not bounded to Java alone.
============================================================
Memory Bottlenecks
Maintaining watch directly on the system memory (RAM) is not usually that helpful in identifying
performance problems. A better indication that memory might be affecting performance can be gained by
watching for paging of data from memory to the swap files. Most current OS have a virtual memory that is
made up of the actual (real) system memory using RAM chips, and one or more swap files on the system
disks. Processes that are currently running are operating in real memory. The OS can take pages from any
of the processes currently in real memory and swap them out to disk. This is known as paging. Paging
leaves free space in real memory to allocate to other processes that need to bring in a page from disk.
Obviously, if all the processes currently running can fit into real memory, there is no need for the system to
swap out any pages. However, if there are too many processes to fit into real memory, paging allows the
system to free up system memory to run more processes. Paging affects system performance in many ways.
One obvious way is that if a process has had some pages moved to disk and the process becomes run-able,
the OS has to pull back the pages from disk before that process can be run. This leads to delays in
performance. In addition, both the CPU and the disk I/O spend time doing the paging, reducing available
processing power and increasing the load on the disks. This cascading effect involving both the CPU and
I/O can degrade the performance of the whole system in such a way that it may be difficult to even
recognize that paging is the problem. The extreme version of too much paging is thrashing, in which the
system is spending so much time moving pages around that it fails to perform any other significant work.
(The next step is likely to be a system crash.)
As with run-able queues (see the CPU section), a little paging of the system does not affect performance
enough to cause concern. In fact, some paging can be considered good. It indicates that the system's
memory resources are fully utilized. But at the point where paging becomes a significant overhead, the
system is overloaded.
Monitoring paging is relatively easy. On UNIX, the utilities vmstat and iostat provide details as to the level
of paging, disk activity and memory levels. On Windows, the performance monitor has categories to show
these details, as well as being able to monitor the system swap files.
If there is more paging than is optimal, the system's RAM is insufficient or processes are too big. To
improve this situation, you need to reduce the memory being used by reducing the number of processes or
the memory utilization of some processes. Alternatively, you can add RAM. Assuming that it is your
application that is causing the paging (otherwise, either the system needs an upgrade, or someone else's
processes may also have to be tuned), you need to reduce the memory resources you are using.
When the problem is caused by a combination of your application and others, you can partially address the
situation by using process priorities (see the CPU section). The equivalent to priority levels for memory
usage is an all-or-nothing option, where you can lock a process in memory. This option is not available on all
systems and is more often applied to shared memory than to processes, but nevertheless, it is useful to
know. If this option is applied, the process is locked into real memory and is not paged out at all. You need
to be aware that using this option reduces the amount of RAM available to all other processes, which can
make overall system performance worse. Any deterioration in system performance is likely to occur at
heavy system load, so make sure you extrapolate the effect of reducing the system memory in this way.
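On POSIX systems, the lock-in-memory option described above is exposed through mlock/mlockall. A minimal sketch; it usually requires elevated privileges or a raised memory-lock limit, and remember that locked memory is taken away from every other process:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all current and future pages of this process into real memory
           so they are never paged out to the swap file. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }
        printf("process memory locked; it will not be paged out\n");
        /* ... do the latency-sensitive work here ... */
        munlockall();
        return 0;
    }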
Network Bottlenecks
At the network level, many things can affect performance. The bandwidth (the amount of data that can be
carried by the network) tends to be the first culprit checked. But assuming you have determined that bad
performance is attributable to the network component of an application, there is a more likely cause of bad
network performance than network bandwidth. The most likely cause of bad network performance is the
application itself and how it is handling distributed data and functionality.
The overall speed of a particular network connection is limited by the slowest link in the connection chain
and the length of the chain. Identifying the slowest link is difficult and may not even be consistent: it can
vary at different times of the day or for different communication paths. A network communication path
leads from an application through a TCP/IP stack (which adds various layers of headers, possibly encrypting
and compressing data as well), then through the hardware interface, through a modem, over a phone line,
through another modem, over to a service provider's router, through many heavily congested data lines of
various carrying capacities and multiple routers with different maximum throughputs and configurations, to
a machine at the other end with its own hardware interface, TCP/IP stack and application. A typical web
download route is just like this. In addition, there are dropped packets, acknowledgements, retries, bus
contention, and so on.
Because so many possible causes of bad network performance are external to an application, one option
you can consider including in an application is a network speed testing facility that reports to the user. This
should test the speed of data transfer from the machine to various destinations: to itself, to another machine
on the local network, to the Internet Service Provider, to the target server across the network, and to any
other appropriate destinations. This type of diagnostics report can tell users that they are obtaining bad
performance from something other than your application. If you feel that the performance of your
application is limited by the actual network communication speed, and not by other (application) factors,
this facility will report the maximum possible speeds to your user.
Latency
Latency is different from the load-carrying capacity (bandwidth) of a network. Bandwidth refers to how
much data can be sent down the communication channel for a given period of time and is limited by the
link in the communication chain that has the lowest bandwidth. The latency is the amount of time a
particular data packet takes to get from one end of the communication channel to the other. Bandwidth tells
you the limits within which your application can operate before the performance becomes affected by the
volume of data being transmitted. Latency often affects the user's view of the performance even when
bandwidth isn't a problem.
In most cases, especially Internet traffic, latency is an important concern. You can determine the basic
round-trip time for a data packet between any two machines using the ping utility. This utility provides a
measure of the time it takes a packet of data to reach another machine and be returned. However, the time
measured is for a basic underlying protocol (an ICMP packet) to travel between the machines. If the
communication channel is congested and the overlying protocol requires re-transmissions (often the case
for Internet traffic), one transmission at the application level can actually be equivalent to many round trips.
It is important to be aware of these limitations. It is often possible to tune the application to minimize the
number of transfers by packing data together, caching and redesigning the distributed application protocol
to aim for a less conversational mode of operation. At the network level, you need to monitor the
transmission statistics (using the ping and netstat utilities and packet sniffers) and consider tuning any
network parameters that you have access to in order to reduce re-transmissions.
TCP/IP Stacks
The TCP/IP stack is the section of code that is responsible for translating each application-level network
request (send, receive, connect, etc.) through the transport layers down to the wire and back up to the
application at the other end of the connection. Because the stacks are usually delivered with the operating
system and performance-tested before delivery (since a slow network connection on an otherwise fast
machine and fast network is pretty obvious), it is unlikely that the TCP/IP stack itself is a performance
problem.
In addition to the stack itself, stacks include several tunable parameters. Most of these parameters deal with
transmission details beyond the scope of the book. One parameter worth mentioning is the maximum
packet size. When your application sends data, the underlying protocol breaks the data into packets that are
transmitted. There is an optimal size for packets transmitted over a particular communication channel, and
the packet size actually used by the stack is a compromise. Smaller packets are less likely to be dropped, but
they introduce more overhead, as data probably has to be broken up into more packets with more header
overhead.
If your communication takes place over a particular set of endpoints, you may want to alter the packet
sizes. For a LAN segment with no router involved, the packets can be big (e.g. 8KB). For a LAN with
routers, you probably want to set the maximum packet size to the size the routers allow to pass unbroken.
(Routers can break up the packets into smaller ones; 1500 bytes is the typical maximum packet size and the
standard for Ethernet. The maximum packet size is configurable by the router's network administrator.)
If your application is likely to be sending data over the Internet and you cannot guarantee the route and
quality of routers it will pass through, 500 bytes per packet is likely to be optimal.
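On systems that expose it, the maximum segment size of an individual TCP connection can be suggested from application code via a socket option. A hedged sketch: TCP_MAXSEG must be set before the connection is established, the value is only a hint that the stack may adjust during negotiation with the peer, and the 1460-byte figure is an illustrative assumption for an Ethernet-sized packet.

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }
        /* Suggest a maximum segment size before connecting; the stack may
           still negotiate a smaller value with the peer. */
        int mss = 1460;
        if (setsockopt(sock, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) != 0)
            perror("setsockopt(TCP_MAXSEG)");
        /* ... connect() and use the socket as usual ... */
        return 0;
    }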
Network Bottlenecks
Other causes of slow network I/O can be attributed directly to the load or configuration of the network. For
example, a LAN may become congested when many machines are simultaneously trying to communicate
over the network. The potential throughput of the network could handle the load, but the algorithms that
provide the communication channels slow the network, resulting in a lower maximum throughput. A congested
Ethernet network has an average throughput of approximately one third the potential maximum throughput.
Congested networks have other problems, such as dropped network packets. If you are using TCP, the
communication rate on a congested network is much slower as the protocol automatically resends the
dropped packets. If you are using UDP, your application must resend multiple copies for each transfer.
Dropping packets in this way is common for the Internet. For LANs, you need to coordinate closely with the
network administrators to alert them to the problem. For single machines connected by a service provider,
you may need to contact the provider and suggest improvements. The phone line to the service provider may be noisier than expected: if so, you
also need to speak to the phone line provider. It is also worth checking with the service provider, who
should have optimal configurations they can demonstrate.
Dropped packets and re-transmissions are a good indication of network congestion problems, and you
should be on constant lookout for them. Dropped packets often occur when routers are overloaded and find
it necessary to drop some of the packets being transmitted as the routers' buffers overflow. This means that
the overlying protocol will request the packets to be resent. The netstat utility lists re-transmission and
other statistics that can identify these sorts of problems. Re-transmissions may indicate that the maximum
packet size is too large.
DNS Lookup
Looking up network addresses is an often-overlooked cause of bad network performance. When your
application tries to connect to a network address such as foo.bar.something.org (e.g. downloading a webpage
from http://foo.bar.something.org), your application first translates foo.bar.something.org into a four-byte
network IP address such as 10.33.6.45. This is the actual address that the network understands and uses for
routing network packets. The way this translation works is that your system is configured with some seldom-
used files that can specify this translation, and a more frequently used Domain Name System (DNS) server
that can dynamically provide you with the address for the given string. DNS translation works as follows:
1. The machine running the application sends the text string of the hostname (e.g. foo.bar.something.org) to
the DNS server.
2. The DNS server checks its cache to find an IP address corresponding to that hostname. If the server does
not find an entry in the cache, it asks its own DNS server (usually further up the Internet domain-name
hierarchy) until ultimately the name is resolved. (This may be by components of the name being resolved,
e.g. first .org, then something.org, etc., each time asking another machine as the search request is
successively resolved.) This resolved IP address is added to the DNS server's cache.
3. The IP address is returned to the original machine running the application.
4. The application uses the IP address to connect to the desired destination.
The address lookup does not need to be repeated once a connection is established, but any other
connections (within the same session of the application or in other sessions at the same time or later)
need to repeat the lookup procedure to start another connection.
You can improve this situation by running a DNS server locally on the machine, or on a local server if the
application uses a LAN. A DNS server can be run as a "caching-only" server that resets its cache each time
the machine is rebooted. There would be little point in doing this if the machine used only one or two
connections per hostname between successive reboots. For more frequent connections, a local DNS server
can provide a noticeable speedup to connections. Nslookup is useful for investigating how a particular
system does translations.
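The lookup an application performs in step 1 corresponds, in C, to a resolver call such as getaddrinfo. A minimal sketch; the hostname is the illustrative one from the text, and each call like this potentially costs a round trip to the DNS server unless the result is cached locally:

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct addrinfo hints;
        struct addrinfo *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;        /* IPv4 addresses only */
        hints.ai_socktype = SOCK_STREAM;  /* TCP-style lookup    */

        /* Translate the hostname into an IP address via the resolver/DNS. */
        int err = getaddrinfo("foo.bar.something.org", NULL, &hints, &res);
        if (err != 0) {
            fprintf(stderr, "lookup failed: %s\n", gai_strerror(err));
            return 1;
        }
        struct sockaddr_in *addr = (struct sockaddr_in *)res->ai_addr;
        printf("resolved to %s\n", inet_ntoa(addr->sin_addr));
        freeaddrinfo(res);
        return 0;
    }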
LR-Basics
What is LoadRunner?
Software applications are becoming advanced and complex; they are now capable of holding hundreds of thousands
of users. With complexity and large volumes arises the problem of managing them and making them work at
any given point of time.
Also, almost every organization is moving into the era of Web 2.0 (or 3.0). This intricate network comes
along with a lot of challenges for any company. With servers, routers, cables and applications all interlinked to
each other in a mesh-like structure, every single point becomes a candidate for performance bottlenecks.
The best way to test and overcome the performance problems is to use testing tools which are capable of
simulating the end user behavior.
1. Virtual User Generator (VUGen): We can emulate real-world user behavior using VuGen;
that's why the name virtual user [Dictionary meaning: existing or resulting in essence or effect
though not in actual fact, form, or name]. This is the place where we record and write automated
scripts.
2. Controller: Here we run the scripts generated above. This controls
the various load generators* and scenarios** associated with them.
3. Analysis: This gives the detailed results and presents them beautifully using reports, charts and
graphics.
This was just a brief overview. We will talk in details on the three parts of LR in the coming posts.
**Scenarios: This describes aspects like which scripts will run, the number of virtual users, and the association of load
generators with scripts.
• When a script is opened in Controller, the run-time settings also get copied from VuGen to the
Controller.
• Any changes done in the script and run-time settings are not reflected in the Controller unless you
refresh them.
• Refresh in the Controller can be done by going to Design > highlighting the scenario group that is
using the script in question > clicking the Details button > clicking the REFRESH button on the Group
Information pop-up window. So next time the Controller asks you to load new script iteration
settings, do the refresh.
• While doing Save As:
o The default directory in VUGen can be changed by going to the vugen.ini file located under
C:\Program Files\HP\LoadRunner\config and appending the required file path to
LastScriptPath.
o The default directory in Controller can be changed by going to the wlrun.ini file located under
C:\Program Files\HP\LoadRunner\config and appending the required file path to
M_ROOT.
Note that think time is ignored when a script is replayed in VuGen, while it is played back as recorded in the Controller.
LoadRunner — Correlation
If you simply record and playback a script in VuGen, you might encounter errors in your playback. Often,
those errors are related to the session values which are sent by the server to the client to identify that
particular session.
Why error? Well, session values will change with every playback of the script.
To overcome this we need a way to capture these dynamically generated session values and pass them
subsequently to any part of the script, wherever required. This method of identifying and setting the dynamically
generated value is known as correlation.
If you're new to load testing, don't confuse this term with a parameter, which you might have used in tools like
QTP to pass varying values. A parameter is not a dynamic value captured from the server response; it is
something for which the user has predefined data values available.
LoadRunner provides the following functions for correlation:
1. web_reg_save_param
2. web_create_html_param
3. web_create_html_param_ex
• web_url is not a context sensitive function while web_link is a context sensitive function. Context
sensitive functions describe your actions in terms of GUI objects (such as windows, lists, and
buttons). Check HTML vs URL recording mode.
• If a web_url statement occurs before a context sensitive statement like web_link, it should hit the
server; otherwise your script will error out.
• While recording, if you switch between the actions, the first statement recorded in a given action
will never be a context sensitive statement.
• The first argument of a web_link, web_url, web_image or in general web_* does not affect the
script replay. For example: if your web_link statements were recorded as
web_link("Hi There",
"Text=Hello, ABC",
LAST);
Now, when you parameterize/correlate the first argument to
web_link("{Welcome to LearnLoadRunner}",
"Text=Hello, ABC",
LAST);
On executing the above script you won't find the actual value of the parameter {Welcome to
LearnLoadRunner}; instead you will find the literal text {Welcome to LearnLoadRunner} itself in the execution
log. However, to show the correlated/parameterized data you can use lr_eval_string to evaluate the
parameter.
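A short sketch of how that evaluation looks in the script, using the parameter name from the example above:

    /* Evaluate the parameter and write its current value to the execution log. */
    lr_output_message("Parameter value: %s",
                      lr_eval_string("{Welcome to LearnLoadRunner}"));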
There are three types of recording modes/levels in LoadRunner: GUI-based, HTML-based and URL-based.
For the uninitiated, the recording level tells you the amount of, and what, information is recorded during the
recording process. As the title says, for this post we will keep the focus on HTML-based and URL-based
recording levels only and will touch upon GUI-based mode in a later post.
1. HTML based mode records a script for every user action that is performed during recording
(hmmm…sounds like QTP) while URL based mode records each and every browser request to the
server and the resources received from the server. Confused? OK, HTML based mode does the recording
as you perform clicks and doesn't give you inside information like what is happening behind the
recording, while URL based mode records each and every step and emulates JavaScript code.
2. From point 1) above you can guess that HTML mode would have less correlation to do while URL
mode has much more complex correlation requirements.
3. HTML mode scripts are smaller and more intuitive to read as the statements are inside the functions
corresponding to the user action performed. In the case of URL based, all statements get recorded
into web_url().
4. HTML mode is recommended for browser applications while URL mode is recommended for non-
browser applications.
5. Lastly, don't get the impression that I am advocating for HTML mode :). URL mode can be of real
help when you want to have control over the resources that need to be or need not be
downloaded, since you have each and every statement in front of you (point 1).
For example, in a Yahoo Mail application: suppose a scenario consists of 100 vusers with 3 tasks: 1)
Login, 2) Check the number of unread mails, 3) Logout. Vusers at 1) + 2) + 3) will be called concurrent vusers as
they are part of the same scenario performing some task, but if we have set a rendezvous point so that, say, 25 vusers
perform task 2) at the same time, these 25 vusers would be termed simultaneous vusers.
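In the script, a rendezvous point is just a call placed before the step that all Vusers should hit together. A minimal sketch; the rendezvous name, transaction name and URL are illustrative assumptions, and the actual release policy is configured in the Controller:

    /* All Vusers arriving here wait until the Controller's rendezvous
       policy releases them, so the step below runs simultaneously. */
    lr_rendezvous("check_unread_mail");

    lr_start_transaction("check_unread_mail");
    web_url("inbox",
            "URL=http://mail.example.com/inbox",
            LAST);
    lr_end_transaction("check_unread_mail", LR_AUTO);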
What is memory leak, page fault and how they affect LoadRunner performance?
A memory leak is a particular type of unintentional memory consumption by a computer program where
the program fails to release memory when it is no longer needed. This condition is normally the result of a bug
in a program that prevents it from freeing up memory that it no longer needs. This term has the potential to
be confusing, since memory is not physically lost from the computer. Rather, memory is allocated to a
program, and that program subsequently loses the ability to access it due to program logic flaws.
A page fault is an interrupt that occurs when a program requests data that is not currently in real memory. The interrupt
triggers the operating system to fetch the data from virtual memory and load it into RAM.
An invalid page fault or page fault error occurs when the operating system cannot find the data in virtual
memory. This usually happens when the virtual memory area, or the table that maps virtual addresses to
real addresses, becomes corrupt.
Now the most important question comes up, how do they affect LoadRunner functioning?
As you might guess, memory leak, if left unattended and not corrected, could prove to be fatal. Memory
leaks can be found out by running tests for long duration (say about an hour) and continuously checking
memory usage.
Issues caused by memory leaks are essentially based on two variables for a standalone Windows application:
1) frequency of usage, 2) size of the memory leak. If either one or both are very high, it could bring the
computer to a point where no memory is available for other applications, causing it to crash. If it is a
network-based application then you will also have to consider network traffic. If each network transaction
causes a memory leak, then a high volume of network transactions could also prove dangerous.
In terms of LoadRunner, when we run a Vuser as a process, LoadRunner creates 1 process called mmdrv.exe
per Vuser. So if we have 10 Vusers, we will have 10 mmdrv.exe processes on our machines.
When we run a Vuser as a thread, LoadRunner creates 1 thread per Vuser. So if we have 10 Vusers,
then we will have 1 process with 10 threads running inside it, if the limit is 10 threads per process.
Running a Vuser as a thread is more memory efficient than running a Vuser as a process, for the obvious reason
that fewer memory resources are utilized when we run them as threads. I read somewhere that running as a
process has the advantage that the system becomes more stable. How is that stability achieved? When each
Vuser is a separate process, a crash in one Vuser does not bring down the others, whereas a crashing thread
can take the whole mmdrv.exe process, and every Vuser thread inside it, down with it.
You are going to encounter these terms again and again on your journey to become a LoadRunner expert.
We will clarify their meaning first, and then see how they are related to LoadRunner.
1. The Hard Disk is used for long-term storage of work while RAM is used to store your current work.
2. The Hard Disk holds the original copy of a program permanently, while when you want to use a
program, a temporary copy is put into RAM and that's the copy you use.
3. When working on a file, the original file is left untouched on the Hard Disk until you do a "save";
the "save" copies the new version of the file that's in RAM onto the Hard Disk (and usually
replaces the original file). The file you are modifying, plus all the changes you make, are
kept in RAM until you do a "save".
Virtual Memory is an essential part of all Operating Systems. As we saw above, RAM stores info about all
the programs currently running on your desktop. If you open a program when RAM is full, your OS will try
to locate programs in RAM which are not currently in use. It will then transfer those programs to some
area of the hard disk; that way, space will be created in RAM for your new programs to run. So effectively,
though there was no space in RAM, your OS created memory space with the help of your hard disk.
This memory is called Virtual Memory. The area of the hard disk where the RAM image is copied is known as
the page file, and the process as paging.
You might ask why can't we eliminate the use of hard disk or RAM, given the above scenario…here is a
beautiful explanation of this, from the source cited below.
The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not
geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual
memory, you will notice a significant performance drop. The key is to have enough RAM to handle
everything you tend to work on simultaneously; then, the only time you "feel" the slowness of virtual
memory is when there's a slight pause when you're changing tasks. When that's the case, virtual memory
is perfect.
When it is not the case, the operating system has to constantly swap information back and forth between
RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.
CPU Usage:
It represents the percentage of time that a process used the CPU since the last update. The steps to find out
the current CPU usage:
Go to "Windows Task Manager" [Ctrl-Shift-Esc] > Performance > the top left graph shows you the CPU usage.
In terms of LoadRunner, you should ensure that CPU usage always stays below 80-85% on your
load generator machines for efficient functioning.
Memory usage:
It is the current working set of processes in kilobytes. In the Task Manager Performance tab, Commit Charge (K) represents
memory usage. In terms of LoadRunner, you should ensure that the Commit Charge always stays less than
the Physical Memory (RAM) on your load generator machines so that minimal paging is required.
As explained in one of my previous posts, web_reg_save_param is THE most important function when you
are working with LoadRunner. We will start with the syntax and then touch upon some examples to get a
clear idea.
Find below the available attributes [<List Of Attributes>]. Note that the attribute value strings (e.g.
Search=all) are not case sensitive.
NotFound: The handling method when a boundary is not found and an empty string is generated.
"ERROR", the default, indicates that VuGen should issue an error when a boundary is not found. When set
to "EMPTY", no error message is issued and script execution continues. Note that if Continue on Error is
enabled for the script, then even when NOTFOUND is set to "ERROR", the script continues when the
boundary is not found, but it writes an error message to the Extended log file.
LB: The left boundary of the parameter or the dynamic data. This parameter must be a non-empty, null-
terminated character string. Boundary parameters are case sensitive; to ignore the case, add "/IC" after the
boundary. Specify "/BIN" after the boundary to specify binary data.
RB: The right boundary of the parameter or the dynamic data. This parameter must be a non-empty, null-
terminated character string. Boundary parameters are case sensitive; to ignore the case, add "/IC" after the
boundary. Specify "/BIN" after the boundary to specify binary data.
RelFrameID: The hierarchy level of the HTML page relative to the requested URL.
Search: The scope of the search, i.e. where to search for the delimited data. The possible values are Headers
(search only the headers), Body (search only body data, not headers), or ALL (search body and headers).
The default value is ALL.
ORD: This optional parameter indicates the ordinal or occurrence number of the match. The default ordinal
is 1. If you specify "All", it saves the parameter values in an array.
SaveOffset: The offset of a sub-string of the found value, to save to the parameter. The default is 0. The
offset value must be non-negative.
SaveLen: The length of a sub-string of the found value, from the specified offset, to save to the parameter.
The default is -1, indicating until the end of the string.
Examples:
The examples below are taken from the LoadRunner tutorial to give clarity on topic. We will see more
examples in the coming posts.
web_url("FirstTimeVisitors","URL=/exec/obidos/subst/help/first-time-visitors.html/002-8481703-
4784428>Buy books for a penny ",
"TargetFrame=","RecContentType=text/html","SupportFrames=0″,LAST);
After implementing correlated statements, the modified script looks like this, where user_access_number is
the name of the parameter representing the dynamic data.
web_url("FirstTImeVisitors","URL=/exec/obidos/subst/help/first-time-""visitors.html/
{user_access_number}Buy books for a penny ",
"TargetFrame=","RecContentType=text/html","SupportFrames=0″,LAST);
Note: Each correlation function retrieves dynamic data once, for the subsequent HTTP request. If another
HTTP request at a later point in the script generates new dynamic data, you must insert another correlation
function.
Also, as I wrote in my last post, don't confuse correlation with a parameter, which you might have used in tools
like QTP to pass varying values. A parameter is not a dynamic value captured from the server response; it
is something for which the user has predefined data values available.
• Always analyze the location of the dynamic data within the HTML code itself, and not in the
recorded script.
• Identify the string that is immediately to the left of the dynamic data. This string defines the left
boundary of the dynamic data.
• Identify the string that is immediately to the right of the dynamic data. This string defines the right
boundary of the dynamic data.
• web_reg_save_param looks for the characters between (but not including) the specified
boundaries and saves the information beginning one byte after the left boundary and ending one
byte before the right boundary. web_reg_save_param does not support embedded boundary
characters.
For example, if the input buffer is {a{b{c} and "{" is specified as a left boundary, and "}" as a right
boundary, the first instance is c and there are no further instances—it found the right and left boundaries
but it does not allow embedded boundaries, so "c" is the only valid match. By default, the maximum length
of any boundary string is 256 characters.
Include a web_set_max_html_param_len function in your script to increase the maximum permitted length.
For example, the following function increases the maximum length to 1024 characters:
web_set_max_html_param_len("1024");
Advantages of LoadRunner
Any performance testing tool (or for that matter any other automation tool) should be used on a case-to-
case basis, depending upon the requirements, client budget etc. Since the topic of our blog is limited to
LoadRunner, I would like to present some advantages and disadvantages of using LoadRunner.
Advantages:
1. No need to install it on the server under test. It uses native monitors. For Ex: perfmon for windows
or rstatd daemon for Unix
2. Uses ANSI C as the default programming language1 and other languages like Java and VB.
3. Excellent monitoring and analysis interface where you can see reports in easy to understand
colored charts and graphics.
4. Supports most of the protocols2.
5. Makes correlation3 much easier. We will dig into correlation through a series of posts later.
6. Nice GUI-generated script through one-click recording; of course, you would need to modify the
script according to your needs.
7. Excellent tutorials, exhaustive documentation and active tool support from HP.
Disadvantages:
The only disadvantage I can think of is the prohibitive cost associated with the tool, but that can be
compensated for in the long run when you start getting a good ROI from the tool.
1 Programming/Scripting language is used to represent the captured protocol data and manipulate the data
for play-back.
2 Protocol is simply the language that your client uses to communicate with the system.
3 Correlation is a way to substitute values in dynamic data to enable successful playback.
What is a bug?
A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from
working correctly or produces an incorrect result.
If a bug has high severity then usually it is treated as high priority; so why is priority given by test engineers/project managers and severity given by testers?
High severity bugs affect the end users; testers test an application from the user's point of view, hence they assign the severity. High priority is given to bugs that affect production; project managers assign priority from the production point of view.
Do you know about configuration management tools? What is the purpose of maintaining all the documents in a configuration management tool?
It is focused primarily on maintaining the history of file changes.
Documents are subject to change; for example, consider the test case document.
Initially you draft the test case document and place it in a version control tool (Visual SourceSafe, for example). Then you send it for peer review; the reviewers will provide comments and the updated document will be saved in VSS again. Similarly, the document undergoes further changes and all the change history is maintained in version control.
It helps in referring to the previous version of a document.
Also, only one person can work on a document at a time (by checking it out).
It also keeps track of who made the changes, and the time and date.
Generally all the test plans, test cases, and automation design docs are placed in VSS.
Proper access rights need to be given so that the documents don't get deleted or modified.
Suppose you click a link on the Yahoo shopping site and it leads to some other company's website. How do you test for problems in linking from one site to another site?
1) First I will check whether the mouse cursor turns into a hand icon or not.
2) I will check whether the link is highlighted when I place the cursor on it.
3) Whether the site opens or not.
4) If the site opens, I will check whether it opens in another window or in the same window in which the link itself exists (to check the user-friendliness of the link).
5) How fast that website opens.
6) Whether the correct site opens according to the link.
7) Whether all the items on the site open or not.
8) Whether all other sub-links open or not.
What is the difference between web application testing and client-server testing?
Testing an application on an intranet (without a browser) is an example of client-server testing. (The company firewalls for the server are not open to the outside world, so outside people cannot access the application.) So there will be a limited number of people using that application.
Testing an application on the internet (using a browser) is called web testing; the application is accessible by numerous users around the world (World Wide Web).
So when testing a web application, apart from the above two kinds of testing, there are many other tests to be done depending on the type of web application we are testing.
If it is a secured application (like a banking site) we go for security testing, etc.
If it is an e-commerce application we go for usability testing, etc.
If you have executed 100 test cases and every test case passed, but apart from these test cases you found some defect for which no test case was prepared, then how can you report the bug?
While reporting this bug into the bug tracking tool you will generate the test case, i.e., put in the steps to reproduce the bug.
What is the difference between a web-based application and a client-server application?
The basic difference between a web-based application and a client-server application is that web applications are 3-tier and client-server applications are 2-tier. In web-based applications, changes are made in one place and are reflected on the other layers as well, whereas for client-server applications the changes need to be installed separately on each client machine.
What is a test plan? And can you tell the test plan contents?
A test plan is a high-level document which explains the test strategy, timelines and available resources in detail. Typically a test plan contains:
-Objective
-Test strategy
-Resources
-Entry criteria
-Exit criteria
-Use cases/Test cases
-Tasks
-Features to be tested and not tested
-Risks/Assumptions.
How many test cases can you write per a day, an average figure?
Complex test cases 4-7 per day
Medium test cases 10-15 per day
Normal test cases 20-30 per day
Who will prepare the FRS (functional requirements specification)? What is the importance of the FRS?
The Business Analyst will prepare the FRS.
Based on this we are going to prepare test cases.
It contains
1. Overview of the project
2. Page elements of the application (field names)
3. Prototype of the application
4. Business rules and error states.
5. Data Flow diagrams
6. Use cases contains Actor and Actions and System Responses
How can you decide whether the number of test cases is enough for testing the given module?
If the developed test cases cover all the functionality of the application, we can say the test cases are enough. If you want to know whether the functionality is covered or not, you can use an RTM (Requirements Traceability Matrix).
Retesting: it is a manual process in which the application is tested with an entirely new set of data.
Data Driven Testing (DDT): it is an automated testing process in which the application is tested with multiple sets of test data. It is a much easier procedure than retesting, because in retesting the tester has to sit and give different new inputs manually from the front end, which is a tedious and boring procedure.
After a bug is fixed, testing the application to check whether the fix affects the remaining functionality of the application is regression testing. Mainly in regression testing the bug-fixed module and its connected modules are checked for their integrity after the bug fix.
How do you test a web application?
How do you perform regression testing, i.e., what test cases do you select for regression?
Regression testing is conducted after any bug is fixed or any functionality is changed.
During the defect-fixing procedure some part of the code may be changed or the functionality may be manipulated. In this case the old test cases will be updated or completely rewritten according to the new features of the application in the area where the bug was fixed. The possible outcomes are that the old test cases are executed as usual, some new test cases are added to the existing test cases, or some test cases are deleted.
What are the client-side scripting languages and server-side scripting languages?
If a very low-severity (user interface) defect is detected by you and the developer does not agree with that defect, what will you do?
A user interface defect is a high-visibility defect and easy to reproduce.
Follow the procedure below:
1. Reproduce the defect.
2. Capture screenshots of the defect.
3. Document in the defect report the exact inputs you used to produce the defect.
4. Send the defect report with the screenshots, inputs and procedure for reproducing the defect.
Before doing this you must check that your computer's hardware configuration is the same as the developer's system configuration, and also check that the system graphics drivers are properly installed; if there is a problem with the graphics drivers, user interface errors will appear.
So first check your own setup; if everything is correct on your side, then report the defect by following the above method.
If you are the only person in the office and the client asks you for some changes and you don't understand what the client asked for, what will you do?
One thing here is very important: nobody will ask a test engineer to change the software; that is not your duty. Even if it is related to testing and nobody else is there, try to listen carefully; if you do not understand, ask the client again and inform the corresponding people immediately.
Here the client needs speedy service, and we (our company) should not get any blame from the customer's side.
How many test cases can be written for a calculator having 0-9 buttons, an Add button and an Equal-to button? The test cases should focus only on the add functionality, not the GUI. What are those test cases?
Test cases for the calculator:
Here we have 12 buttons in total: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, Add and Equal-to.
To perform an addition you must press at least 4 buttons, for example '0+1='; for zero you press the button labeled '0', for plus the button labeled '+', for one the button labeled '1', and for equals the button labeled '='. In '0+1=' the positions of '+' and '=' do not vary, so the first number position can be varied from 0 to 9, i.e. by permutations and combinations you can fill that position in 10 ways, and in the same way the second number position can be varied from 0 to 9, i.e. it can also be filled in 10 ways.
The total number of possibilities is 10 × 10 = 100.
This is the exhaustive testing approach, and it is not feasible in all cases.
In mathematics there is a principle that if a function satisfies the starting and ending values of a range, it can be assumed to satisfy the entire range of values from start to end.
So we check the boundary conditions: one test case for '0+0=' (expected value 0) and another test case for '9+9=' (expected value 18). Only these two test cases are enough to test the calculator's add functionality.
How will you prepare a test plan? What are the techniques involved in preparing the test plan?
Test plan means planning for the release. This includes Project background
Test Objectives: Brief overview and description of the document
Test Scope: setting the boundaries
Features being tested (Functionalities)
Hardware requirements
Software requirements
Entrance criteria (when to start testing):
Test environment established, build received from the developer, test cases prepared and reviewed.
Exit criteria (when to stop testing):
All bug status cycles are closed, all functionalities are tested, and all high and medium bugs are resolved.
Project milestones: deadlines
Explain about metrics management?
Metrics is nothing but measurement analysis. Measurement Analysis and Improvement is one of the process areas in CMMI Level 2.
In which way does the tester get Build A, Build B, ..., Build Z of an application? Just explain the process.
After preparation of test cases the project manager will release a software release note; in that document there will be the URL path of the website link from which we receive the build.
In the case of web server projects, you will be provided with a URL or an IP address (for example 192.168.***.***) which will help you access the project using a browser from your system.
In the case of client-server, the build is placed in VSS (the configuration tool), which will help you get the .exe downloaded to your computer.
Apart from bug reporting, what is your involvement in the project life cycle?
As test engineers we design test cases, prepare and execute test cases, track bugs, analyze the results and report the bugs. We are also involved in regression testing, performance/system testing and system integration testing, and finally in preparation of the test summary report.
HIGH LEVEL TC
1. Verify that the user is able to log in with a valid login and valid password.
2. Verify that the user is not able to log in with an invalid login and valid password.
3. Verify that the Reset button clears the filled screen.
4. Verify that a pop-up message is displayed for a blank login.
Etc.
LOW LEVEL TC
1. Verify that after launching the URL of the application the following fields are displayed on the screen: 1. Login Name, 2. Password, 3. OK button, 4. Reset button, 5. the check box provided for the label "remember my pwd", which should be unchecked.
2. Verify that the OK button is disabled before the login and password fields are filled.
3. Verify that the OK button is enabled after the login and password are entered.
4. Verify that the user is able to check the check box provided for the label "remember my password".
Etc.
In this way, we can categorize all the test cases under HIGH LEVEL and LOW LEVEL.
If a project is a long-term project and the requirements keep changing, will the test plan change or not? Why?
Yes, definitely. If a requirement changes, the design documents and specifications (for the particular module which implements the requirement) will also change; hence the test plan would also need to be updated. This is because "Resource Allocation" is one section of the test plan: we would need to write new test cases, review them and execute them, so resource allocation would have to be done accordingly. As a result the test plan would change.
When do we do the beta test? When do we do the alpha test?
Alpha and beta tests come under user acceptance testing. We conduct these two tests before the system is released; we give the customer the opportunity to check whether all the functionalities are covered or not.
Alpha testing is conducted on a software application by real customers at the development site.
Beta testing is conducted on a software product by model customers at the customer site.
How do you select test cases for regression testing? (The point is, when there is a code change, how do you come to know which part of the code or which modules it will affect?)
Consider the example of a form which has a user name, password and Login button.
There is a code change and a new button "Reset" is introduced. Regression testing (for that build) will include testing only the "Login" button and not the Reset button (testing the Reset button will be part of the new-feature testing). Hence the regression tester need not worry about the change in code or functionality, but he has to make sure that the existing functionality is working as desired. Testing of the "Reset" button will be included as part of regression for the next build.
Can you explain with example of high severity and low priority, low severity and high priority, high
severity and high priority, low severity and low priority?
1. High severity and high priority - Database connectivity cannot be established by multiple users.
2. Low severity and low priority - Small issues like, incorrect number of decimal digits in the output.
3. Low severity and high priority - Images not updated.
4. High severity and low priority - In a module of say 2 interfaces, the link between them is broken or is not
functioning.
(1)High priority & High Severity: If u clicks on explorer icon or any other icon then system crash.
(2) Low priority & low severity: In login window, spell of ok button is "Ko".
(3)Low priority & high serverty: In login window, there is a restriction login name should be 8 characters if
user enter 9 or than 9 in that case system get crash.
(4)High priority & low severity: Suppose logo of any brand company is not proper in their product. So it
affects their business.
What will be the Test case for ATM Machine & Coffee Machine?
Test cases for an ATM machine:
1. Successful inspection of the ATM card
2. Unsuccessful operation due to inserting the card at a wrong angle
3. Unsuccessful operation due to an invalid account, e.g., another bank's card or an expired card
4. Successful entry of the PIN number
5. Unsuccessful operation due to entering a wrong PIN number 3 times
6. Successful selection of language
7. Successful selection of account type
8. Unsuccessful operation due to an invalid account type
10. Successful selection of the withdraw operation
11. Successful selection of the amount to be withdrawn
12. Successful withdraw operation
13. Unsuccessful withdraw operation due to wrong denominations
14. Unsuccessful withdraw operation due to the amount being greater than the daily limit
15. Unsuccessful withdraw operation due to lack of money in the ATM
16. Unsuccessful withdraw operation due to the amount being greater than the available balance
17. Unsuccessful withdraw operation due to the number of transactions being greater than the daily limit
18. Unsuccessful withdraw operation due to clicking cancel after inserting the card
19. Unsuccessful withdraw operation due to clicking cancel after inserting the card & PIN number
20. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN number & language
21. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN number, language & account type
22. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN number, language, account type & withdraw operation
23. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN number, language, account type, withdraw operation & amount to be withdrawn
In the SDLC process, what are the roles of the PM, TL, developer and tester in each and every phase? Please explain in detail.
In the SDLC we have these phases
1. Initial phase
2. Analysis phase
3. Designing phase
4. Coding phase
5. Testing
6. Delivery and maintenance
In the initial phase the project manager prepares a document for the requirements, the team leader prepares a team of test engineers, developers are provided by the project manager, and the tester prepares test cases for that particular project.
In the analysis phase all the members have a meeting to finalize the technology to be used to develop the project, the people involved, the timelines, etc.
In the designing phase the project manager and senior-level management give the directions and guidelines to the team members for developing the actual code; that is, the design guidelines are given in this phase.
In the coding phase the developers develop the actual code using those guidelines and release the application to the testers.
In the testing phase the testers run their test cases against the application, prepare a bug profile document if there is any defect/bug in that application, and send it back to the developers; the developers rectify the defects and release the application as the next build, and if a bug is not understood it is sent to the project lead.
In the delivery phase the test engineers deploy the application in the client environment.
In the maintenance phase, if the client faces any problem with the application, it is solved by the project lead with the help of the testers and developers.
How do you test an application without having any requirements or documents?
If it is an existing system or if a build is available, then we explore the system while testing. This helps in knowing the functional use of the system and its usability.
Asking questions of end users about how they use it will be even more beneficial. Also, you may work with a BA to know more about the system.
Black box testing is nothing but the same: you explore the system without having any prior knowledge of it.
What are the reasons why parameterization is necessary when load testing the Web server and the
database server?
When you test your applications, you may want to check how the application performs the same operations with multiple sets of data. For example, suppose you want to check how your Web site responds to ten separate sets of data. You could record ten separate tests, each with its own set of data. Alternatively, you can create Data Table parameters so that your test runs ten times, each time using a different set of data.
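In LoadRunner terms this is done by replacing the hard-coded values with parameters. As a rough illustration, here is a minimal sketch of a parameterized form submission; the URL, field names and the "username"/"password" parameters are hypothetical and would normally be backed by a data file defined in VuGen:

// Each iteration/Vuser picks a fresh row of data for {username} and {password},
// so the server sees varying input instead of the single recorded value.
web_submit_data("login",
    "Action=http://www.example.com/login",
    "Method=POST",
    "TargetFrame=",
    ITEMDATA,
    "Name=user", "Value={username}", ENDITEM,
    "Name=password", "Value={password}", ENDITEM,
    LAST);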
What documents are needed to create a test case? How do you tell it is a test case?
System requirements specification, use case document, test plan.
The customer details form has fields like customer name and customer address. After completion of this module, the client raises a change request to insert two radio buttons after the customer address. How do you check this as a tester?
1. First we need to verify whether the radio buttons are there or not.
2. Confirm that the radio buttons are present after the customer address.
3. Verify the number of radio buttons.
4. Verify that only one radio button is checked initially when we open the customer details form (if it is mentioned in the FS).
5. Verify the functionality of the radio buttons, i.e., if we check one radio button, the second radio button should become unchecked.
6. Verify the spelling of the radio button label names.
7. Verify the alignment of the radio buttons in the form.
At the time of testing web-based applications and client-server applications, what have you observed as a tester?
We generally check the links, data retrieval and posting.
We perform load and stress testing, especially for web-based and client-server applications.
What is testing policy and testing methodology? And what is the difference?
Testing policy means all types of testing or testing techniques (i.e. functional testing, sanity testing
etc).Testing methodology means white box and black box testing.
What participation can a manual tester have in documentation? Are there any tools available just for documentation?
Yes, a manual tester will prepare sub test plan documents; as far as I know, no tool is used to prepare documentation.
What is the difference between low and high level test cases? Give Examples?
High level test cases are those which cover the major functionality in the application (i.e. retrieve, update, display, cancel (functionality-related test cases), database test cases).
Low level test cases are those which are related to the UI.
Is it mandatory to use USECASES or directly one can write test cases from requirements?
It's not mandatory to write Use Cases, if the requirements are clear you can go ahead with Test Cases. Use
Cases are written to know the business flow of the module/application.
Given the requirements collection document, which test plan can the tester prepare?
The test lead can prepare a test plan which allows testing the application in an efficient, effective and optimized way. Test development will be done by the testers using the test plan; in the test plan they prepare the test strategy.
As far as the SDLC is concerned, will the last test case be written for the "Maintenance Phase"?
As far as the SDLC is concerned, the last test case will be written for "Acceptance Testing".
What is the difference between project-based testing and product-based testing?
Project-based is nothing but client requirements; product-based is nothing but market requirements.
E.g., stitching a shirt is project-based and a ready-made shirt is product-based.
What is the testing process in relation to an application? The testing process is the one which tells you how the application should be tested in order to minimize the bugs in the application.
One main thing: no application can be released as a bug-free application; that is impossible.
What is the difference between quality assurance and system testing? Explain in detail with an example.
Quality Assurance: it is nothing but building adequate confidence in the customer that the developed software is according to the requirements. The entire SDLC comes under QA. It is process oriented.
System Testing: it is the process of executing the entire system, i.e. checking the software as well as the parts of the system.
If there is not sufficient time for testing and you have to complete the testing, then what will you do?
When I have less time to test the product, I will take the following steps:
1) Sanity or smoke testing
2) Usability Testing
3) Formal Functionality and GUI Testing
4) Walk through with the Product
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks
performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It
includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using
LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual
scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be
assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that
our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process,
LoadRunner Analysis and Monitoring, LoadRunner Books Online.
The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser
scripts for a variety of application types and communication protocols.
What Component of LoadRunner would you use to play Back the script in multi user mode? –
The
Controller component is used to playback the script in multi-user mode. This is done during a scenario run
where a vuser script is executed by a number of vusers in a group.
What is a scenario? –
A scenario defines the events that occur during each testing session. For example, a scenario defines and
controls the number of users to emulate, the actions to be performed, and the machines on which the virtual
users run their emulations.
We use VuGen to develop a Vuser script by recording a user performing typical business processes on a
client application. VuGen creates the script by recording the activity between the client and the server. For
example, in web based applications, VuGen monitors the client end of the database and traces all the
requests sent to, and received from, the database server. We use VuGen to: Monitor the communication
between the application and the server; Generate the required function calls; and Insert the generated
function calls into a Vuser script.
Parameters are like script variables. They are used to vary the input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. This helps to better simulate the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.
What is correlation? Explain the difference between automatic correlation and manual correlation?
–
Correlation is used to obtain data which are unique for each run of the script and which are generated by
nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also
optimizing the code (to avoid nested queries). Automatic correlation is where we set some rules for
correlation. It can be application server specific. Here values are replaced by data which are created by
these rules. In manual correlation, the value we want to correlate is scanned and create correlation is used
to correlate.
How do you find out where correlation is required? Give few examples from your projects? –
Two ways:
First we can scan for correlations, and see the list of values which can be
correlated. From this we can pick a value to be correlated. Secondly, we can record two scripts and
compare them. We can look up the difference file to see for the values which needed to be correlated. In
my project, there was a unique id developed for each customer, it was nothing but Insurance Number, it
was generated automatically and it was sequential and this value was unique. I had to correlate this value,
in order to avoid errors while running my script. I did using scan for correlation.
Automatic correlation from web point of view can be set in recording options and correlation tab. Here we
can enable correlation for the entire script and choose either issue online messages or offline actions, where
we can define rules for that correlation. Automatic correlation for database can be done using show output
window and scan for correlation and picking the correlate query tab and choose which query value we want
to correlate. If we know the specific value to be correlated, we just do create correlation for the value and
specify how the value to be created.
When do you disable log in Virtual User Generator, When do you choose standard and extended
logs? –
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we
add a script to a scenario, logging is automatically disabled.
Standard Log Option:
When you select
Standard log, it creates a standard log of functions and messages sent during script execution to use for
debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario,
logging is automatically disabled
Extended Log Option: Select
extended log to create an extended log, including warnings and other messages. Disable this option for
large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We
can specify which additional information should be added to the extended log using the Extended log
options.
VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints.
The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed
during scenario execution. The debug information is written to the Output window. We can manually set
the message class within your script using the lr_set_debug_message function. This is useful if we want to
receive debug information about a small section of the script only.
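As a rough sketch of that idea (assuming the usual LoadRunner constant names; the step below is purely illustrative), extended logging can be switched on around just the lines of interest:

// Turn extended logging on only around the section we want to debug.
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_ON);

web_url("SuspectStep",                       // hypothetical step under investigation
    "URL=http://www.example.com/suspect",
    "TargetFrame=",
    LAST);

// Switch the extra logging back off so the rest of the run stays quiet.
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_OFF);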
How do you write user defined functions in LR? Give me few functions you wrote in your previous
project? –
Before we create the user-defined functions we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter (the parameter takes its value from the function in the DLL). The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined functions are as follows: GetVersion, GetCurrentTime, GetPltform are some of the user-defined functions used in my earlier project.
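For illustration only, here is a minimal sketch of what such an exported function could look like on the DLL side; the function name and behaviour are invented, not the ones mentioned above:

/* my_udf.c - compiled into a DLL and placed in the VuGen bin directory (illustrative). */
#include <stdio.h>

/* User-defined functions for LoadRunner are commonly declared with this signature. */
__declspec(dllexport) char* GetGreeting(char* name, char* unused)
{
    static char buffer[128];             /* static so the returned pointer stays valid */
    sprintf(buffer, "Hello, %s", name);  /* build a simple value from the input */
    return buffer;
}

Inside the script, such a DLL would typically be loaded with lr_load_dll("my_udf.dll") before the function (or the parameter based on it) is used.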
c) Extended Think Time - In think time we have two options like Ignore think time and Replay think
time.
d) General - Under general tab we can set the vusers as process or as multithreading and whether each step
as a transaction.
We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings,
Pacing tab, set number of iterations.
Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.
This option is used to gradually increase the number of Vusers/the load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp Up, go to 'Scenario Scheduling Options'.
VuGen provides the facility to use multithreading. This enables more Vusers to be run per
generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser,
thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single
generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for
the given number of
Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers
to be run per generator.
If you want to stop the execution of your script on error, how do you do that? –
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the
Actions section, execute the vuser_end section and end the execution. This function is useful when you
need to manually abort a script execution as a result of a specific error condition. When you end a script
using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first
uncheck the "Continue on error" option in Run-Time Settings.
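A small sketch of that pattern, assuming the usual LR_PASS return constant; the step name, URL and error message are invented for illustration:

int status;

// Run the critical step and keep its return code.
status = web_url("CriticalStep",
    "URL=http://www.example.com/critical",
    "TargetFrame=",
    LAST);

// If the step failed, log an error and stop this Vuser gracefully.
if (status != LR_PASS)
{
    lr_error_message("Critical step failed - aborting this Vuser");
    lr_abort();
}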
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a
second. When we compare this with the transaction response time, we will notice that as throughput
decreased, the response time also decreased. Similarly, the peak throughput and highest response time
would occur approximately at the same time.
The configuration of our systems refers to that of the client machines on which we run the Vusers. The
configuration of any client machine includes its hardware settings, memory, operating system, software
applications, development tools, etc. This system component configuration should match with the overall
system configuration that would include the network infrastructure, the web server, the database server, and
any other components that go with this larger system so as to achieve the load testing objectives.
How do you identify the performance bottlenecks?
Performance Bottlenecks can be detected by using monitors. These monitors might be application server
monitors, web server monitors, database server monitors and network monitors. They help in finding out
the troubled area in our scenario which causes increased response time. The measurements made are
usually performance response time, throughput, hits/sec, network delay graphs, etc.
If the web server, database and network are all fine, where could the problem be?
The problem could be in the system itself, in the application server, or in the code written for the application.
Using web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
Overlay Graph: it overlays the content of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph's values and the right Y-axis shows the values of the graph that was merged.
Correlate Graph: it plots the Y-axes of two graphs against each other. The active graph's Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph's Y-axis.
How did you plan the Load? What are the Criteria? –
Load test is planned to decide the number of users, what kind of machines we are going to use and from
where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction
profile. Task Distribution Diagram gives us the information on number of users for a particular transaction
and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile
gives us the information about the transactions name and their priority levels with regard to the scenario we
are deciding.
Think time is the time that a real user waits between actions. Example: When a user receives data from a
server, the user may wait several seconds to review the data before responding. This delay is known as the
think time. Changing the Threshold: Threshold level is the level below which the recorded think time will
be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording
options of the Vugen.
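As a simple illustration (the step names, URL and 5-second value are made up), a recorded or manually added think time sits between steps like this:

web_url("ViewAccount",                    // hypothetical first step
    "URL=http://www.example.com/account",
    "TargetFrame=",
    LAST);

lr_think_time(5);                         // pause 5 seconds to emulate a user reading the page

web_url("Logout",                         // hypothetical next step
    "URL=http://www.example.com/logout",
    "TargetFrame=",
    LAST);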
The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging when we want information about: parameter substitution, data returned by the server, and advanced trace.
lr_debug_message - The lr_debug_message function sends a debug message to the output log when the
specified message class is set.
lr_output_message - The lr_output_message function sends notifications to the Controller Output window
and the Vuser log file.
lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output
window.
lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This
function sets a SQL statement to be processed.
lrd_fetch - The lrd_fetch function fetches the next row from the result set.
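A short hedged sketch using two of the message functions listed above; the message text and the "username" parameter are invented for illustration:

// Write an informational line to the Controller Output window and the Vuser log.
lr_output_message("Starting iteration for user %s", lr_eval_string("{username}"));

// Report a failure condition as an error in the LoadRunner Output window.
lr_error_message("Login failed for user %s", lr_eval_string("{username}"));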
Throughput - If the throughput scales upward as time progresses and the number of Vusers
increase, this indicates that the bandwidth is sufficient. If the graph
were to remain relatively flat as the number of Vusers increased, it would
be reasonable to conclude that the bandwidth is constraining the volume of
data delivered.
Load Runner provides you with five different types of goals in a goal oriented scenario:
In Running Vuser graph correlated with the response time graph you can see that as the number of Vusers
increases, the average response time of the check itinerary transaction very gradually increases. In other
words, the average response time steadily increases as the load
increases. At 56 Vusers, there is a sudden, sharp increase in the average response
time. We say that the test broke the server. That is the mean time before failure (MTBF). The response
time clearly began to degrade when there were more than 56 Vusers running simultaneously.
While editing the script we have to insert transaction points and rendezvous points.
A rendezvous point is inserted into the script to emulate peak load on the server.
Syntax: lr_rendezvous("rendezvous point");
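For illustration, a minimal sketch of a rendezvous point placed in front of a step that is also measured as a transaction; the names "login_peak" and "login" and the URL are hypothetical:

// All Vusers wait here until the rendezvous policy releases them together.
lr_rendezvous("login_peak");

// Measure the response time of the login step as a named transaction.
lr_start_transaction("login");

web_url("Login",
    "URL=http://www.example.com/login",
    "TargetFrame=",
    LAST);

lr_end_transaction("login", LR_AUTO);   // LR_AUTO lets LoadRunner decide pass/fail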
Title bar (name of the scenario currently being worked on), menu bar (for selecting the various commands), toolbar, and status bar.
What are the 5 icons that appear at the bottom of the Controller window?
What is a .lrs file?
LoadRunner saves the scenario information in a .lrs scenario file.
We can filter the information to display only those items that meet the selected criteria (filter box); for example, you can filter the Vusers to show only those in the Ready state.
Sorting: we can sort all the Vusers in the Vuser list in order of their Vuser ID (1, 2, 3, 4, 5, 6, 7, 8, 9).
How do you set the maximum number of Vusers that a host can run?
We can modify the maximum number of Vusers according to the available resources, the needs of the scenario, and the LoadRunner license limits.
It reflects the capability of the host, i.e. how many Vusers can be initialized on it at a time.
What protocols does LR support?
Industry-standard protocols, for example HTTP and ODBC, are explicitly supported by LR. Furthermore, any protocol that communicates over a Windows socket can be supported.
What do I need to know to do load testing, in addition to knowing how to use the LR tool?
You need to be able to monitor system bottlenecks during a test run, capturing and displaying the performance data from every server or component.
The application components used are the client, the database, and additionally a business/application server.
A web server works on and through LAN, WAN or WWW connections.
Application server components are the client, business server and database server, without the use of the WWW but through protocols like FTP.
The following function was developed for use with the Mercury LoadRunner performance tool. Its main use is to return the current system time at any given point while the LR script is running. It can be used to report transaction times, and script start and end times.
long get_secs_since_midnight(void)
{
    char *curr_hr;      /* pointer to a parameter with the current clock hour */
    char *curr_min;     /* pointer to a parameter with the current clock minute */
    char *curr_sec;     /* pointer to a parameter with the current clock second */
    long current_time;  /* current number of seconds since midnight */
    long hr_secs;       /* current hour converted to seconds */
    long min_secs;      /* current minutes converted to seconds */
    long secs_secs;     /* current number of seconds */

    /* current_hr, current_min and current_sec are date/time parameters defined in VuGen */
    curr_hr  = lr_eval_string("{current_hr}");
    curr_min = lr_eval_string("{current_min}");
    curr_sec = lr_eval_string("{current_sec}");

    hr_secs   = atol(curr_hr) * 3600;
    min_secs  = atol(curr_min) * 60;
    secs_secs = atol(curr_sec);

    current_time = hr_secs + min_secs + secs_secs;
    return current_time;
}
What are the reasons why parameterization is necessary when load testing the web server and the
data base server?
What is LR?
When LR is used?
RCL enables the Controller to start the application on the host machine.
Wrun.ini instructs the Vuser to use the host's WinRunner configuration file.
What do you mean by path? Using a WinRunner configuration file that is in a specific location on the network.
What information is contained in the script window for each script in the list?
It displays the number of Vusers that execute the Vuser script during each second of the scenario run. Only the Running and Rendezvous states are included (Loading, Ready and Pause are not displayed).
Each report viewer contains a report header and a report viewer toolbar.
It displays general scenario information such as the title, scenario, results, start time, end time and duration.
The percentage of transactions that were performed within a given time range.
Web sites are essentially client/server applications - with web servers and 'browser'
clients. Consideration should be given to the interactions between html pages,
TCP/IP communications, Internet connections, firewalls, applications that run in web
pages (such as applets, java script, plug-in applications), and applications that run
on the server side (such as cgi scripts, database interfaces, logging applications,
dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers
and browsers, various versions of each, small but sometimes significant differences
between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can
become a major ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time?),
and what kind of performance is required under such loads (such as web server
response time, database query response times). What kinds of tools will be needed
for performance testing (such as web load testing tools, other tools already in house
that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kinds of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should page
appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How
much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and
what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that
affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and
what are the requirements for maintaining, tracking, and controlling page content,
graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variations, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
• How extensive or customized are the server logging and reporting requirements;
are they considered an integral part of the system and do they require testing?
• How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup
'comp.security.announce' and links concerning web site security in the 'Other
Resources' section.
Some usability guidelines to consider - these are subjective and may or may not
apply to a given situation (Note: more information on usability testing issues can be
found in articles about web site usability in the 'Other Resources' section):
• Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so
that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided
or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end
pages.
• The page owner, revision date, and a link to a contact person or organization
should be included on each page.
Many new web site test tools have appeared in the recent years and more than 280
of them are listed in the 'Web Test Tools' section.
Client-server testing is on a three-tier architecture, and when testing has to be done on this we need to consider all types of testing like stress testing, data-volume testing, load testing and performance testing.
When you are doing normal web testing, you will be doing navigation testing, frame testing, checking for broken links or missing URLs, and static text testing.
In a 3-tier architecture there are 3 layers: 1) the application (presentation) layer, 2) the business logic layer, and 3) the data layer.
In an n-tier architecture, the data layer is divided into 2 layers, i.e. data access and database.
In an n-tier architecture, the data access layer and database layer may or may not reside in the same location. Keeping that in consideration, we have to prepare the test strategy and test approach.
During password field testing, the below options should be given focus:
Q. How do you do browser testing (create a standard script and run it for the different browser combinations)?
The GUI architecture and event messaging differ from browser to browser; for example, IE uses Win32::OLE messaging and Firefox uses some GTK-based messaging. So it is generally difficult to create one standard script that runs on all browsers. But tools like WinRunner and QTP use complex procedures internally to handle different browsers. Manual testing can always be performed if the application supports different browsers like IE, Firefox, Opera, Netscape, etc.
Q. What bugs mainly come up in web testing, and what severity and priority do we give them?
In web testing, mainly the bugs come from navigation area. These could be missing
links, broken links, invalid links etc. Also there are bugs in downloading
data/image/audio/video files from the website to the local machine and in uploading
data/image/audio/video from local machine to the web server. Other than these a lot
of bugs also come from the contents/look and feel/cosmetic issues.
1. Type the URL in the address bar (e.g. www.yahoo.com) and click the 'Go' button.
2. Check whether the page navigates to the Yahoo home page.
3. If it navigates to the Yahoo home page, the test case is passed; otherwise it has failed.
4. Also check that when we enter the URL in the address bar and press the Enter key on the keyboard, it navigates to the Yahoo home page.
5. When we click the Refresh button on the Yahoo home page, the same page should be displayed.
Q. What happens in a web application when you enter all the data, click on the submit button, and suddenly the connection goes off? Will the data be present if you return to the page?
If the data reached the web server before the disconnection, the system will persist the data in the database. If the connection fails before the data reaches the server, the data won't be persisted and will be lost.
Q. What are the important scenarios for testing emails? How do you test emails? Which tool is best for testing email?
We can categorize the different parts on which a tester may perform the testing.
All the scenarios that we mentioned above for incoming mails are valid for outgoing mails also; hence all of them have to be verified.
3) Mail failure: check the mail failure when mail is sent to an incorrect address; the failure notice should also indicate the reason for the failure.
Web services testing is nothing but testing the application; in this testing we just see whether the concept (functionality) of the web services is working or not.
Usability testing is done for "user friendliness". In this we check how comfortable the customer is in using the application. For example, suppose that while logging in a user forgot his password; in usability testing you have to check whether there is a "forgot password" option and, if you click it, whether it asks for the secret question, and many more things you can test, such as whether there are minimize and maximize buttons for a window, and so on.
This can be done by using Microsoft Visual Studio 2005 and 2008, in which you can find new ways to test a particular webpage, such as load testing, website testing and so on.
There is also a tool called Fiddler which is used to measure the traffic for a website which is currently being used by a number of users.
Q. What test cases you will execute for compatibility testing with different
browsers?
There are a lot of issues which may arise while testing in different browsers.
Following are some points when comparing IE with Firefox; in IE the following issues may occur which need compatibility testing but do not occur in Firefox:
The page may take too long to load, or the page may fail to load.
By performing performance testing we can verify how the system behaves when subjected to load at or beyond the specified requirements and limits.
Perform configuration testing to determine how the system deals with different hardware, software, operating systems, network conditions, etc.
PERL is basically used for scripting, so it will be helpful when you need to automate your test cases, and also for result collection. It actually depends on the use.
Q. Give some test cases for testing a search engine website say Google.
The test cases for a search engine would be very vast. It totally depends upon the scope of
testing. Some of the test cases are as mentioned below:
1) Check for simple strings like "European Premier League" or "Grammy Awards 2005".
2) Test the functionality of multiple page display by clicking on page number.
3) Verify whether a combination string works like "European Premier League_Christiano
Ronaldo"
4) Perform test for opening the links in new windows.
Q. What can be the security checks on the web site, other than the login/password screens?
Other than login/password, you can do other security testing checks like SQL injection methods, cookie encryption testing, testing of authorization & authentication, etc.
This can be a bug, but we cannot be sure before checking the below factors:
- Is the internet connection working fine?
- Is the browser on which the error comes supported for the software?
- Is the URL correct?
- Is the popup blocker off?
If the answer to all the above questions is yes, then this is a bug.
Q. How will you determine if the architecture of any web site is of two tier,
three tier or multi tier?
The tier of the architecture can be determined by checking the client, server and database. If there is a client and a database, then it is a two-tier architecture. If the web application has an application server and a database, then it is a three-tier architecture.
Response time is the time taken by the server to give response to a particular action or
request.
Pages per second gives the number of pages downloaded per second.
Transaction Response time is the time taken to perform a transaction in the scenario.
All the SDLC models can be used for testing web applications. A web application is a
combination of one or more modules. Depending upon the web application we use
different models. Ex:- V-model, Spiral model and waterfall model.
Scenario 1:
Log into the application and leave it idle for a time equal to or a little more than the prescribed session timeout. With the application still open, click on any of the links; it should give an error that the session has expired.
Scenario 2:
Log into the application and leave it idle for a time a little less than the prescribed session timeout. With the application still open, click on any of the links; the link should open the appropriate page.
Q. What are the different ways in which cookie testing can be done for a
website?
Following are the different ways to perform cookie testing:
1. Disabling Cookies- This is probably the easiest area of cookie testing. What
happens to the Web site if all cookies are disabled? Start by closing all instances of
your browser and deleting all cookies from your PC set by the site under test. The
cookie file is kept open by the browser while it's running, so you must close the
browser to delete the cookies. Closing the browser also removes any per-session
cookies in memory. Disable all cookies and attempt to use the site's major features
and functions. Most of the time, you will find that these sites won't work when
cookies are disabled. This isn't a bug, but rather a fact of life: disabling cookies on a
site that requires cookies (of course!) disables the site's functionality.
2. Selectively Rejecting Cookies- What happens to the site if some cookies are
accepted and others are rejected? Start by deleting all cookies from your PC set by
the site under test and set your browser's cookie option to prompt you whenever a
Web site attempts to set a cookie. Exercise the site's major functions. You will be
prompted for each and every cookie the site attempts to set. Accept some and reject
others. (Analyze site cookie usage in advance and draw up a test plan detailing what
cookies to reject/accept for each function.) How does the site hold up under this
selective cookie rejection? As above, does the Web server detect that certain cookies
are being rejected and respond with an appropriate message? Or does the site
malfunction, crash, corrupt data, or misbehave in other ways?
3. Corrupting Cookies- Along the way, as cookies are created and modified, try things
like
a. Altering the data in the persistent cookies. Since the per-session cookies are stored only
in memory, they aren't readily accessible for editing.
b. Selectively deleting cookies. Allow the cookie to be written (or modified), perform several more actions on the site, then delete that cookie. Continue using the site. What happens? Is it easy to recover? Is any data lost or corrupted?
4. Cookie Encryption - While investigating cookie usage on the site you're testing, pay
particular attention to the meaning of the cookie data. Sensitive information like
usernames and passwords should NOT be stored in plain text for all the world to read;
this data should be encrypted before it is sent to your computer.
HTTP is Hyper Text Transport Protocol and is transmitted over the wire via PORT
80(TCP). You normally use HTTP when you are browsing the web, it is not secure, so
someone can eavesdrop on the conversation between your computer and the web server. HTTPS (Hypertext Transfer Protocol over Secure Socket Layer, or HTTP over
SSL) is a Web protocol developed by Netscape and built into its browser that encrypts
and decrypts user page requests as well as the pages that are returned by the Web server.
HTTPS is really just the use of Netscape's Secure Socket Layer (SSL) as a sub layer
under its regular HTTP application layering. (HTTPS uses port 443 instead of HTTP port
80 in its interactions with the lower layer, TCP/IP.) SSL uses a 40-bit key size for the
RC4 stream encryption algorithm, new-age browsers use 128-bit key size which is more
secure than the former, it is considered an adequate degree of encryption for commercial
exchange. HTTPS is normally used in login pages, shopping/commercial sites.
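As a quick illustration of the port numbers mentioned above, the following sketch uses
Python's standard ssl and socket modules to perform the TLS handshake that precedes every
HTTPS request; the hostname is a placeholder.

import socket
import ssl

hostname = "example.com"                    # placeholder host
context = ssl.create_default_context()

# HTTPS talks to port 443; the TLS handshake completes before any HTTP data is sent.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())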
The purpose of scalability testing is to determine whether your application scales as
the workload grows. Suppose your company expects a six-fold load increase on
your server in the next two months. You may need to increase server
performance and shorten the request processing time to serve visitors better. If
your application is scalable, you can shorten this time by upgrading the server
hardware, for example by increasing the CPU frequency and adding more RAM.
You can also increase request performance by changing the server software, for
example by replacing text-file data storage with SQL Server databases. To find
the better solution, you can first test hardware changes, then software changes, and
then compare the results of the tests.
If the scalability tests report that the application is not scalable, there is a
bottleneck somewhere within the application.
Scalability testing can be performed as a series of load tests with different hardware
or software configurations, keeping the other settings of the testing environment unchanged.
When performing scalability testing, you can vary such variables as the CPU
frequency, the number and type of servers, the amount of available RAM, and so on.
Web server statistics, database server statistics, and network statistics are monitored during
the performance testing of the web application.
Q. How do you test the server response time? Do you use any tool? How to
do it manually?
You can check the server response time using load testing (non-functional testing).
The LoadRunner tool can be used for this. You can check the server response time manually
for a limited number of users, but doing so for a large number of users requires heavy
resources; this is difficult to do manually and can easily be done with the LoadRunner tool.
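For a small number of users, a manual check can be as simple as timing a single request.
The sketch below uses only the Python standard library; the URL is a placeholder for the
server under test.

import time
import urllib.request

url = "http://example.com/"                 # placeholder server under test
start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    response.read()                         # read the full body so transfer time is included
elapsed = time.perf_counter() - start
print(f"Server response time: {elapsed:.3f} seconds")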
1. Clicking the Addresses icon should open the address book. Existing contact
information should be displayed.
2. Clicking Add Contact should allow the user to add a contact.
3. Contact information should be editable by clicking the contact button and selecting
edit.
4. Contact information should be deletable by clicking the contact and selecting
delete.
5. The user should receive a message to add the contact after sending mail to a new contact.
Q. How can you test the security of a website both manually and by using a
tool? If by a tool, then which one and how?
Following are some test cases for testing the security of a website manually:
1. The user should not be able to log in after entering an incorrect username/password.
2. User information such as the user id should not be displayed in the site address,
i.e. in the browser's address bar.
3. After clicking logout, the user should not be able to access the application using the back
button.
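Test case 1 above can be automated with a short script. The following is a hedged sketch
in Python using the requests library; the login URL, the form field names, and the "Welcome"
success marker are placeholder assumptions about the site under test.

import requests

LOGIN_URL = "https://example.com/login"     # hypothetical login endpoint

response = requests.post(LOGIN_URL, data={
    "username": "valid_user",
    "password": "wrong_password",           # deliberately incorrect credentials
})

# The application must not sign the user in with incorrect credentials.
assert "Welcome" not in response.text, "Login succeeded with a wrong password!"
print("Incorrect credentials were correctly rejected.")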
Q/A1
What is Software Quality Assurance?
Software QA covers the complete software development process - monitoring and improving the process,
making sure that all standards and procedures are followed, and guaranteeing that issues are found and dealt
with. SQA is oriented to defect 'prevention'.
It's impossible to declare one of the testing approaches to be better than another. It depends on the Quality
Assurance Engineer's skill set, the type of the project, and what is trying to be achieved during testing.
In recent years the term gray box testing has come into common usage. Gray box testing is a software
testing procedure that uses an amalgamation of black box testing and white box testing techniques.
With the gray box testing approach, the Quality Assurance Engineer does have knowledge of some of the
internal structure of the application under test. In gray box testing, the Quality Assurance Engineer creates
some test cases for the internal mechanism of the application under test. For the rest of the test cases, the
Quality Assurance Engineer uses a black box approach, applying inputs to the application under test and
validating the outputs.
White box testing, sometimes called glass box testing or clear box testing in different Quality Assurance
organizations, uses an internal perspective of the application under test to design test cases based on
knowledge of its internal structure. In order to work as a white box tester, the tester has to work with the
application code and therefore needs to possess knowledge of coding and logic.
What is the difference between QA and testing?
The main difference between QA and testing is that software quality assurance is oriented to defect
'prevention', while software testing is oriented to defect 'detection'. In other words, testing measures the
quality of a developed software application, while QA measures the quality of the processes used to create a
quality software application.
Ad hoc software testing is a type of testing executed without documentation and planning. Ad hoc tests are
intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing.
Acceptance Testing is black box testing performed by the customer to determine whether to accept a software
product. It is normally performed prior to software application delivery to validate that the software application
meets a set of agreed acceptance criteria.
The test strategy is a defined set of methods and objectives that direct test design and execution. The test
strategy describes the overall testing approach for the testing of the application under test, including stages of
testing, completion criteria, and general testing techniques. The test strategy forms the basis for test plans.
In software quality assurance, a test harness is a software application configured to verify an application
under test or test environment.
In software quality assurance, a traceability matrix can be used to show relationships between software
requirements and test cases.
In software quality assurance, a test suite is a collection of test cases used to validate the software program
and to show that it has some defined set of behaviours. Usually a test suite holds prerequisite steps, clear goals
and instructions for each collection of test cases, in addition to information on the system and environment
configuration to be used during testing and validation.
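To make the idea concrete, the sketch below groups two invented test cases into a suite
using Python's built-in unittest module; the CartTests class is a made-up example, not part
of any real project.

import unittest

class CartTests(unittest.TestCase):
    def test_new_cart_is_empty(self):
        self.assertEqual(len([]), 0)        # stand-in for real application logic

    def test_adding_item_increases_count(self):
        cart = []
        cart.append("item")
        self.assertEqual(len(cart), 1)

# A test suite collects related test cases so they can be run and reported together.
suite = unittest.TestSuite()
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(CartTests))
unittest.TextTestRunner(verbosity=2).run(suite)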
Most types of testing benefit from automation, but some testing types need real human attention and
intelligence. It is possible, but difficult, to automate the GUI even with agile-compatible tools like Selenium.
Usability testing, exploratory testing, and tests that will never fail should not be considered targets for test
automation.
Some testing types need human attention and intelligence, but most types of testing benefit from
automation. At the same time, a QA Engineer should automate only that which needs automating. No one can
automate 100 percent of the testing work, but in certain areas like performance testing, load testing, stress
testing, and regression testing, your team may have a chance of reaching nearly 100 percent test automation.
Other areas of easy automation are API testing and test data set-up and creation.
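As an illustration of the GUI automation mentioned above, here is a minimal Selenium
sketch in Python. It assumes the selenium package and a matching browser driver are
installed; the URL and the element id "search" are placeholders.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                 # or Firefox(), Edge(), ...
try:
    driver.get("https://example.com")       # hypothetical application under test
    field = driver.find_element(By.ID, "search")
    field.send_keys("selenium")
    assert "Example" in driver.title        # a simple GUI-level check
finally:
    driver.quit()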
Proper test automation should be a core agile practice. Successful agile projects depend on test automation.
Thriving agile teams expect to have working software all the time, which allows them to build and deploy a
production-ready software application as often as the customer needs. Agile teams cannot accomplish this
goal without constant and proper testing. The following are the main reasons for test automation in an
agile process:
Stress testing is used to evaluate the application's behaviour when it is pushed beyond normal or peak load
conditions. The main goal of stress testing is to discover application issues that appear only under high load
conditions. These can include such issues as synchronization problems, race conditions, and memory leaks.
Graceful performance degradation under high load, leading to a non-catastrophic failure, is the desired result.
A load test engineer can use the same scripts and tools as were used for performance testing, but with a
very high level of simulated load.
Performance testing is used to determine the response time/latency, throughput, resource utilization (CPU,
RAM, network I/O, disk I/O), and workload of a software application. The main goal of performance testing
is to identify how well your application performs in relation to your performance objectives. The intent of
performance testing is not to break the application under test; the intent is to observe and document
performance under expected usage conditions. Performance testing tools are available to help simulate load,
for example Apache JMeter, WebLOAD, LoadRunner, and so on. Using test automation tools, a Load Test
Engineer can simulate load in terms of users, connections, data, and in other ways.
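The sketch below simulates a handful of concurrent virtual users with Python threads and
reports the average response time; the URL and user count are placeholders, and real tools
such as JMeter or LoadRunner manage virtual users at a far larger scale.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"                 # hypothetical application under test
VIRTUAL_USERS = 5

def one_user_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    timings = list(pool.map(one_user_request, range(VIRTUAL_USERS)))

print(f"Average response time under load: {sum(timings) / len(timings):.3f} s")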
The following problems are often encountered during test automation projects and may result in failing test
automation projects:
• Management doesn't treat test automation as software development: anyone can test, and
automation is easy – just record and playback.
• The QA team selects the wrong set of test cases for test automation. At the same time, management
aims for 100% test automation of all test cases.
• QA Engineers split their time between manual and automation testing instead of concentrating on one
task.
• No one realizes that automated test cases are difficult to maintain and manage.
• The development, maintenance, and management of automated test scripts often need more
time and resources than manual test execution by an inexperienced tester.
How do you know when to stop testing?
There are several ways for a Quality Assurance Engineer to decide when testing should stop:
A good QA Engineer should be able to perform the following tasks successfully in any environment:
Verification: A good QA Engineer can officially state that it is possible to accomplish certain
tasks.
Detection: A good QA Engineer seeks issues that exist, either in the process or the product.
Prevention: A good QA Engineer recognizes potential issues before they become visible.
Reflection: A good QA Engineer looks back at how problems and bugs ended up in the product
and analyzes this data to find out how to make the process better in the future.
Imagine that you were asked to evaluate a web application from a test automation friendliness point of view.
These criteria could be used to call a web application automation friendly in order to test the application
with SilkTest, QTP, Selenium, or any other test automation tool.
If the tester doesn't understand what the interviewer means by continuous integration, the tester probably
hasn't worked in a good software environment. How can a QA Engineer get a steady code build for testing if
there is no bulletproof method of building and deploying code to the testing and production environments? If
there is no continuous integration process in place, QA Engineers would most likely spend their time finding
and reporting "show-stopper" and unit-level bugs. The interviewee should be prepared to say which source
control (also known as version control or (source) code management (SCM)) systems they have used. There
are plenty of them around; among the most popular are SVN, Perforce, and VSS. The interviewee also needs
to know about continuous integration software like CruiseControl, Bamboo, or Hudson.
The test interview is not only a test of the interviewee's specific knowledge, but also an opportunity for
knowledge exchange. As an interviewer I have to spend at least half an hour interviewing a potential Quality
Assurance Engineer, and I want to use these minutes wisely. For example, I like to ask testers about the
various tools they use during preparation and actual testing. Here are some wonderful tools I use in my day-
to-day testing routine:
Firebug - an extension for the Mozilla Firefox browser that allows the debugging, editing, and monitoring of
any website's CSS, HTML, DOM, and JavaScript;
Windows Virtual PC - a virtualization suite for Microsoft Windows operating systems, and an emulation
suite for Mac OS X on PowerPC-based systems. Virtual PC allows you to create separate virtual machines on
your Windows desktop;
OpenSTA - a GUI-based web server benchmarking utility that can perform scripted HTTP and HTTPS heavy
load tests with performance measurements;
WinSCP - an open source SFTP and FTP client for Microsoft Windows;
Tester interview questions are usually focused on positive results; the most obvious test interview question is
"What do you like about testing?", but should that be the case? I believe asking the reverse interview question
would open the real mind of a candidate for a Test Engineer position and would perfectly describe the
software development organization where the Test Engineer currently works.
The most hated term among all testers is "UI automation", together with the misunderstanding from
management around it, thinking it is the silver bullet for all software development problems. As a result, the
company spends money on unproductive test automation software.
The next most hated issue would be the developers. Some developers know how to test, when to test, and
what to test; others just throw the code over the fence with issues so bad that a basic sanity test could have
caught them as blocking issues.
Testers do not like managers, because every time the customer raises a defect in a shipped product the
management questions the testing team about why this defect was missed during the testing cycle and who
missed the issue, instead of doing a root cause analysis of the defect. Some managers continually call the
Test Engineer a Quality Assurance Engineer, although Quality Assurance is a process and not a title, and
request that the software application be "QA'd" when they mean tested.
Of course, test engineers hate themselves too. There are testers who get comfortable with what they already
know and stop pushing themselves to learn more; other testers keep doing the same manual tasks again and
again, when basic automation should be applied and used.
"I've been working with my wonderful company to advance the state of testing. My management has
reached a point where they are satisfied with the state of quality assurance team, while I am still striving to
improve in the art of quality assurance. I feel that I can no longer add value at my present company and it is
time for me to start a new life"
I'm not sure I want to leave my company, but in the same time your job posting interested me and I really
would like to talk about the opportunity your company has available.