
SQA Interview Questions and Answers


SQA INTERVIEW Q/A

kasperanalytics.com
hr@kasperanalytics.com
kasper-analytics
+918130877931
SQA Interview Questions

1. What is Quality Assurance and Quality Control?

Quality Assurance (QA): Focuses on improving the processes to deliver quality products, ensuring that quality is built into the process. Quality Control (QC): Involves the actual testing of the product to ensure it meets the required standards and to identify defects.

2. What is testing?

Testing: The process of evaluating a system or its components to determine whether it satisfies specified requirements, identifying defects so the delivered product is as defect-free as possible.

3. Software Testing Lifecycle?

Software Testing Lifecycle (STLC): Involves phases like requirement analysis, test
planning, test case development, environment setup, test execution, defect
reporting, and test closure.

4. Software Development Lifecycle?

Software Development Lifecycle (SDLC): A process that includes phases like requirement analysis, design, implementation, testing, deployment, and maintenance for developing software applications.

5. Difference between manual and automated testing?

Manual Testing: Performed by humans without using automation tools, suitable for exploratory, usability, and ad-hoc testing. Automated Testing: Uses scripts and tools to perform tests, ideal for regression, performance, and repetitive tasks.


6. Software Development Models?

Waterfall, Agile, V-Model, Spiral: These are methodologies used to structure, plan,
and control the process of developing an information system.

7. Agile Testing Methodologies?

Agile Testing: Involves continuous testing of software in parallel with development, embracing methodologies like Scrum, Kanban, and Extreme Programming (XP).

8. Agile and Scrum?

Agile: A flexible, iterative approach to software development. Scrum: A framework within Agile for managing work, focusing on sprints and iterative progress through daily stand-ups and reviews.

9. Database Testing?

Database Testing: Involves verifying the integrity, accuracy, and consistency of data in databases, including testing of schema, tables, triggers, and procedures.

10. UI Testing?

UI Testing: Ensures that the graphical user interface meets specifications, checking elements like buttons, icons, and fields for functionality and visual correctness.

11. UX Testing?

UX Testing: Evaluates the user experience by assessing ease of use, efficiency, and
satisfaction provided by the software to the end-users.


12. Usability Testing?

Usability Testing: A technique used to evaluate a product by testing it on users, focusing on ease of use, task completion, and user satisfaction.

13. API Testing?

API Testing: Involves testing application programming interfaces (APIs) directly to ensure they meet functionality, reliability, performance, and security expectations.

14. What is Testware?

Testware: All the materials produced during the testing process, including test cases, test plans, test scripts, and test data.

15. What is Test Artifact?

Test Artifact: Any document or work product created during the testing process,
such as test plans, test cases, defect reports, and test scripts.

16. Why testing is important?

Importance of Testing: Ensures software quality, verifies functionality, identifies defects, reduces development costs, and increases user satisfaction and reliability.

17. Different Types of Testing?

Types of Testing: Includes unit testing, integration testing, system testing, acceptance testing, regression testing, performance testing, and usability testing.


18. Smoke Testing?

Smoke Testing: A preliminary test to check the basic functionality of an application, ensuring that the most critical features work before further testing.

19. Regression Testing?

Regression Testing: Re-tests software after changes to ensure existing functionalities are unaffected by new code or updates.

20. Unit Testing?

Unit Testing: Tests individual components or modules of software to ensure they work as expected, typically performed by developers during development.

21. What is Monkey Testing?

Monkey Testing: Random testing performed without any predefined test cases or
plans to identify unexpected behaviors or crashes.

22. Ad-hoc Testing?

Ad-hoc Testing: Informal, unstructured testing performed without planning or documentation, aimed at finding defects through random checking.

23. Integration Testing?

Integration Testing: Tests the interfaces and interaction between integrated units/modules to ensure they work together correctly.


24. Sanity Testing?

Sanity Testing: A quick check to ensure that a particular function or bug fix works
correctly after changes, performed before regression testing.

25. Retesting?

Retesting: The process of testing the same functionality again after a defect has
been fixed to ensure the issue is resolved.

26. Functional Testing?

Functional Testing: Validates the software against the functional requirements, ensuring each function operates according to the specified behavior.

27. ELT Testing?

ELT Testing: Extract, Load, Transform testing involves verifying data extraction from
sources, loading into the destination, and transforming it correctly within data
warehouses.

28. What is Test Driven Development?

Test Driven Development (TDD): A software development approach where test cases are written before writing the code, driving the design of the software.
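As a minimal illustration of the TDD red-green cycle, here is a sketch in plain Java (no JUnit); the `add` method and its check are hypothetical examples, not taken from any specific project. The check is written first, then just enough production code is added to make it pass:

```java
public class TddSketch {
    // Production code, added only after the test below existed.
    static int add(int a, int b) {
        return a + b;
    }

    // The "test" that drove the implementation above: written first,
    // failing initially (red), passing once add() was implemented (green).
    static void testAddReturnsSum() {
        if (add(2, 3) != 5) {
            throw new AssertionError("add(2, 3) should be 5");
        }
    }

    public static void main(String[] args) {
        testAddReturnsSum();
        System.out.println("test passed");
    }
}
```

In a real project the test would live in a framework like JUnit or TestNG; the cycle (write failing test, make it pass, refactor) is the same.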

29. What is Data Driven Testing?

Data Driven Testing: An automation testing framework where test data is driven
from external data sources like spreadsheets, databases, or CSV files.
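A minimal data-driven sketch in plain Java: the same login check runs once per row of test data. The `isValidLogin` rule and the hardcoded rows are illustrative assumptions; in practice the rows would be read from a CSV file, spreadsheet, or database, or supplied via a framework feature such as a TestNG DataProvider:

```java
public class DataDrivenSketch {
    // Hypothetical rule for illustration: non-empty user, password >= 8 chars.
    static boolean isValidLogin(String user, String pass) {
        return !user.isEmpty() && pass.length() >= 8;
    }

    public static void main(String[] args) {
        String[][] rows = {
            // user,    password,     expected result
            {"alice",  "s3cretpass", "true"},
            {"",       "s3cretpass", "false"},
            {"bob",    "short",      "false"},
        };
        for (String[] row : rows) {
            boolean expected = Boolean.parseBoolean(row[2]);
            boolean actual = isValidLogin(row[0], row[1]);
            if (actual != expected) {
                throw new AssertionError("Failed for user: " + row[0]);
            }
        }
        System.out.println("all data rows passed");
    }
}
```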


30. Nonfunctional Testing?

Nonfunctional Testing: Evaluates aspects like performance, usability, reliability, and security, ensuring the software meets nonfunctional requirements.

31. What is Performance Testing?

Performance Testing: Measures the speed, responsiveness, and stability of software under various conditions to ensure it performs well under expected workloads.

32. Techniques of Testing?

Testing Techniques: Includes black box testing, white box testing, exploratory testing, ad-hoc testing, and risk-based testing.

33. BVA Approach?

Boundary Value Analysis (BVA): A testing technique focusing on the boundaries between partitions, identifying defects at the edge values.
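A small sketch of BVA in plain Java, assuming a hypothetical eligibility rule of ages 18 to 60: the test values cluster just below, at, and just above each boundary, where off-by-one defects typically hide:

```java
public class BvaSketch {
    // Hypothetical rule under test: valid ages are 18..60 inclusive.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Boundary values: min-1, min, min+1, max-1, max, max+1.
        int[] values =        {17,    18,   19,   59,   60,   61};
        boolean[] expected = {false, true, true, true, true, false};
        for (int i = 0; i < values.length; i++) {
            if (isEligibleAge(values[i]) != expected[i]) {
                throw new AssertionError("Boundary failed at age " + values[i]);
            }
        }
        System.out.println("all boundary checks passed");
    }
}
```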

34. ECP Approach?

Equivalence Class Partitioning (ECP): Divides input data into equivalence classes
where all members are expected to be treated the same, reducing the number of
test cases.
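Sketching ECP with the same hypothetical age rule of 18 to 60: the input space splits into three equivalence classes, and a single representative value stands in for each whole class, so three checks replace dozens:

```java
public class EcpSketch {
    // Hypothetical rule under test: valid ages are 18..60 inclusive.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Class 1: below range (invalid) -- representative: 10
        if (isEligibleAge(10)) throw new AssertionError("below-range class");
        // Class 2: inside range (valid) -- representative: 35
        if (!isEligibleAge(35)) throw new AssertionError("valid class");
        // Class 3: above range (invalid) -- representative: 70
        if (isEligibleAge(70)) throw new AssertionError("above-range class");
        System.out.println("one representative per class passed");
    }
}
```

ECP and BVA are usually combined: one representative per class, plus extra values at the class boundaries.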

35. What is Validation?

Validation: Ensures the product meets the user's needs and requirements,
verifying the final product through testing and user feedback.


36. What is Verification?

Verification: Ensures the product is being built correctly according to the specifications and design documents, typically through reviews and inspections.

37. Bug Lifecycle?

Bug Lifecycle: Includes stages like new, assigned, open, fixed, retest, verified, and
closed, tracking the bug from identification to resolution.

38. What is Test Plan?

Test Plan: A document outlining the scope, approach, resources, and schedule of
testing activities, defining what will be tested and how.

39. What is Test Scenario?

Test Scenario: A high-level description of what to test, including the possible actions and outcomes, based on user requirements and functionality.

40. What is Test Case?

Test Case: A detailed set of instructions for testing a particular feature or functionality, including preconditions, steps, inputs, expected results, and postconditions.

41. What is Use Case Testing?

Use Case Testing: Testing based on use cases that describe interactions between
users and the system to ensure all possible scenarios are covered.


42. What is Test Script?

Test Script: A set of instructions written in a programming or scripting language to automate the execution of test cases.

43. What is JMeter?

JMeter: An open-source tool used for performance and load testing, simulating
multiple users to test the performance of web applications.

44. What is Jira?

Jira: A popular project management tool used for issue tracking, bug tracking,
and project management in Agile development environments.

45. What are Selenium and its flavors and their working?

Selenium: An open-source tool for automating web browser interactions. Flavors include Selenium WebDriver (cross-browser automation), Selenium IDE (record and playback), and Selenium Grid (parallel execution).

46. What is Mobile Application Testing?

Mobile Application Testing: Ensures the functionality, usability, and performance of mobile applications across different devices and operating systems.

47. SQL Queries?

SQL Queries: Commands used to interact with databases, performing operations like data retrieval (SELECT), insertion (INSERT), updating (UPDATE), and deletion (DELETE).


48. OOP Concepts?

OOP Concepts: Includes principles like encapsulation (data hiding), inheritance (parent-child relationship), polymorphism (many forms), and abstraction (simplified complexity).
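The four principles can be sketched together in a few lines of Java; the `Shape` hierarchy below is a standard illustrative example, not from any particular codebase:

```java
// Abstraction: Shape exposes only the contract, not how area is computed.
abstract class Shape {
    abstract double area();
}

// Inheritance: Circle and Square are-a Shape.
class Circle extends Shape {
    private final double radius;               // encapsulation: field is hidden
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;                 // encapsulation again
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class OopSketch {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            // Polymorphism: one call, a different area() per concrete type.
            System.out.println(s.area());
        }
    }
}
```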

49. Database Concepts, Stored Procedure?

Database Concepts: Involve understanding tables, relationships, normalization, and SQL operations. A stored procedure is a precompiled set of SQL statements stored in the database.

50. What are Cookies and its Types?

Cookies: Small pieces of data stored by web browsers to remember user information. Types include session cookies (temporary) and persistent cookies (stored on disk).

51. How to Test the Cookies?

Testing Cookies: Verify creation, expiration, data storage, and security by checking
their behavior under different conditions and ensuring compliance with privacy
policies.

52. Bug Priority and Severity?

Bug Priority: Indicates the urgency of fixing a bug. Bug Severity: Indicates the
impact of the bug on the system's functionality.

53. Bug with High Priority and Low Severity?

Example: A typo in the company's homepage title is not severe but needs
immediate fixing due to its high visibility.


54. Versions of Android, iOS and Web Browsers?

Versions: Regularly updated, with each platform releasing new versions periodically to add features and fix bugs (e.g., Android 12, iOS 15, Chrome 91).

55. Web Elements Spacing, Android, iOS?

Web Elements Spacing: Ensures elements are appropriately spaced for usability.
For Android and iOS, guidelines specify minimum touch targets and spacing for
accessibility.

56. What is Captcha and Why Use Captcha?

Captcha: A challenge-response test used to determine if the user is human, preventing automated bots from abusing online services.

57. Difference Between DBMS and RDBMS?

DBMS: Database Management System that manages databases. RDBMS: Relational Database Management System that uses a structured format and relationships between tables to manage data.

58. What is Penetration Testing?

Penetration Testing: A security testing technique to identify vulnerabilities by simulating attacks on the system.

59. What is SQL Injection?

SQL Injection: A code injection technique that exploits vulnerabilities in an application's software by inserting malicious SQL statements.
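The mechanism can be shown without a database: when a query is built by string concatenation, attacker-supplied input becomes part of the SQL text itself. The table and input below are hypothetical examples:

```java
public class SqlInjectionSketch {
    // Unsafe: user input is concatenated straight into the query text.
    static String buildUnsafeQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        String malicious = "x' OR '1'='1";
        // The injected OR '1'='1' condition is now part of the SQL itself,
        // turning a lookup for one user into a query matching every row.
        System.out.println(buildUnsafeQuery(malicious));
    }
}
```

The standard defense is a parameterized query (e.g. JDBC `PreparedStatement` with `?` placeholders), which keeps user input as data rather than executable SQL.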


60. What is Brute Force Attack?

Brute Force Attack: A trial-and-error method used to decode encrypted data such
as passwords by systematically trying all possible combinations.

61. What are Normalization and its Types?

Normalization: The process of organizing data in a database to reduce redundancy and improve data integrity. Types include 1NF, 2NF, 3NF, BCNF, etc.

62. What is Traceability Matrix (RTM)?

Traceability Matrix: A document that maps and traces user requirements with test
cases, ensuring all requirements are covered by tests.

63. What is Clustering?

Clustering: A technique in data analysis where data is grouped into clusters that
share similar characteristics, used for pattern recognition and classification.

64. Principles of Testing?

Principles of Testing: Includes early testing, defect clustering, the pesticide paradox, exhaustive testing is impossible, testing shows presence of defects, and the absence-of-errors fallacy.

65. What is Backlog or Product Backlog?

Backlog/Product Backlog: A prioritized list of features, enhancements, and bug fixes that are yet to be worked on in an Agile project.


66. Mobile App Performance Testing Checklist?

Checklist: Includes testing for load time, responsiveness, battery consumption, memory usage, network usage, and compatibility across devices and OS versions.

67. User Acceptance Testing?

User Acceptance Testing (UAT): The final phase of testing where actual users test
the system to verify it can handle required tasks in real-world scenarios.

68. Fault, Failure and Error?

Fault: A defect in the system. Failure: When the system does not perform as
expected due to a fault. Error: A human mistake causing a fault.

69. Exploratory Testing?

Exploratory Testing: Simultaneous learning, test design, and execution, emphasizing tester creativity and intuition.

70. Security Testing?

Security Testing: Ensures the software is free from vulnerabilities, protecting data
and resources from threats and attacks.

71. What is Bug, Defect and Error?

Bug: A flaw in the software causing incorrect results. Defect: A deviation from the
requirements. Error: A mistake made by a human.


72. What are Mutants and Mutation Testing?

Mutants: Small changes made to software code to create variations. Mutation Testing: A method to evaluate test cases by introducing faults (mutants) and checking if tests can detect them.

73. Basic Knowledge of HTML, CSS, JavaScript, Asp.net, OOP and Java?

HTML/CSS/JavaScript: Core web technologies for creating and styling web pages.
ASP.NET: A framework for building web applications and services with .NET. OOP
and Java: Object-oriented programming principles used in Java for modular,
reusable code.

74. Entry and Exit Criteria?

Entry Criteria: Conditions that must be met before testing begins (e.g., test environment setup). Exit Criteria: Conditions that must be met before testing is concluded (e.g., all critical bugs fixed).

75. Static & Dynamic Testing?

Static Testing: Reviews and inspections of code or documentation without executing the program. Dynamic Testing: Involves executing code and validating the output with expected results.


SQA Real-time Interview Questions

1. Write a java program for the largest number from three numbers

public class LargestNumber {
    public static void main(String[] args) {
        int a = 10, b = 20, c = 30;
        int largest = (a > b) ? (a > c ? a : c) : (b > c ? b : c);
        System.out.println("Largest number is: " + largest);
    }
}

2. What is SDLC and STLC? And Explain its phases

SDLC (Software Development Life Cycle): A process for planning, creating, testing,
and deploying an information system, involving phases like requirement analysis,
design, implementation, testing, deployment, and maintenance.

STLC (Software Testing Life Cycle): A sequence of specific activities conducted during the testing process to ensure software quality, involving phases like requirement analysis, test planning, test case development, environment setup, test execution, and test cycle closure.

3. Define your roles and responsibility.

Roles and Responsibilities: Responsibilities typically include analyzing requirements, creating and executing test cases, identifying defects, collaborating with development teams, ensuring product quality, and maintaining documentation.

4. What is regression testing?

Regression Testing: A type of software testing that ensures that recent code
changes have not adversely affected existing functionalities of the software.

5. What are different methodologies of SDLC? Explain each

Waterfall: Sequential design process where progress is seen as flowing steadily downwards through phases. Agile: Iterative approach focusing on collaboration, customer feedback, and small, rapid releases. V-Model: Extension of the waterfall model where each development phase is associated with a testing phase. Spiral: Combines iterative development (prototyping) and systematic aspects of the waterfall model.

6. Define Agile.

Agile: A methodology that promotes continuous iteration of development and testing throughout the software development lifecycle, encouraging collaborative and flexible responses to change.

7. Define Scrum and Sprint.

Scrum: A framework within Agile for managing and completing complex projects,
typically through incremental work called sprints. Sprint: A set time period within
which specific work has to be completed and made ready for review.

8. What is the estimation in Sprint?

Sprint Estimation: The process of predicting the effort required to complete the
tasks in a sprint, often using story points, hours, or other units of measure.


9. What is sprint backlog?

Sprint Backlog: A list of tasks and user stories selected from the product backlog
to be completed during a sprint.

10. What are the different reports in Testing?

Testing Reports: Common types include Test Summary Report, Defect Report, Test
Execution Report, and Traceability Matrix.

11. What are the key components of the Test Case report?

Test Case Report Components: Test case ID, description, steps, expected result,
actual result, status (pass/fail), and remarks.

12. What are the components of a defect report?

Defect Report Components: Defect ID, summary, description, steps to reproduce, severity, priority, environment, assigned to, status, and attachments/screenshots.

13. What is Jira?

Jira: A popular project management tool used for bug tracking, issue tracking,
and project management in Agile development.

14. How do you log a defect in Jira?

Log a Defect in Jira: Navigate to the appropriate project, click on "Create," fill in the
defect details (summary, description, severity, priority, steps to reproduce), and
save.


15. How do you link bugs with the user story?

Link Bugs with User Story: In Jira, use the "Link" option to associate the bug with the
relevant user story, selecting the appropriate relationship (e.g., "relates to,"
"blocks," etc.).

16. What is sprint?

Sprint: A time-boxed iteration of continuous development in Scrum, typically lasting 1-4 weeks, aimed at delivering a usable product increment.

17. Define black box and white box testing.

Black Box Testing: Testing based on external specifications and expected behavior, without knowledge of the internal code structure. White Box Testing: Testing based on knowledge of the internal code structure, focusing on code logic, paths, and conditions.

18. Define functional testing.

Functional Testing: A type of testing that validates the software system against
the functional requirements/specifications, focusing on what the system does.

19. Define the OOP concept in Java.

OOP Concept in Java: Object-Oriented Programming principles include encapsulation, inheritance, polymorphism, and abstraction, promoting modular and reusable code.

20. Give me examples of OOP which you used in your framework.

Examples of OOP in Framework: Using inheritance for base test classes, encapsulation in page object models, polymorphism for test execution, and abstraction to define interfaces for various modules.


21. What is TestNG?

TestNG: A testing framework inspired by JUnit and NUnit, designed to simplify a broad range of testing needs, from unit to integration testing.

22. What is usability testing?

Usability Testing: A technique used to evaluate a product by testing it on users to improve its usability, ensuring it is easy to use and understand.

23. What are the steps for reporting the defect in Jira?

Steps for Reporting a Defect in Jira: Identify the issue, click "Create" in Jira, enter
the defect details (summary, description, steps to reproduce, severity, etc.), and
submit the defect.

24. Define Structure of Selenium.

Structure of Selenium: Comprises Selenium IDE (recording tool), Selenium WebDriver (browser automation), Selenium Grid (parallel execution), and various language bindings (Java, C#, Python, etc.).

25. How will you handle the dropdown in Selenium?

Handle Dropdown in Selenium: Use the Select class to interact with dropdown
elements:

Select dropdown = new Select(driver.findElement(By.id("dropdownId")));
dropdown.selectByVisibleText("Option");


26. Different types of wait in selenium? Explain each of them.

Implicit Wait: Sets a default wait time for the entire session. Explicit Wait: Waits for
a specific condition to be met before proceeding. Fluent Wait: Waits for a specific
condition, polling at regular intervals and ignoring specific exceptions.

27. Difference between hard and soft assertion?

Hard Assertion: Immediately stops test execution upon failure. Soft Assertion:
Collects all errors and continues execution, reporting all failures at the end.
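The soft-assertion behavior can be sketched in plain Java; TestNG's `SoftAssert` works on the same principle of recording failures and only throwing when `assertAll()` is called. The class below is an illustrative stand-in, not TestNG itself:

```java
import java.util.ArrayList;
import java.util.List;

public class SoftAssertSketch {
    private final List<String> failures = new ArrayList<>();

    // Record the failure and keep going, instead of throwing immediately
    // the way a hard assertion would.
    void assertTrue(boolean condition, String message) {
        if (!condition) failures.add(message);
    }

    // Throw once, at the end, reporting every collected failure.
    void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError("Soft assertion failures: " + failures);
        }
    }

    public static void main(String[] args) {
        SoftAssertSketch soft = new SoftAssertSketch();
        soft.assertTrue(2 + 2 == 4, "arithmetic check");
        soft.assertTrue("abc".length() == 3, "length check");
        soft.assertAll();    // throws only if at least one check failed
        System.out.println("all soft assertions passed");
    }
}
```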

28. Why are we using "WebDriver driver = new ChromeDriver()"? Why can't we
write "RemoteDriver driver = new ChromeDriver()"?

WebDriver vs. RemoteWebDriver: WebDriver is the interface implemented by ChromeDriver, whereas RemoteWebDriver is a class used for remote execution, not typically instantiated directly for local testing.

29. Explain the different Annotation in TestNG.

TestNG Annotations:

• @Test: Marks a method as a test.
• @BeforeMethod and @AfterMethod: Run before and after each test method.
• @BeforeClass and @AfterClass: Run before and after all test methods in the current class.
• @BeforeSuite and @AfterSuite: Run before and after all tests in the suite.

30. Define Priority and Severity of the Bug.

Priority: Indicates the order in which a bug should be fixed. Severity: Indicates the
impact of the bug on the system's functionality.


31. How to maximize the screen in Selenium?

Maximize Screen in Selenium:

driver.manage().window().maximize();

32. What are the different closure reports?

Closure Reports: Include Test Closure Report, Project Closure Report, and Incident
Closure Report, summarizing the activities and findings after testing phases or
project completion.

33. What is the difference between a Test Plan and a Test Strategy document?

Test Plan: A document detailing the scope, approach, resources, and schedule of
testing activities. Test Strategy: A high-level document outlining the testing
approach, objectives, and general principles to be followed during testing.

34. Define Bug lifecycle of JIRA.

Bug Lifecycle in JIRA: New → Assigned → Open → Fixed → Retest → Reopen (if not
fixed) → Verified → Closed.
