The document discusses testing axioms, which are context-neutral rules for testing systems. It proposes that testing axioms can be used to advance testing practices by providing a framework for critical thinking about testing. Specifically, separating axioms, context, and values allows testers to clarify positions and approaches for different contexts. It also suggests testing axioms can help identify important skills for testers, such as understanding test models and their limitations. Finally, it explores ideas from "quantum testing" such as assigning significance to individual tests, rather than attempting to quantify their value.
Using Functional Test Automation to Prevent Defects from Escaping the Develo...TEST Huddle
This document discusses using functional test automation to prevent defects from escaping the development phase. It recommends automating acceptance tests during development to catch bugs early from the user perspective. The process involves preparing for automation by exploring and selecting test candidates, automating the tests as close to development as possible, and repeating the automation across areas, platforms and versions to prevent regression bugs. Continuous integration and handling test errors are also suggested to provide feedback and react to issues identified through automation. The overall goal is to shift testing left in the development cycle through early and frequent automation from a user perspective.
James Whittaker - Pursuing Quality-You Won't Get There - EuroSTAR 2011TEST Huddle
EuroSTAR Software Testing Conference 2011 presentation on Pursuing Quality-You Won't Get There by James Whittaker. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Isabel Evans - Quality In Use - EuroSTAR 2011TEST Huddle
EuroSTAR Software Testing Conference 2011 presentation on Quality In Use by Isabel Evans. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Julian Harty - Alternatives To Testing - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Alternatives To Testing by Julian Harty. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Otto Vinter - Analysing Your Defect Data for Improvement PotentialTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Analysing Your Defect Data for Improvement Potential by Otto Vinter. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Eric Jimmink - The Specialized Testers of the FutureTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on The Specialized Testers of the Future by Eric Jimmink. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Mats Grindal - Risk-Based Testing - Details of Our Success TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Risk-Based Testing - Details of Our Success by Mats Grindal. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Peter Zimmerer - Establishing Testing Knowledge and Experience Sharing at Sie...TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Establishing Testing Knowledge and Experience Sharing at Siemens by Peter Zimmerer. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Johan Jonasson - Introducing Exploratory Testing to Save the ProjectTEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Introducing Exploratory Testing to Save the Project by Johan Jonasson. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Geoff Thompson - Why Do We Bother With Test StrategiesTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Why Do We Bother With Test Strategies by Geoff Thompson. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Kristian Fischer - Put Test in the Driver's SeatTEST Huddle
The document discusses how test managers are often seen as "black sheep" who raise issues without solutions and cause delays. It argues that test managers need to shift from a reactive to proactive role by getting involved early in projects, changing attitudes, and applying a test management dashboard to provide transparency and value. The dashboard would use KPIs and metrics to track testing progress, quality, risks, and deliver early warnings so test managers are seen as project victors rather than victims.
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Exploratory Testing Champions by Henrik Andersson. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Clive Bates - A Pragmatic Approach to Improving Your Testing Process - EuroST...TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on A Pragmatic Approach to Improving Your Testing Process by Clive Bates. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Rik Teuben - Many Can Quarrel, Fewer Can Argue TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Many Can Quarrel, Fewer Can Argue by Rik Teuben. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Dirk Van Dael - Test Accounting - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Test Accounting by Dirk Van Dael. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Fredrik Rydberg - Can Exploratory Testing Save Lives - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Can Exploratory Testing Save Lives by Fredrik Rydberg. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Darius Silingas - From Model Driven Testing to Test Driven ModellingTEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on From Model Driven Testing to Test Driven Modelling by Darius Silingas. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Christian Bk Hansen - Agile on Huge Banking Mainframe Legacy Systems - EuroST...TEST Huddle
EuroSTAR Software Testing Conference 2011 presentation on Agile on Huge Banking Mainframe Legacy Systems by Christian Bk Hansen. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Thomas Axen - Lean Kaizen Applied To Software Testing - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Lean Kaizen Applied To Software Testing by Thomas Axen. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
The document discusses how test axioms can be used to advance testing practices. It introduces 16 proposed test axioms grouped into stakeholder, design, and delivery axioms. The axioms represent critical thinking processes for testing any system. The document discusses how the axioms can help testers design test strategies, assess improvement opportunities, and define needed skills. It also proposes a "first equation of testing" that separates axioms, context, values, and thinking to allow for different valid approaches. Additionally, the concept of "quantum testing" is introduced to discuss assigning significance to tests rather than defining their value, which can only be determined by stakeholders.
- The speaker proposes 16 "test axioms" that are intended to provide a framework for testing approaches and represent principles that are context-insensitive and self-evidently true.
- The axioms are grouped into three categories: stakeholders, design, and delivery. The speaker argues the axioms can help testers think critically about testing and identify flaws in arguments.
- It is argued that process improvement models are not effective for improving testing because there is no consensus on best practices and processes must be tailored to context. True improvement requires understanding why current approaches are used given the context.
A test strategy is the set of ideas that guides your test design. It's what explains why you test this instead of that, and why you test this way instead of that way. Strategic thinking matters because testers must make quick decisions about what needs testing right now and what can be left alone. You must be able to work through major threads without being overwhelmed by tiny details. James Bach describes how test strategy is organized around risk but is not defined before testing begins. Rather, it evolves alongside testing as we learn more about the product. We start with a vague idea of our strategy, organize it quickly, and document as needed in a concise way. In the end, the strategy can be as formal and detailed as you want it to be. In the beginning, though, we start small. If you want to focus on testing and not paperwork, this approach is for you.
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
What is a Testing Framework?
“Tools of the trade”; what are they?
Test Phases, Test Stages, Testing Activities, Verification Methods, a Testing Workflow, Status Reporting, Test Types
Discussions:
What do you bring to the table?
What does the Client bring to the table?
Why are you there?
How do you adapt & why should you?
The document discusses a new model for testing that focuses on exploration of knowledge sources to build test models that inform testing. It outlines three patterns of software development (structured, agile, continuous) and argues testing involves exploring knowledge sources and building test models, with all testing being exploratory in nature. A new test process is proposed involving exploration support tools that capture testing plans and activity in real-time. The roles of developers and testers may become blurred in the future under this new model.
Modeling Framework to Support Evidence-Based DecisionsAlbert Simard
Describes a framework for modelling in a regulatory environment founded on sound scientific and knowledge management concepts. It includes 1) demand (issue-driven) and supply (model-driven) approaches to modelling, 2) balancing modeler, manager, and user perspectives, 3) documentation to demonstrate due diligence, and 4) a 700-term glossary.
The document discusses principles of software testing. It defines testing as identifying defects by developing test cases and test data. A test case specifies starting and ending states and events, while test data provides inputs. Different types of testing are described, including unit testing of individual components, integration testing of groups of components, and system testing of full systems. Factors like usability, performance, and user acceptance are also discussed. Who performs different types of testing is outlined.
The document discusses understanding stakeholder needs when developing requirements for a software system. It describes sources of requirements like customers and users, characteristics of different types of customers, potential problems that can be encountered, and techniques for eliciting requirements like workshops, brainstorming, use cases, interviews, and questionnaires.
Usability Primer - for Alberta Municipal Webmasters Working GroupNormanMendoza
Presentation provided on December 1, 2006. References:
“A Practical Guide to Usability Testing” by Joseph S. Dumas and Janice C. Redish
The Elements of User Experience, diagram by Jesse James Garrett
The document provides an overview of agile testing principles and practices. It discusses that agile testing involves the entire cross-functional team working together to test software iteratively. Key aspects of agile testing covered include continuous feedback, delivering value to customers, enabling face-to-face communication, and keeping testing simple. The document also outlines typical testing activities in an agile project such as test planning, driving development, facilitating communication, and completing testing tasks within each sprint.
This document discusses principles of software testing. It covers different types of testing including unit testing, integration testing, usability testing, and user acceptance testing. It describes who typically performs each type of testing, such as programmers performing unit testing and users involved in usability and acceptance testing. The document also discusses test cases, test data, and test types that can detect different types of defects.
This document provides an overview of principles of software testing. It discusses different types of testing including unit testing, integration testing, usability testing, and user acceptance testing. It describes who typically performs different types of testing such as programmers performing unit testing and users involved in usability and acceptance testing. Quality assurance personnel are typically responsible for test planning and identifying needed changes. The document also outlines topics that will be covered in subsequent parts of the course, including test cases, test data, test types, and defects detected by different test approaches.
The document outlines the DECIDE framework for guiding evaluations. The framework consists of 6 steps: 1) Determine goals, 2) Explore questions, 3) Choose approach and methods, 4) Identify practical issues, 5) Decide how to deal with ethical issues, and 6) Evaluate, analyze, interpret and present data. Key aspects of each step are discussed such as determining evaluation goals, choosing appropriate evaluation designs that consider threats to validity, and ensuring reliability, validity and scope when analyzing and presenting results.
The document discusses various models for assessing learning outcomes and evaluating educational programs, including the CIPP, Kirkpatrick, Moore, and Miller models. It summarizes the key aspects of each model and notes their strengths and limitations. For example, it indicates that outcome-based models may provide limited usefulness on their own and that new models are needed that incorporate both processes and outcomes. The document also discusses the importance of rigorous assessment instruments to the success of evaluations and notes attributes such as reliability, validity, and responsiveness that high-quality instruments should possess.
This document provides an overview of software testing principles and concepts. It discusses different types of testing including unit testing, integration testing, usability testing, and user acceptance testing. It also describes test cases, test data, and who is typically involved in software testing. The key goals of testing are to identify defects by developing test cases to evaluate different components, interfaces, and the overall system or software.
Trends in Software Testing: There has been a slow realization among the top executives that simply outsourcing testing to the lowest bidder is not resulting in a sufficient level of quality in their software products. In this session, Paul Holland will discuss how American companies are starting to reconsider “factory school” testing and are no longer satisfied with the current situation of simply outsourcing their “checking”. As the development side of software continues its dramatic shift toward Agile development – what role can testers have and how can testers still add value?
11 - Evaluating Framework in Interaction Design_new.pptxZahirahZairul2
The document discusses evaluation frameworks in interaction design. It introduces key concepts like prototypes, evaluation paradigms, and techniques. Low and high fidelity prototyping are described. Evaluation paradigms include quick and dirty evaluations, usability testing, field studies, and predictive evaluation. Common techniques involve observing, asking, and testing users. The DECIDE framework is presented as a process for planning evaluations by determining goals, exploring questions, choosing techniques, and addressing practical and ethical concerns. Pilot studies are recommended to test evaluation plans.
The document provides guidance for leading and managing QA testing teams. It discusses carrying out the mandate of a QA testing manager by ensuring products meet user needs. It also covers establishing cooperative relationships with colleagues through open communication and mutual trust. Additionally, it discusses developing efficient processes by implementing quality initiatives and standards. Finally, it discusses planning and conducting product testing through test case selection, execution, and reporting.
I believe that our existing models of testing are not fit for purpose – they are inconsistent, controversial, partial, proprietary and stuck in the past. They are not going to support us in the rapidly emerging technologies and approaches. The certification schemes that should represent the interests and integrity of our profession don’t, and we are left with schemes that are popular, but have low value, lower esteem and attract harsh criticism. My goal in proposing the New Model is to stimulate new thinking in this area.
eurostarconferences.com
testhuddle.com
Similar to Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Why We Need Diversity in Testing- AccentureTEST Huddle
In this webinar Rasa (Testing capability lead for Denmark) and Matthias (EALA Testing capability lead) will share some of their own experiences of why diversity matters, give insights into how Accenture as a global firm is promoting diversity, and explain how we are in the process of changing our attitudes and processes to make all of this sustainable.
Keys to continuous testing for faster delivery euro star webinar TEST Huddle
Your business needs to deliver faster. To accommodate, Development needs to introduce fewer changes but in a much more frequent cadence. This creates a challenge for test teams to keep up with the rapid pace of change without compromising on quality. Automation is paramount to the success or failure of Continuous Delivery, and Continuous Testing enables early and frequent quality feedback throughout the CI/CD pipeline.
In this webinar, Eran & Ayal will explore how to implement Continuous Testing to ensure high quality releases in a Continuous Delivery environment; including what to test and when to automate new functionality in order to optimize your efforts.
Why You Shouldn't Automate But You Will Anyway TEST Huddle
The document discusses automation in software testing. It begins by outlining common claims made about the benefits of automation, such as saving time and improving quality, but argues that these claims often don't hold true. Automation does not inherently save time, guarantee quality, or reduce resources needed. It also does not always save money when development, maintenance, and infrastructure costs are considered. The document provides a formula for determining when automation is worthwhile based on how many times a test case would need to be rerun manually. It concludes by acknowledging that, despite these drawbacks, organizations will still automate testing because it is exciting, managers demand it, and it benefits careers.
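The document mentions a formula for when automation is worthwhile without reproducing it, so the sketch below is an assumption rather than the talk's exact formula: automation pays off once the cumulative manual cost of rerunning a test exceeds the one-time automation cost plus per-run maintenance.

```python
import math

def break_even_runs(automation_cost, maintenance_per_run, manual_cost_per_run):
    """Smallest number of reruns at which automating a test pays off.

    Assumed model (not necessarily the talk's formula): automation is
    worthwhile once n * manual_cost_per_run >= automation_cost
    + n * maintenance_per_run. Returns None when per-run maintenance
    eats the whole saving, i.e. automation never pays back.
    """
    saving_per_run = manual_cost_per_run - maintenance_per_run
    if saving_per_run <= 0:
        return None
    return math.ceil(automation_cost / saving_per_run)

# Example: a test costing 10 hours to automate, with 0.5 h maintenance
# per run and 1.5 h to run manually, breaks even after 10 reruns.
```

For a test that would only ever be run once or twice, the break-even point is rarely reached, which matches the document's point that automation does not inherently save time.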
In this webinar Carsten will explore the role of the tester in a Scrum team. He will examine where the tester plays an important role in Scrum and how you can contribute to a team's performance.
Leveraging Visual Testing with Your Functional TestsTEST Huddle
Designing and implementing (or selecting) the right automation strategy for functional testing, combined with visual testing, can give your project greater test coverage while improving test scalability.
Big Data: The Magic to Attain New HeightsTEST Huddle
This document discusses how big data and data science can be used to attain new heights, likening it to magic. It provides an overview of Ken Johnston's background and experiences in data science. It then discusses six keys to a "big" magic show with big data: trying multiple times, addressing issues with over-counting, experimentation techniques like A/B testing, infrastructure for big data, tools and skills, and security, privacy and fraud protection. The document emphasizes the importance of an assistant to help the data scientist or data engineer with various tasks.
This talk suggests how we might make sense of the tools landscape of the near future, where the pressure to modernise processes and automate is greatest, and what a new test process supported by tools might look like.
Takeaways:
- We need to take machine learning in testing seriously, but it won’t be taking our jobs just yet
- We don’t need more test automation tools; today we need tools that capture tester knowledge
- Tools that learn and think can't work for testers until we solve the knowledge capture challenge.
View On-Demand Webinar: https://youtu.be/EzyUdJFuzlE
The document discusses Test Driven Development (TDD) and Test Driven Design. It uses the analogy of building a lightsaber and later a Death Star to illustrate the TDD process and benefits. Some benefits mentioned are better test coverage, less debugging, and better design. The document provides tips for practicing TDD including planning ahead, defining boundaries, taking small steps to pass each test, and maintaining discipline. It emphasizes trying TDD in a team and considering Behavior Driven Development (BDD) as well.
Scaling Agile with LeSS (Large Scale Scrum)TEST Huddle
In this webinar, Elad will cover the principles that the #LeSS framework has to offer in order to enable big organisations to become agile.
View webinar recording - https://huddle.eurostarsoftwaretesting.com/resource/agile-testing/scaling-agile-less-large-scale-scrum/
Creating Agile Test Strategies for Larger EnterprisesTEST Huddle
Having difficulty creating an agile test strategy for your company? Let Testing Excellence Award winner, Derk-Jan de Grood, show you how it’s done
View webinar recording here - http://huddle.eurostarsoftwaretesting.com/resource/agile-testing/creating-agile-test-strategies-larger-enterprises/
3 key takeaways
- Do you know the meaning of your organisation, system, product?
- Can you deliver the important risks right away?
- How can you communicate about the (process and product) risks you're dealing with?
View Webinar recording: https://huddle.eurostarsoftwaretesting.com/resource/test-management/is-there-a-risk/
Are Your Tests Well-Travelled? Thoughts About Test CoverageTEST Huddle
This document summarizes a presentation on test coverage given by Dorothy Graham. It uses an analogy of travel to different locations to explain what test coverage means and some caveats. Coverage refers to the relationship between tests and the parts of a system being tested, but achieving 100% coverage does not mean everything is tested. There are four caveats discussed: coverage only measures one aspect of testing, a single test can achieve coverage, coverage does not indicate quality, and it only applies to the existing system not missing pieces. The key recommendation is to ask "coverage of what?" when the term is used rather than assuming more coverage is always better.
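The caveat that a single test can achieve coverage without indicating quality is easy to demonstrate. The snippet below is an illustrative sketch (the function and test names are invented, not from the presentation): one test executes every line of the function, yet an obvious defect survives.

```python
def safe_divide(a, b):
    # Deliberate defect: no guard for b == 0.
    return a / b

def test_safe_divide():
    # This single check executes 100% of the lines in safe_divide...
    assert safe_divide(6, 3) == 2

test_safe_divide()
# ...yet safe_divide(1, 0) still raises ZeroDivisionError: full line
# coverage, zero protection against the defect.
```

This is why "coverage of what?" matters: line coverage here reports success while the interesting input class (zero divisors) was never exercised.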
Growing a Company Test Community: Roles and Paths for TestersTEST Huddle
Over the past three years, our company’s test team has grown from three lonesome testers to a community of nine – with more planned. Since we don’t see testers as “click monkeys”, but as valuable and integrated project members who bring a specific skill set to the table, it’s important for us to choose testers well and to train them in various areas so that they can contribute, grow and see their own career path within testing.
To structure our internal tester training program, we have been developing role descriptions, education paths and career options for our testers, which I'd like to share with you in this webinar.
View webinar - https://huddle.eurostarsoftwaretesting.com/resource/webinar/growing-company-test-community-roles-paths-testers/
It’s the same argument again and again. One side says “team members should all be able to do everything, and the programmers should do their testing and all testers should be writing code”. The other side says “No, that can’t possibly work – programmers don’t know how to test, they don’t have the right mindset”. And on and on it goes.
http://huddle.eurostarsoftwaretesting.com/resource/webinar/need-testers-agile-teams/
In this webinar, Dave Haeffner (Elemental Selenium, USA) discusses how to:
- Build an integrated feedback loop to automate test runs and find issues fast
- Setup your own infrastructure or connect to a cloud provider
- Dramatically improve test times with parallelization
https://huddle.eurostarsoftwaretesting.com/resource/webinar/use-selenium-successfully/
Testers & Teams on the Agile Fluency™ Journey TEST Huddle
The document discusses the Agile Fluency model, which aims to help teams and testers improve their agile skills and practices over time. It describes a pathway with increasing levels of fluency that provide more benefits, including delivering value, optimizing value, and innovating. Reaching higher levels requires investments in training, coaching, and changing team structures and roles. The model can help organizations determine what level of fluency they need and what investments are required for testing teams to operate at that level.
Practical Test Strategy Using HeuristicsTEST Huddle
Key Takeaways
- See what makes a good test strategy
- Learn how to make a thorough test strategy
- Identify what the 'Heuristic Test Strategy Model' is
- Develop a solid test strategy that fits fast
- Discover how diversification can help you to create a test strategy
Key Takeaways:
- A diagramming method that helps discuss roles
- A one page analysis heuristic for roles
- Why roles matter on projects
https://huddle.eurostarsoftwaretesting.com/resource/people-skills/thinking-through-your-role/
Key Takeaways:
- What will this release contain
- What impact will it have on your test runs
- How can you preserve your existing investment in tests using the Selenium WebDriver APIs, and your even older RC tests
- Looking forward, when will the W3C spec be complete
- What can we expect from Selenium 4
https://huddle.eurostarsoftwaretesting.com/
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called OpenTelemetry, but before diving into the specifics, we'll start by demystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, and percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we'll explore the OpenTelemetry community: its Special Interest Groups (SIGs), its repositories, and how to become not only an end user but possibly a contributor. We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry Protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of OpenTelemetry, and know how to take their first steps towards an open-source contribution!
Key Takeaways: Open-source, vendor-neutral instrumentation is an exciting new reality as the industry standardizes on OpenTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve, and in order to achieve ubiquity, the project would benefit from growing its contributor community.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppsecJames Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
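A severity-based gate of the kind described can be sketched as a pipeline step. Everything below is hypothetical: the job layout, the `scan-tool` command, and the findings JSON schema are illustrative placeholders, not any specific product's CLI or output format.

```yaml
# Hypothetical CI job: run a scanner, then fail the build if any
# HIGH or CRITICAL finding is present. Tool name, flags and the
# JSON schema are placeholders for whatever scanner you use.
security-gate:
  stage: test
  script:
    - scan-tool --format json --output findings.json .
    - |
      python3 -c "
      import json, sys
      findings = json.load(open('findings.json'))
      blocking = [f for f in findings
                  if f.get('severity') in ('HIGH', 'CRITICAL')]
      print(len(blocking), 'blocking finding(s)')
      sys.exit(1 if blocking else 0)
      "
```

The design point is that the gate's policy (which severities block a build) lives in one scripted place rather than in a manual review step, so it applies uniformly to every commit moving through the pipeline.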
Triaging Issues with Automation:
2. Axioms – a Brief Introduction
Advancing Testing Using Axioms
First Equation of Testing
Test Strategy and Approach
Testing Improvement
A Skills Framework for Testers
Quantum Testing
Close
3. Formulated as a context-neutral set of rules for testing systems
They represent the critical thinking processes required to test any system
There are clear opportunities to advance the practice of testing using them
Tester's Pocketbook: testers-pocketbook.com
Test Axioms Website test-axioms.com
4. • Test Axioms are not beginners' guides
• They can help you to think critically about testing
• They expose flaws in other people's thinking and their arguments about testing
• They generate some useful by-products
• They help you to separate context from values
• Interesting research areas!
• First Equation of Testing, Testing Uncertainty Principle, Quantum Theory, Relativity, Exclusion Principle...
• You can tell I like physics
7. Summary:
Identify and engage the people or organisations that will use and benefit from the test evidence we are to provide
Consequence if ignored or violated:
There will be no mandate or any authority for testing. Reports of passes, fails or enquiries have no audience.
Questions:
Who are they?
Whose interests do they represent?
What evidence do they want?
What do they need it for?
When do they want it?
In what format?
How often?
8. Design axioms (diagram): Test Basis, Test Model, Oracle, Coverage, Prioritisation, Fallibility
9. Summary:
Choose test models to derive tests that are meaningful to stakeholders. Recognise the models' limitations and the assumptions that the models make
Consequence if ignored or violated:
Test design will be meaningless and not credible to stakeholders.
Questions
Are design models available to use as test models? Are they mandatory?
What test models could be used to derive tests from the Test Basis?
Which test models will be used?
Are test models to be documented or are they purely mental models?
What are the benefits of using these models?
What simplifying assumptions do these models make?
How will these models contribute to the delivery of evidence useful to the acceptance decision makers?
How will these models combine to provide sufficient evidence without excessive duplication?
How will the number of tests derived from models be bounded?
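The questions above can be made concrete with a small sketch (illustrative only, not from the deck): boundary-value analysis treated as a test model that derives a bounded, explainable set of tests from the Test Basis, with its simplifying assumption stated.

```python
# A minimal sketch of a test model in code (illustrative, not from the
# deck): boundary-value analysis derives test inputs from a Test Basis
# that specifies a valid range. The model's simplifying assumption,
# that faults cluster at boundaries, is explicit, and the number of
# derived tests is bounded (at most six per range).

def boundary_value_tests(lo: int, hi: int) -> list[int]:
    """Derive test inputs from a boundary-value model of [lo, hi]."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# e.g. a field documented as accepting 1..100:
print(boundary_value_tests(1, 100))  # [0, 1, 2, 99, 100, 101]
```

The benefit the slide asks about is visible here: the model bounds the test count and makes its assumption open to challenge by stakeholders.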
13. Separation of Axioms, context, values and thinking
Tools, methodologies, certification, maturity models promote approaches without reference to your context or values
No thinking is required!
Without a unifying test theory you have no objective way of assessing these products.
14. Given context, practitioners can promote different approaches based on their values
Values are preferences or beliefs
Pre-planned v exploratory
Predefined v custom process
Requirements-driven v goal-based
Standard documentation v face-to-face comms.
Some contexts preclude certain practices
“No best practices”
15. Separating axioms, context and values clarifies positions, for example:
'Structured' (certified?) test advocates have little (useful) to say about Agile contexts
Exploratory test advocates have little (useful) to say about contract/requirements-based acceptance
The disputes between these positions are more about values than practices in context
Is a consultant recommendation best for the stakeholders or the consultant?
17. Test Strategy (mind-map of influences): Risks, Goals, Constraints, Human resource, Environment, Timescales, Process (lack of?), Contract, Culture, Opportunities, User involvement, Automation, De-duplication, Early Testing, Skills, Communication, Axioms, Artefacts
18. 1. Test Plan Identifier
2. Introduction
3. Test Items
4. Features to be Tested
5. Features not to be Tested
6. Approach
7. Item Pass/Fail Criteria
8. Suspension Criteria and Resumption Requirements
9. Test Deliverables
10. Testing Tasks
11. Environmental Needs
12. Responsibilities
13. Staffing and Training Needs
14. Schedule
15. Risks and Contingencies
16. Approvals
Based on IEEE Standard 829-1998
19. Used as a strategy checklist
Scarily vague (don't go there)
Used as a documentation template/standard
Flexible, not prescriptive, but encourages a copy-and-edit mentality (documents that no one reads)
But many, many testers seek guidance on
What to consider in a test strategy
Communicating their strategy to stakeholders and project participants
20. Items 1, 2 – Administration
Items 3+4+5 – Scope Management, Prioritisation
Item 6 – All the Axioms are relevant
Items 7+8 – Good-Enough, Value
Item 9 – Stakeholder, Value, Confidence
Item 10 – All the Axioms are relevant
Item 11 – Environment
Item 12 – Stakeholder
Item 13 – All the Axioms are relevant
Item 14 – All the Axioms are relevant
Item 15 – Fallibility, Event
Item 16 – Stakeholder Axioms
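The mapping above is itself a usable review checklist. Here it is captured as a small data structure (a sketch: the item names and axiom groupings follow the slide, but the code structure and names are illustrative), so a strategy review can look up which axiom groups apply to each IEEE 829 plan item.

```python
# Slide 20's mapping of IEEE 829-1998 plan items to axiom groups,
# captured as a checklist dictionary (a sketch; groupings follow the
# slide, the structure is illustrative).

ALL_AXIOMS = "All the Axioms are relevant"

AXIOMS_FOR_829_ITEM = {
    "1. Test Plan Identifier": ["Administration"],
    "2. Introduction": ["Administration"],
    "3. Test Items": ["Scope Management", "Prioritisation"],
    "4. Features to be Tested": ["Scope Management", "Prioritisation"],
    "5. Features not to be Tested": ["Scope Management", "Prioritisation"],
    "6. Approach": [ALL_AXIOMS],
    "7. Item Pass/Fail Criteria": ["Good-Enough", "Value"],
    "8. Suspension and Resumption": ["Good-Enough", "Value"],
    "9. Test Deliverables": ["Stakeholder", "Value", "Confidence"],
    "10. Testing Tasks": [ALL_AXIOMS],
    "11. Environmental Needs": ["Environment"],
    "12. Responsibilities": ["Stakeholder"],
    "13. Staffing and Training Needs": [ALL_AXIOMS],
    "14. Schedule": [ALL_AXIOMS],
    "15. Risks and Contingencies": ["Fallibility", "Event"],
    "16. Approvals": ["Stakeholder"],
}

# e.g. reviewing the pass/fail criteria section of a plan:
print(AXIOMS_FOR_829_ITEM["7. Item Pass/Fail Criteria"])
```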
21. 1. Stakeholder Objectives
Stakeholder management
Goal and risk management
Decisions to be made and how (acceptance)
How testing will provide confidence and be assessed
How scope will be determined
2. Design approach
Sources of knowledge (bases and oracles)
Sources of uncertainty
Models to be used for design and coverage
Prioritisation approach
3. Delivery approach
Test sequencing policy
Repeat test policies
Environment requirements
Information delivery approach
Incident management approach
Execution and end-game approach
4. Plan (high or low-level)
Scope
Tasks
Responsibilities
Schedule
Approvals
Risks and contingencies
26. Google search results:
"CMM" – 22,300,000
"CMM Training" – 48,200
"CMM improves quality" – 74 (BUT really 11 – most of these have NOTHING to do with software)
A Gerrard Consulting client…
CMM level 3 and proud of it (chaotic, hero culture)
Hired us to assess their overall s/w process and make recommendations (quality and time to deliver were slipping)
40+ recommendations, only 7 adopted – they couldn't change
How on earth did they get through the CMM 3 audit?
27. Using process change to fix cultural or organisational problems is never going to work
Improving test in isolation is never going to work either
Need to look at changing context rather than values…
30. The Equation of Testing:
Axioms (recognise)
+ Context (hard to change)
+ Values (could change?)
+ Thinking (just do some)
= Approach (your approach)
31. Axioms represent the critical things to think about
Associated questions act as checklists to:
Assess your current approach
Identify gaps, inconsistencies in current approach
QA your new approach in the future
Axioms represent the WHAT
Your approach specifies HOW
32. Mission
Coalition
Vision
Communication
Action
Wins
Consolidation
Anchoring
Changes identified here
If you must use one, this is where your ‘process model’ comes into play
34. Summary:
Choose test models to derive tests that are meaningful to stakeholders. Recognise the models' limitations and the assumptions that the models make.
Consequence if ignored or violated:
Test design will be meaningless and not credible to stakeholders.
Questions:
Are design models available to use as test models? Are they mandatory?
What test models could be used to derive tests from the Test Basis?
Which test models will be used?
Are test models to be documented or are they purely mental models?
What are the benefits of using these models?
What simplifying assumptions do these models make?
How will these models contribute to the delivery of evidence useful to the acceptance decision makers?
How will these models combine to provide sufficient evidence without excessive duplication?
How will the number of tests derived from models be bounded?
35. A tester needs to understand:
Test models and how to use them
How to select test models from fallible sources of knowledge
How to design test models from fallible sources of knowledge
Significance, authority and precedence of test models
How to use models to communicate
The limitations of test models
Familiarity with common models
Is this all that current certification provides?
36. Intellectual skills and capabilities are more important than the clerical skills
Need to re-focus on:
Testing thought processes (Axioms)
Testing Stakeholder relationship management
Testing as an information provision service
Goal and risk-based testing
Real-world examples, not theory
Practical, hands-on, real-world training, exercises and coaching.
38. As tests are run, every individual test has some significance
Some tests expose failures, but ultimately we want all tests to PASS
When all tests pass – the stakeholders are happy, aren't they?
Can we measure confidence by counting tests?
Not really...
39. Coverage model:
A test could cover one functional condition or hundreds; ten program statements or ten thousand
Objective:
Criticality of the business goal it exemplifies
Criticality of the risk it informs
Precedent:
The first end-to-end test pass is significant
The 100th variation of a similar test is less significant
Functional dependence:
A test of shared functionality used thousands of times per hour could be much more important than one of a peripheral feature used once a day
Stakeholder:
Are customers' tests more or less significant than suppliers' tests?
Context:
The same test run at different times in different environments can have different value.
40. Only a stakeholder can assign a value to a test (but that is a very hard thing to do)
A tester cannot quantify value, but can define its significance
A test is significant (to stakeholders) if it:
Can be related to a meaningful test objective
Increases coverage with respect to a meaningful test model
Is considered in an acceptance decision (at any level)
Significance is Boolean – it can only be 0 or 1
The number of insignificant tests should be zero.
41. Significance can only be assessed by testers if:
Our test objectives, models and coverage goals are meaningful (to stakeholders), or
Testers are authorised to create their own objectives, measures and coverage goals, or
Testers are their own stakeholder
Testers need a close, trusting relationship with their stakeholders, or authorised autonomy
E.g. exploratory testing won't work if stakeholders do not allow autonomy
Testers should not 'go it alone'.
42. Test coverage models and goals that generate uniform distributions of tests are inefficient and uninformative
We need more and better test models
Models that are meaningful in context
Significance varies with context and can be used to explain why
e.g. some tests aren't useful as regression tests
How much testing is enough?
Can never be answered by coverage alone.
43. Axioms are context-neutral rules for testing
The Equation of Testing
Separates axioms, context, values and thinking
We can have sensible conversations about process
Axioms and associated questions provide context neutral checklists for test strategy, assessment/improvement and skills
Quantum Testing separates significance from value; can it answer the question, “how much testing is enough?”
44. Thank-You!
testaxioms.com
testers-pocketbook.com
gerrardconsulting.com
uktmf.com