A Brief Essay on Software Testing - Antonia Bertolino, Eda Marchetti. Presented by Gargi Chipalkatti (Software Engineering II - EEL 6883)

Testing
• Testing refers to many different activities used to check a piece of software
• Even after the successful completion of extensive testing, the software can still contain faults
• Testing can never prove the absence of defects; it can only possibly reveal the presence of faults by provoking malfunctions

Definition - Testing
• Definition – Software testing consists of the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the specified expected behavior.
• Explanation
– Dynamic – means that testing implies executing the program on (valued) inputs
– Finite – means that only a limited number of test cases can be executed during the testing phase, chosen from the whole test set, which can generally be considered infinite
– Selected – refers to the test techniques adopted for selecting the test cases (and testers must be aware that different selection criteria may yield vastly different effectiveness)
– Expected – points to the decision process adopted for establishing whether the observed outcomes of program execution are acceptable or not
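
A minimal sketch of what one such test case looks like in practice (the function and values are hypothetical): the program is dynamically executed on a selected input drawn from an effectively infinite domain, and the observed outcome is compared against the specified expected behavior.

```python
def fahrenheit(celsius):
    # Hypothetical program under test; its input domain (all reals) is infinite.
    return celsius * 9 / 5 + 32

# One selected test case: a valued input plus the specified expected outcome.
selected_input, expected = 100, 212
observed = fahrenheit(selected_input)   # dynamic: the program is actually executed
assert observed == expected, f"got {observed}, expected {expected}"
```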

Goals for Testing
• Detection of "bugs" in the software
• Increasing confidence in the proper functioning of the software
• Assisting with the evaluation of functional and non-functional properties (as well as performance characteristics)
• Exposing potential design flaws
• Exposing deviations from the user's requirements
• Measuring the operational reliability
• Producing estimates of software reliability

Tasks associated with Testing
• Deriving an adequate suite of test cases, according to a feasible and cost-effective test selection technique
• The ability to launch the selected tests (in a controlled host environment, or worse, in the tight target environment of an embedded system)
• Deciding whether the test outcome is acceptable or not (which is referred to as the test oracle problem)
• Evaluating the impact of the failure and finding its direct cause (the fault), and the indirect one (via Root Cause Analysis)
• Judging whether testing is sufficient and can be stopped
• Measuring the effectiveness of the tests

Terms used in Testing
• Definitions
– Fault – The cause of a failure, e.g., a missing or wrong piece of code
– Error – An intermediate unstable state
– Failure – The manifested inability of the program to perform the function required, i.e., a system malfunction evidenced by incorrect output, abnormal termination, or unmet time and space constraints
• Relationship
– A fault may remain undetected for a long time, until some event activates it
– If and when the error propagates to the output, it eventually causes the failure: Fault → Error → Failure
– This chain can recursively iterate: a fault in turn can be caused by the failure of some other interacting system
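
The chain can be made concrete with a small hypothetical sketch: the fault is a wrong operator in the code, executing it yields an error (a corrupted intermediate state), and when that state propagates to the output the failure becomes observable.

```python
def average(values):
    total = 0
    for v in values:
        total -= v                      # FAULT: wrong operator ("-=" instead of "+=")
    # ERROR: "total" now holds a corrupted intermediate state.
    return total / len(values)

# FAILURE: the error propagates to the output, and the malfunction is observed.
print(average([2, 4, 6]))               # prints -4.0 instead of the required 4.0
```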

Software Reliability
• Software reliability is a probabilistic estimate, and measures the probability that the software will execute without failure in a given environment for a given period of time.
• Thus, the value of software reliability depends on how frequently those inputs that cause a failure will be exercised by the end users.
• Because the notion of reliability is specific to "a given environment", the tests must be drawn from an input distribution that approximates as closely as possible the future usage in operation, which is called the operational distribution.
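
A minimal sketch of how such an estimate might be computed (the program and the distribution are hypothetical): inputs are drawn from the operational distribution, and reliability is estimated as the fraction of runs that do not fail.

```python
import random

def program(x):
    # Hypothetical program under test: it fails on a small region of its domain.
    if 990 <= x < 1000:
        raise RuntimeError("failure")
    return x

random.seed(0)
runs, failures = 10_000, 0
for _ in range(runs):
    x = random.uniform(0, 1000)         # operational distribution (assumed uniform)
    try:
        program(x)
    except RuntimeError:
        failures += 1

print(f"estimated reliability: {1 - failures / runs:.3f}")   # close to 0.99
```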

Types of Test Techniques
• Static Analysis Techniques
– Based solely on the (manual or automated) examination of project documentation, of software models and code, and of other related information about requirements and design
– Generally yield valid results, but may be weak in precision
• Dynamic Test Techniques
– Exercise the software in order to expose possible failures
– Behavioral and performance properties of the program are also observed
– Yield more precise results, which however hold only for the examined executions

Static Techniques
• Can be applied at the requirements phase, at the design phase, and during the implementation stage
• Traditional Static Techniques – heavily manual, error-prone, time consuming
– Software inspection – The step-by-step analysis of the documents (deliverables) produced, against a compiled checklist of common and historical defects
– Software reviews – The process by which different aspects of the work product are presented to project personnel (managers, users, customers, etc.) and other interested stakeholders for comment or approval
– Code reading – The desktop analysis of the produced code for discovering typing errors that do not violate style or syntax
– Algorithm analysis and tracing – The process in which the complexity of the algorithms employed and the worst-case, average-case and probabilistic analysis evaluations can be derived
• Static Analysis Techniques relying on the use of Formal Methods
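
As a concrete flavor of the automated side, here is a minimal static-analysis sketch using Python's standard ast module (the specific check is an illustrative assumption, not one named in the slides): the source text is examined, never executed.

```python
import ast

SOURCE = """
def divide(a, b):
    return a / b
"""

tree = ast.parse(SOURCE)                # examine the code without running it
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        print(f"line {node.lineno}: function '{node.name}' lacks a docstring")
```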

Dynamic Techniques
• Testing – Based on the execution of the code on valued inputs (the definition of the parameters and environmental conditions that characterize a system state must be included when necessary)
• Profiling – A program profile records the number of times some entities of interest occur during a set of controlled executions
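
A minimal profiling sketch using Python's standard cProfile module: a single controlled execution is performed, and the resulting profile reports how many times each function (the entity of interest here) was entered.

```python
import cProfile

def fib(n):
    # Hypothetical program under profile.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# One controlled execution; the report counts the calls made to each function.
cProfile.run("fib(15)")
```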

Test Levels
• Unit Test
– A unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one programmer
– The purpose is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure
• Integration Test
– Integration is the process by which software pieces or components are aggregated to create a larger component
– Specifically aimed at exposing the issues that can arise at this stage
– Strategies – Big-Bang, Top-Down, Bottom-Up
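
A minimal unit-test sketch using Python's standard unittest module (the unit and its specification are hypothetical): the tests check that the unit satisfies its functional specification.

```python
import unittest

def leap_year(year):
    # Hypothetical unit under test: a small, single-programmer piece of code.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_400_year_exception(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```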

Test Levels Continued…
• System Test
– System test involves the whole system embedded in its actual hardware environment
– The main goals of system testing:
• Discovering the failures that manifest themselves only at system level, and hence were not detected during unit or integration testing
• Increasing the confidence that the developed product correctly implements the required capabilities
• Collecting information useful for deciding the release of the product
– System testing includes testing for performance, security, reliability, stress testing and recovery
• Acceptance Test
– An extension of system test. It is a test session conducted over the whole system, which mainly focuses on the usability requirements more than on the compliance of the implementation against some specification
– The aim is to verify that the effort required from end users to learn to use and fully exploit the system functionalities is acceptable

Regression Test
• Refers to the retesting of a unit, a combination of components or a whole system after modification, in order to ascertain that the change has not introduced new faults
• Corrective and evolutive modifications may be performed quite often. Re-running all previously executed test cases after each change would be prohibitively expensive.
• Therefore various types of techniques have been developed to reduce regression testing costs and make it more effective.
• Selective regression test techniques help in selecting a (minimized) subset of the existing test cases by examining the modifications (for instance at code level, using control flow and data flow analysis), as in the sketch below.
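
A minimal sketch of such a selective technique (all names are hypothetical): each test is mapped to the functions it covered in an earlier run, and after a change only the tests whose coverage intersects the modified functions are re-run.

```python
# Coverage data assumed to have been gathered in a previous full run.
COVERAGE = {
    "test_login":    {"authenticate", "hash_password"},
    "test_checkout": {"compute_total", "apply_discount"},
    "test_profile":  {"load_profile"},
}

def select_tests(modified_functions):
    # Keep only the tests that exercise at least one modified function.
    return sorted(test for test, covered in COVERAGE.items()
                  if covered & modified_functions)

print(select_tests({"apply_discount"}))   # -> ['test_checkout']
```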

Objectives of Testing
• Acceptance/Qualification Testing
• Installation Testing
• Alpha Testing
• Beta Testing
• Reliability Achievement
• Conformance Testing/Functional Testing
• Regression Testing
• Performance Testing
• Usability Testing
• Test-Driven Development

Strategies for Test Case Selection
• A test criterion provides a decision procedure for selecting the test cases
• Random Testing
– The test inputs are picked purely randomly from the whole input domain according to a specified distribution, i.e., after assigning different "weights" (more properly, probabilities) to the inputs
• Partition Testing
– The program input domain is divided into sub-domains within which the program is assumed to behave the same, i.e., for every point within a sub-domain the program either succeeds or fails
• Structural Testing
– Also known as code-based testing or white-box testing
– By monitoring code coverage one tries to exercise thoroughly all "program elements"
– Code-based criteria should more properly be used as adequacy criteria
• Specification-Based Testing
– Depending on how the program specifications are expressed, different techniques are possible: equivalence classes, boundary conditions, cause-effect graphs
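
A minimal random-testing sketch (the function and distribution are hypothetical): inputs are drawn from a specified distribution over the whole input domain, and a property that any correct implementation must satisfy is checked on each run.

```python
import random

def abs_value(x):
    # Hypothetical program under test.
    return x if x >= 0 else -x

random.seed(42)
for _ in range(1000):
    x = random.uniform(-1e6, 1e6)       # the specified input distribution
    assert abs_value(x) >= 0, f"failure provoked by input {x}"
print("1000 random tests passed")
```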

Other Test Criteria
• Criteria based on Tester's Intuition & Experience
– Ad-hoc testing techniques in which tests are derived relying on the tester's skill, intuition, and experience with similar programs
– Exploratory testing – simultaneous learning, test design, and test execution; that is, the tests are not defined in advance in an established test plan, but are dynamically designed, executed, and modified
• Fault-Based Testing
– Devise test cases specifically aimed at revealing categories of likely or pre-defined faults, with different degrees of formalization
– Error guessing – Test cases are designed by testers trying to figure out the most plausible faults in a given program. A good source of information is the history of faults discovered in earlier projects, as well as the tester's expertise
– Mutation testing – The underlying assumption is that, by looking for simple syntactic faults, more complex, but real, faults will be found
• Criteria based on Operational Usage
– The idea is to infer, from the observed test results, the future reliability of the software when in actual use
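
A minimal mutation-testing sketch (the program and tests are hypothetical): a single simple syntactic fault is seeded ("<" becomes "<="), and the suite is adequate for this mutant only if some test fails on it, i.e., the mutant is killed.

```python
ORIGINAL = "def is_minor(age):\n    return age < 18\n"
MUTANT = ORIGINAL.replace("<", "<=")    # seed one simple syntactic fault

def suite_passes(source):
    namespace = {}
    exec(source, namespace)             # load the (possibly mutated) unit
    is_minor = namespace["is_minor"]
    return is_minor(17) is True and is_minor(18) is False

print("original passes:", suite_passes(ORIGINAL))   # True
print("mutant killed:", not suite_passes(MUTANT))   # True: the age-18 test fails
```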

Test-First Programming
• Traditional Test Process – Test Planning, Test Design, Test Execution and Test Results Evaluation
• Test-first programming focuses on the derivation of (unit and acceptance) tests before coding
• The principle of the approach is to make development more lightweight by keeping design simple and reducing as much as possible the rules and activities of traditional processes that developers feel are overwhelming and unproductive
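
A minimal test-first sketch (names are hypothetical): the unit test below is written before the code it exercises; against an empty stub it fails, and the simplest implementation that follows makes it pass.

```python
import unittest

class ShoppingCartTest(unittest.TestCase):
    # Step 1: the test is derived before coding.
    def test_total_of_two_items(self):
        self.assertEqual(cart_total([3.50, 2.25]), 5.75)

# Step 2: the simplest implementation that satisfies the test.
def cart_total(prices):
    return sum(prices)

if __name__ == "__main__":
    unittest.main()
```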

Test Execution
• Launching the Tests
– Forcing the execution of the test cases (manually or automatically) derived according to one of the test criteria
• Test Oracles
– A test is meaningful only if it is possible to decide about its outcome
– An oracle is any (human or mechanical) agent that decides whether the program behaved correctly on a given test
– The oracle is specified to output a reject verdict if it observes a failure (or even an error, for smarter oracles), and approve otherwise
– In the event that the oracle cannot reach a decision while executing a test case, the test output is classified as inconclusive
• Test Tools
– The usage of appropriate tools can alleviate the burden of clerical, tedious operations, and make them less error-prone, while increasing testing efficiency and effectiveness
– Test harnesses (drivers, stubs), test generators, capture/replay tools, oracle/file comparators/assertion checkers, coverage analyzers/instrumenters, tracers, reliability evaluation tools
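
A minimal sketch of a mechanical oracle with the three verdicts described above (all names are hypothetical): approve, reject, and inconclusive when no decision can be reached, e.g., because no expected outcome is available.

```python
def oracle(program, test_input, expected=None):
    actual = program(test_input)
    if expected is None:
        return "inconclusive"           # the oracle cannot reach a decision
    return "approve" if actual == expected else "reject"

def square(x):
    return x * x                        # hypothetical program under test

print(oracle(square, 3, 9))             # approve
print(oracle(square, 3, 10))            # reject
print(oracle(square, 3))                # inconclusive
```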

Test Documentation
• Documentation is an integral part of the formalization of the test process, which contributes to the coordination and control of the testing phase.
• Documentation
– Test Plan
– Test Design Specification
– Test Case Specification
– Test Procedure Specification
– Test Log
– Test Incident or Problem Report

Test Management
• Different management activities
– Scheduling the timely completion of tasks
– Estimation of the effort and the resources needed to execute the tasks
– Quantification of the risk associated with the tasks
– Effort/Cost estimation
– Quality control measures to be employed

Test Measures
• Evaluation of the Program under Test
– Linguistic measures – based on properties of the program or of the specification text (Lines of Code (LOC), statements, number of unique operands or operators)
– Structural measures – based on structural relations between objects in the program, comprising control flow or data flow complexity (e.g., the frequency with which modules call each other)
– Hybrid measures – may result from the combination of the above
– Fault density – measured by the ratio between the number of faults found and the size of the program
– Life testing, reliability evaluation
• Evaluation of the Tests performed
– Coverage/thoroughness measures
– Effectiveness
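
As a worked instance of the fault-density measure above (the figures are hypothetical; normalizing per thousand lines of code, KLOC, is a common convention):

```python
def fault_density(faults_found, lines_of_code):
    # Faults found per thousand lines of code (KLOC).
    return faults_found / (lines_of_code / 1000.0)

print(fault_density(12, 8000))          # 1.5 faults per KLOC
```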

Conclusions
• The approaches overviewed include both traditional and modern techniques
• Software testing is a complex activity
• Resources need to be devoted to this activity for its success
• A test approach that guarantees a perfect product cannot be found

Thank You

Source: https://slidetodoc.com/a-brief-essay-on-software-testing-antonia-bertolino-2/
