Question
You are about to deploy your software program, but before you do, you must create a plan for the support, maintenance, and upgrades of your program.
Create a 3–5-page plan that includes the following information:
Support
The types of support you plan to offer your users
The type of support system you plan to implement
Include the things you need it to keep track of and why.
Maintenance
How you plan to receive and track each of the following change requests:
Corrective maintenance
Adaptive maintenance
Perfective maintenance
Upgrades
How often you plan to provide major (SP) and minor (patches) upgrades
The reasoning behind the details of your upgrade plan
How you will distribute each type of upgrade to your users
Testing
Discuss approach used to test the program and impacts on quality
Describe measures to test effectiveness of software
Explain how your plan to support, maintain and upgrade your program may impact testing
Explanation / Answer
Support :
The types of support you plan to offer your users
Annual Maintenance Plan Summary

| Plan | % of Price | Plan Hours | Plan Days | # of Contacts |
| --- | --- | --- | --- | --- |
| Standard Maintenance Plan | 15% | 8 A.M. - 8 P.M. ET | Monday - Friday | 2 |
| Silver Maintenance Plan | 20% | 8 A.M. - 8 P.M. ET | Monday - Friday | 4 |
| Gold Maintenance Plan | 25% | 24 Hours | Monday - Friday | 8 |
| Platinum Maintenance Plan | 30% | 24 Hours | All days | 12 |
| ASP Basic Maintenance Plan | 25% | 24 Hours | Monday - Friday | 8 |
| ASP Platinum Maintenance Plan | 30% | 24 Hours | All days | 12 |
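To make the things the support system needs to keep track of concrete, here is a minimal sketch of the records a support/ticketing system could store for the plans above. The class names, fields, and sample values are illustrative assumptions, not a specific tool's API.

```python
# Minimal sketch of support-plan and ticket records (all names are illustrative).
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Category(Enum):
    CORRECTIVE = "corrective"   # defect fix
    ADAPTIVE = "adaptive"       # change driven by the environment
    PERFECTIVE = "perfective"   # enhancement / new requirement


@dataclass
class SupportPlan:
    name: str
    pct_of_price: float   # annual fee as a % of the license price
    hours: str            # coverage window, e.g. "8 A.M. - 8 P.M. ET"
    days: str             # coverage days
    contacts: int         # number of named support contacts


@dataclass
class Ticket:
    ticket_id: int
    plan: SupportPlan
    category: Category
    summary: str
    product_version: str
    severity: str = "medium"
    status: str = "open"
    opened_at: datetime = field(default_factory=datetime.now)


gold = SupportPlan("Gold Maintenance Plan", 25.0, "24 Hours", "Monday - Friday", 8)
ticket = Ticket(1, gold, Category.CORRECTIVE, "Report export crashes", "2.3.1")
print(ticket.status, ticket.category.value)
```

Tracking the plan, change category, severity, status, and timestamps is what later allows the corrective/adaptive/perfective change requests and the response-time commitments above to be reported on.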
Corrective Maintenance
Preventive maintenance is generally considered to include both condition-monitoring and life-extending tasks which are scheduled at regular intervals. Some tasks, such as temperature and vibration measurements, must be done while the equipment is operating and others, such as internal cleaning, must be done while the equipment is shut down.
There is another, often overlooked, type of preventive maintenance inspection which can not be scheduled at regular intervals. These inspections can and should be done in conjunction with corrective maintenance. Corrective maintenance is defined as maintenance work which involves the repair or replacement of components which have failed or broken down. For failure modes which lend themselves to condition monitoring, corrective maintenance should be the result of a regular inspection which identifies the failure in time for corrective maintenance to be planned and scheduled, then performed during a routine plant outage.
When corrective maintenance is done, the equipment should be inspected to identify the reason for the failure and to allow action to be taken to eliminate or reduce the frequency of future similar failures. These inspections should be included in the work plan.
A good example is the failure of packing in a process pump. Packing can be monitored by checking leakage and the location of the gland follower, so repacking should not normally be a fixed-time maintenance task. It should be done at frequencies which depend on the operating context.
During the process of repacking the pump, there are a number of simple inspections related to packing life which can be performed. These include:
- Using a padded pry bar to lift the pump shaft to check for a failed bearing
- Checking the condition of the shaft or sleeve for excessive wear or surface irregularities (using a small light that can be inserted deep into the stuffing box)
- Checking that the packing sequence is correct and that the lantern ring is in the right location
- Checking the throat bushing for wear
- Checking that the packing is cut to the correct length and properly seated
- Checking the packing used is the correct type
- Checking the clearance behind the impeller, especially if it has back pump-out vanes
- Checking shaft run-out
- Checking seal water pressure
- Confirming that seal and cooling water connections are correct
Many of these inspections would not normally be done on a regular scheduled basis and can only be done during repacking.
In a well-managed maintenance system, inspections that should be done during corrective maintenance for a specific failure mode (such as packing failures) should be listed, recorded and used. So for any work order to repack a pump, the above check list should be attached to or included in the work order as a standard procedure. The standard should include measurements appropriate to the specific equipment, such as the allowable sleeve wear and the impeller clearance in the case of the process pump packing. A check list similar to the one for pump packing can be developed for many failure modes for common components, such as mechanical drives and hydraulic systems.
Integrating inspections that directly relate to failures into corrective maintenance work plans is a powerful tool to improve plant reliability.
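As a rough illustration of how such a checklist can be attached to a corrective-maintenance work order, here is a minimal sketch; the class names, the work-order data, and the shortened checklist items are hypothetical.

```python
# Sketch: attach a standard inspection checklist to a corrective work order
# so the checks are recorded every time the repair is done.
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    description: str
    done: bool = False
    reading: str = ""          # measured value, if the check requires one


@dataclass
class WorkOrder:
    order_id: int
    equipment: str
    failure_mode: str
    checklist: list = field(default_factory=list)


# Abbreviated standard checklist for the "pump packing failure" mode above.
PACKING_CHECKLIST = [
    "Lift shaft with padded pry bar to check for a failed bearing",
    "Inspect shaft/sleeve for excessive wear",
    "Verify packing sequence and lantern ring location",
    "Check throat bushing for wear",
    "Check clearance behind the impeller",
]

wo = WorkOrder(1042, "Process pump P-101", "packing failure",
               [ChecklistItem(d) for d in PACKING_CHECKLIST])
wo.checklist[0].done = True
print(sum(item.done for item in wo.checklist), "of", len(wo.checklist), "checks complete")
```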
Adaptive maintenance :
Maintenance is an integral part of the software life cycle, and an accurate maintenance plan should be prepared during software development. The plan should specify how users will request modifications or report problems, and the budget should include resource and cost estimates. A decision should be made about the development of every new system feature and its quality objectives. Software maintenance, which can last for 5–6 years (or even decades) after development, calls for an effective plan that addresses the scope of maintenance, the tailoring of the post-delivery/deployment process, the designation of who will provide maintenance, and an estimate of the life-cycle costs. Selecting and properly enforcing standards is a challenging task from the early stages of software engineering, and one that stakeholders often fail to give due importance.
Perfective Maintenance
Perfective maintenance mainly deals with implementing new or changed user requirements. It involves making functional enhancements to the system, as well as activities that improve the system's performance even when the changes have not been prompted by faults. This includes enhancing both the functionality and the efficiency of the code and changing the system's features as users' needs evolve.
Examples of perfective maintenance include modifying a payroll program to incorporate a new union settlement and adding a new report to a sales analysis system. Perfective maintenance accounts for roughly 50% of maintenance effort, the largest share of all the maintenance activities.
Upgrades
How often you plan to provide major (SP) and minor (patches) upgrades :
One of the most exciting — and at the same time, most frustrating — things about information technology is the rapidity with which changes occur and new hardware and software products emerge, with fantastic new "must have" features that make it easier for you and your employees to do your jobs.
There are two extreme approaches to upgrading:
- Some companies, especially those in the tech industry themselves, desire to be on the cutting edge and always have the latest and greatest. These early adopters toss out their old systems and replace them with the new as soon as it's available.
- Some companies, especially those in more "traditional" or conservative fields, go by the "if it's not broke, don't fix it" philosophy. They tend to hang onto their old tried and true technologies and upgrade only when forced to do so because a new feature set is absolutely required to get the job done or, more often, because the old machines finally break down and the currently available replacements don't support the old software.
These two strategies have something in common: they aren't really strategies at all. They're both reactive — the first driven by reaction to the marketing of new products, and the second driven by reaction to necessity. The smarter course is to develop a plan and timeline for upgrading, taking many factors into consideration — including scalability. A long-term plan must take into account future growth and changes anticipated by your company and the way those changes will affect your upgrade needs.
Categorize and prioritize
The first step in putting together your upgrade plan is to categorize and prioritize potential upgrades. Some categories of upgrade items include:
- Client computer systems
- Peripherals (monitors, scanners, printers)
- Client software
- Server hardware
- Server software
- Security devices (firewalls, biometric and smart card readers, IDS appliances)
- Network devices (routers, switches, network printers)
- Network infrastructure (cabling, Internet connection)
Of course, you don't have to upgrade all of the items in a particular category at the same time. For example, you might plan to upgrade the client computers in a particular department, such as the graphics department, based on those users' greater need for the features of new operating systems and applications. Or you might upgrade only certain servers, such as your mail servers, to take advantage of a new release of the mail server software that will make it easier to support your mobile users.
Thus you may want to create subcategories, such as "Client computers — accounting department," "Client computers — graphics department," "Client computers — upper management" and so forth. Similarly, software is a very broad category and you may upgrade operating systems and/or specific applications while continuing to use older versions of other applications.
Set priorities
Once you've identified the categories and subcategories, you can prioritize them based on the cost/benefits ratio of upgrading a particular category or subcategory. What items are serving as the bottlenecks that keep work from getting done as efficiently as it could with newer equipment and/or software? For example, if you're losing business from your e-commerce site because of the site's slow performance, speeding up the site is an upgrade priority. But you still must analyze the cause of the slow performance to determine whether you need to upgrade your Web server hardware, Web server software, or your network infrastructure (perhaps moving from a T-1 to a T-3 Internet connection, or spreading the load across a Web server farm instead of overloading a single server).
Of course, in real life your upgrade priorities may not always be based strictly on need. If the big boss wants the top of the line multi-processor workstation with 4 GB of memory and a high performance video card just to read email and compose the occasional Word document, there's a good chance he/she will get it. In general, though, don't over-upgrade. Plan to give your users the hardware and software that's required to most effectively perform their job tasks – and no more.
Consider dependencies
Upgrading some categories may be dependent on first upgrading other categories. For example, you may not be able to upgrade your productivity applications until you first upgrade the operating systems — and you might not be able to upgrade the operating systems until you've upgraded the hardware.
It can work the other way, too; if you upgrade the operating system, you might be forced to upgrade the productivity application because the old version doesn't run well (or at all) on the new operating system.
These dependencies affect your upgrade priorities and timelines.
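As a small illustration of working out such an order, the sketch below resolves upgrade dependencies with a topological sort. The item names are hypothetical, and it assumes Python 3.9+ for the standard-library graphlib module.

```python
# Each entry lists what must be upgraded before the item itself; a topological
# sort then yields a safe upgrade order.
from graphlib import TopologicalSorter

dependencies = {
    "productivity apps": {"operating system"},
    "operating system": {"client hardware"},
    "client hardware": set(),
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['client hardware', 'operating system', 'productivity apps']
```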
Phased rollout
Once you've decided to upgrade a particular category or subcategory, you shouldn't jump in feet first and roll out the upgrade to every system or device in that category. What if the upgrade causes major problems that make systems or the network unusable? The most prudent strategy is to roll out each upgrade in phases. Test it first in a non-production environment. This gives you a chance to work the bugs out without any impact on employee productivity.
Next, select a pilot group to test the upgrades in the production environment. If the new hardware or software entails a learning curve, roll it out first to power users, those who are more technically savvy and thus better able to handle the new way of doing things without overwhelming your support staff. Once they've mastered it, they'll serve as a resource for helping other users make the transition when you roll it out to the rest of the department or the rest of the company.
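A minimal sketch of gating such a phased rollout follows; the phase names and audiences are illustrative assumptions.

```python
# Phases are attempted strictly in order; the next phase is only offered once
# every earlier phase has been verified.
ROLLOUT_PHASES = [
    ("lab", "non-production test environment"),
    ("pilot", "power users in production"),
    ("department", "first full department"),
    ("company", "all remaining users"),
]


def next_phase(verified):
    """Return the first phase that has not yet been verified, or None if done."""
    for phase, audience in ROLLOUT_PHASES:
        if phase not in verified:
            return f"{phase} ({audience})"
    return None


# The lab and pilot phases have been verified, so the department phase is next.
print(next_phase({"lab", "pilot"}))
```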
Keeping it scalable
Your upgrade plan should be set out in writing, and you should get input from different departments and different levels to help you create a plan that will create the least disruption and proceed smoothly. You'll need to know about any plans for expansion (geographic and in terms of personnel), so you can include the additional locations and/or users in the upgrade plan. Likewise, you'll need to know if there are restructuring, consolidation or personnel cuts in the company's immediate future. It would be a waste of time and money to upgrade systems that will sit idle a few months down the road.
Upgrading can be costly and traumatic, but sooner or later it's inevitable. Proper planning, with scalability in mind, can make the difference between a smooth deployment of nifty new technologies and an upgrade disaster.
Testing approach and quality:
There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, as in proofreading, or when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality. Mutation testing can additionally be used to check that the test cases will detect errors which are introduced by mutating the source code.
The box approach
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.
White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing, by seeing the source code) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
- API testing – testing of the application using public and private APIs (application programming interfaces)
- Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
- Mutation testing methods
- Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[22] Code coverage as a software metric can be reported as a percentage for:
- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test
- Decision coverage, which reports on whether both the True and the False branch of a given test has been executed
100% statement coverage ensures that every statement in the code has been executed at least once, but it does not guarantee that every branch outcome has been taken, and even full coverage is not sufficient on its own, since the same code may process different inputs correctly or incorrectly.
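The sketch below illustrates the point with a hypothetical function: a single test can reach 100% statement coverage while leaving one decision outcome untested.

```python
def discount_cents(price_cents, is_member):
    """Apply a 10% member discount; prices are in integer cents."""
    rate_percent = 0
    if is_member:
        rate_percent = 10
    return price_cents * (100 - rate_percent) // 100


# This single case executes every statement of discount_cents
# (100% statement coverage) ...
assert discount_cents(10_000, True) == 9_000

# ... yet the False outcome of the `if` is never taken, so a defect affecting
# only non-members would go undetected. Decision coverage needs a second case:
assert discount_cents(10_000, False) == 10_000
print("both decision outcomes exercised")
```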
Black-box testing
Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it.[23] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.
Specification-based testing aims to test the functionality of software according to the applicable requirements.[24] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[25]
One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[26] Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.
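As a small black-box illustration, the sketch below applies equivalence partitioning and boundary value analysis to a hypothetical grade function; the inputs are chosen from its specification (0-59 is a fail, 60-100 is a pass, anything else is an error), not from its code.

```python
def grade(score):
    """Specification: 0-59 -> "F", 60-100 -> "P", anything else -> error."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "P" if score >= 60 else "F"


# One representative per equivalence class, plus values at each boundary.
cases = [(-1, ValueError), (0, "F"), (59, "F"), (60, "P"), (100, "P"), (101, ValueError)]
for score, expected in cases:
    if expected is ValueError:
        try:
            grade(score)
        except ValueError:
            continue
        raise AssertionError(f"expected an error for score {score}")
    assert grade(score) == expected
print("all equivalence-class and boundary cases passed")
```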
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[27][28]
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important in order to document the steps taken to uncover the bug.
Visual testing is gaining recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developers.
Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.[29] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.
However, tests that require modifying a back-end data repository, such as a database or a log file, do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.
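A minimal grey-box sketch of that seed, act, and verify pattern, using an in-memory SQLite database; the table and column names are illustrative, and the "action under test" is simulated rather than a real application call.

```python
import sqlite3

conn = sqlite3.connect(":memory:")            # isolated test environment
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'open')")    # seed a known state

# Action under test (in a real test this would be a call into the application).
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")

# Verify that the back-end state reflects the expected change.
status = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]
assert status == "shipped"
print("grey-box check passed:", status)
```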
Measures to test the effectiveness of the software:
In software projects, it is most important to measure the quality, cost, and effectiveness of the project and its processes. Without measuring these, a project cannot be completed successfully.
This section covers software test metrics and measurements, with examples, and how to use them in the software testing process.
There is a famous statement: “We can’t control things which we can’t measure”.
Here, controlling the project means how a project manager/lead can identify deviations from the test plan as early as possible in order to react at the right time. Generating test metrics based on the project's needs is very important for achieving the quality of the software being tested.
What are Software Testing Metrics?
A Metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
Metrics can be defined as “STANDARDS OF MEASUREMENT”.
Software Metrics are used to measure the quality of the project. Simply, Metric is a unit used for describing an attribute. Metric is a scale for measurement.
For example, "kilogram" is a unit for measuring the attribute "weight". Similarly, in software, consider "How many issues are found in a thousand lines of code?": the number of issues is one measurement and the number of lines of code is another measurement, and the metric is defined from these two measurements.
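As a tiny worked example of deriving a metric from two measurements (the numbers here are made up):

```python
defects_found = 45        # measurement 1
lines_of_code = 30_000    # measurement 2

defects_per_kloc = defects_found / (lines_of_code / 1000)   # derived metric
print(f"Defect density: {defects_per_kloc:.1f} defects per KLOC")  # 1.5
```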
Test metrics example:
- How many defects exist within the module?
- How many test cases are executed per person?
- What is the test coverage %?
What is Software Test Measurement?
Measurement is the quantitative indication of extent, amount, dimension, capacity, or size of some attribute of a product or process.
Test measurement example: Total number of defects.
The difference, in short: a measurement is a single raw data point, while a metric is derived from one or more measurements.
Why Test Metrics?
Generation of Software Test Metrics is the most important responsibility of the Software Test Lead/Manager.
Test Metrics are used to:
- Make decisions for the next phase of activities, such as estimating the cost and schedule of future projects
- Understand the kind of improvement required for the project to succeed
- Decide which process or technology should be modified, etc.
Importance of Software Testing Metrics:
As explained above, test metrics are essential for measuring the quality of the software.
Now, how can we measure the quality of the software by using Metrics?
If a project does not have any metrics, then how will the quality of the work done by a test analyst be measured?
For Example: A Test Analyst has to,
- Design the test cases for 5 requirements
- Execute the designed test cases
- Log the defects and fail the related test cases
- After a defect is resolved, re-test it and re-execute the corresponding failed test case
In the above scenario, if metrics are not followed, then the work completed by the test analyst will be subjective, i.e. the test report will not have the proper information to show the status of his/her work/project.
If Metrics are involved in the project, then the exact status of his/her work with proper numbers/data can be published.
I.e. in the Test report, we can publish:
1. How many test cases have been designed per requirement?
2. How many test cases are yet to design?
3. How many test cases are executed?
4. How many test cases are passed/failed/blocked?
5. How many test cases are not yet executed?
6. How many defects are identified & what is the severity of those defects?
7. How many test cases are failed due to one particular defect? etc.
Based on the project's needs, we can have more metrics than the ones listed above to know the status of the project in detail.
Based on the above metrics, the test lead/manager will get an understanding of the following key points.
a) %ge of work completed
b) %ge of work yet to be completed
c) Time to complete the remaining work
d) Whether the project is going as per the schedule or lagging? etc.
Based on the metrics, if the project is not going to be completed as per the schedule, then the manager will raise the alarm to the client and other stakeholders, providing the reasons for the lag, in order to avoid last-minute surprises.
Metrics Life Cycle:
Types of Manual Test Metrics:
Testing Metrics are mainly divided into 2 categories.
- Base Metrics
- Calculated Metrics
Base Metrics:
Base Metrics are the Metrics which are derived from the data gathered by the Test Analyst during the test case development and execution.
This data is tracked throughout the test life cycle, i.e. data such as the total no. of test cases developed for a project, the no. of test cases to be executed, and the no. of test cases passed/failed/blocked, etc.
Calculated Metrics:
Calculated Metrics are derived from the data gathered in Base Metrics. These Metrics are generally tracked by the test lead/manager for Test Reporting purpose.
Examples of Software Testing Metrics:
Let’s take an example to calculate various test metrics used in software test reports:
Below is the data retrieved from the test analyst who is actually involved in testing:

| Data item | Value |
| --- | --- |
| No. of requirements | 5 |
| Total no. of test cases written | 100 |
| No. of test cases executed | 65 |
| No. of test cases not executed | 35 |
| No. of test cases passed | 30 |
| No. of test cases failed | 26 |
| No. of test cases blocked | 9 |
| Total no. of defects identified | 30 |
| Critical defects | 6 |
| High defects | 10 |
| Medium defects | 6 |
| Low defects | 8 |
Definitions and Formulas for Calculating Metrics:
#1) %ge Test cases Executed: This metric is used to obtain the execution status of the test cases in terms of %ge.
%ge Test cases Executed = (No. of Test cases executed / Total no. of Test cases written) * 100.
So, from the above data,
%ge Test cases Executed = (65 / 100) * 100 = 65%
#2) %ge Test cases not executed: This metric is used to obtain the pending execution status of the test cases in terms of %ge.
%ge Test cases not executed = (No. of Test cases not executed / Total no. of Test cases written) * 100.
So, from the above data,
%ge Test cases not executed = (35 / 100) * 100 = 35%
#3) %ge Test cases Passed: This metric is used to obtain the Pass %ge of the executed test cases.
%ge Test cases Passed = (No. of Test cases Passed / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Passed = (30 / 65) * 100 = 46%
#4) %ge Test cases Failed: This metric is used to obtain the Fail %ge of the executed test cases.
%ge Test cases Failed = (No. of Test cases Failed / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Failed = (26 / 65) * 100 = 40%
#5) %ge Test cases Blocked: This metric is used to obtain the blocked %ge of the executed test cases. A detailed report can be submitted by specifying the actual reason of blocking the test cases.
%ge Test cases Blocked = (No. of Test cases Blocked / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Blocked = (9 / 65) * 100 = 14%
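The execution-status metrics #1 to #5 above can be reproduced directly from the example data (100 written, 65 executed, 30 passed, 26 failed, 9 blocked); a minimal sketch:

```python
written, executed = 100, 65
passed, failed, blocked = 30, 26, 9


def pct(part, whole):
    """Percentage of part relative to whole, rounded to two decimals."""
    return round(part / whole * 100, 2)


print("%ge executed:    ", pct(executed, written))            # 65.0
print("%ge not executed:", pct(written - executed, written))  # 35.0
print("%ge passed:      ", pct(passed, executed))             # 46.15
print("%ge failed:      ", pct(failed, executed))             # 40.0
print("%ge blocked:     ", pct(blocked, executed))            # 13.85
```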
#6) Defect Density = No. of Defects identified / size
(Here "size" is taken as the number of requirements, so defect density is calculated as the number of defects identified per requirement. Similarly, defect density can be calculated as the number of defects identified per 100 lines of code, or the number of defects identified per module, etc.)
So, from the above data,
Defect Density = (30 / 5) = 6
#7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of Defects found during QA testing +No. of Defects found by End user)) * 100
DRE is used to identify the test effectiveness of the system.
Suppose, During Development & QA testing, we have identified 100 defects.
After the QA testing, during Alpha & Beta testing, end user / client identified 40 defects, which could have been identified during QA testing phase.
Now, The DRE will be calculated as,
DRE = [100 / (100 + 40)] * 100 = [100 /140] * 100 = 71%
#8) Defect Leakage: Defect Leakage is the metric used to measure the efficiency of QA testing, i.e., how many defects were missed/slipped during QA testing.
Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing.) * 100
Suppose, During Development & QA testing, we have identified 100 defects.
After the QA testing, during Alpha & Beta testing, end user / client identified 40 defects, which could have been identified during QA testing phase.
Defect Leakage = (40 /100) * 100 = 40%
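A small sketch that reproduces the defect density, DRE, and defect leakage calculations above from the same example figures:

```python
defects_in_qa = 100        # defects found during development & QA testing
defects_by_users = 40      # defects found in UAT / alpha & beta testing
defects_in_report = 30     # defects from the test-report example
requirements = 5           # the "size" used for defect density

defect_density = defects_in_report / requirements
dre = defects_in_qa / (defects_in_qa + defects_by_users) * 100
defect_leakage = defects_by_users / defects_in_qa * 100

print(f"Defect density: {defect_density:.0f} defects per requirement")  # 6
print(f"DRE:            {dre:.0f}%")                                     # 71%
print(f"Defect leakage: {defect_leakage:.0f}%")                          # 40%
```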
#9) Defects by Priority: This metric is used to identify the no. of defects identified based on the Severity / Priority of the defect which is used to decide the quality of the software.
%ge Critical Defects = No. of Critical Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Critical Defects = 6/ 30 * 100 = 20%
%ge High Defects = No. of High Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge High Defects = 10/ 30 * 100 = 33.33%
%ge Medium Defects = No. of Medium Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Medium Defects = 6/ 30 * 100 = 20%
%ge Low Defects = No. of Low Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Low Defects = 8 / 30 * 100 = 26.67%
Finally, how the plan to support, maintain, and upgrade the program may impact testing: if everything is planned as described above, the impact on testing stays small and predictable, because each corrective, adaptive, or perfective change request and each patch or service pack is simply fed back into the existing test process and regression tested against the affected areas before release.