Quality Management in Software Engineering

To use the GQM paradigm, we first express the overall goals of the organization. Then we generate questions whose answers will tell us whether those goals are being met. Finally, we analyze each question to determine what must be measured in order to answer it. Typical goals are expressed in terms of productivity, quality, risk, customer satisfaction, and so on.

Goals and questions should be constructed with their audience in mind. Example: characterize the product in order to learn it. Example: examine the defects from the viewpoint of the customer. Example: the customers of this software are those who have no knowledge about the tools. According to the maturity level of the process defined by the SEI, the type of measurement and the measurement program will differ. The following measurement programs can be applied at each of the maturity levels.
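
As a sketch, the goal-question-metric hierarchy can be written down as plain data before any measurement begins; the goal, questions, and metric names below are invented for illustration.

```python
# A minimal sketch of a GQM (Goal-Question-Metric) tree as plain data.
# The goal, questions, and metrics shown are illustrative examples,
# not prescribed by the paradigm itself.
gqm = {
    "goal": "Improve product quality from the viewpoint of the customer",
    "questions": {
        "How many defects reach the customer?": [
            "post-release defect count",
            "defect density (defects/KLOC)",
        ],
        "How quickly are reported defects fixed?": [
            "mean time to repair",
        ],
    },
}

# Walking the tree top-down yields the flat list of metrics to collect.
metrics = [m for ms in gqm["questions"].values() for m in ms]
print(metrics)
```

Keeping the tree explicit makes it easy to check that every metric collected traces back to a question, and every question to a goal.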

At Level 1 (Initial), the inputs are ill-defined, while the outputs are expected. The transition from input to output is undefined and uncontrolled.

At Level 2 (Repeatable), baseline measurements are needed to provide a starting point for measuring. At this level, the inputs and outputs of the process, its constraints, and its resources are identifiable. A repeatable process can be described by the following diagram. The input measures can be the size and volatility of the requirements. The output may be measured in terms of system size, the resources in terms of staff effort, and the constraints in terms of cost and schedule.

At Level 3 (Defined), intermediate activities are defined, and their inputs and outputs are known and understood. A simple example of a defined process is described in the following figure. The input to and output from the intermediate activities can be examined, measured, and assessed. At this level, feedback from early project activities can be used to set priorities for current and later project activities, and we can measure the effectiveness of the process activities.

At Level 4 (Managed), the measurements reflect characteristics of the overall process and of the interaction among and across major activities. Measures from activities are used to improve the process by removing and adding process activities and changing the process structure dynamically in response to measurement feedback.

At Level 5 (Optimizing), process change can affect the organization and the project as well as the process itself. The process acts as a sensor and monitor, and we can change it significantly in response to warning signs.

At a given maturity level, we can collect the measurements for that level and all the levels below it. Process maturity suggests measuring only what is visible; thus, combining process maturity with GQM provides the most useful measures. At Level 1, the project is likely to have ill-defined requirements, so measuring requirement characteristics is difficult.

At Level 2, the requirements are well-defined, and additional information such as the type of each requirement and the number of changes to each type can be collected. At Level 3, intermediate activities are defined, with entry and exit criteria for each activity. The goal and question analysis will be the same, but the metrics will vary with maturity.

The more mature the process, the richer will be the measurements. The GQM paradigm, in concert with the process maturity, has been used as the basis for several tools that assist managers in designing measurement programs. GQM helps to understand the need for measuring the attribute, and process maturity suggests whether we are capable of measuring it in a meaningful way.

Together they provide a context for measurement. Measures or measurement systems are used to assess an existing entity by numerically characterizing one or more of its attributes. A measure is valid if it accurately characterizes the attribute it claims to measure. Validating a software measurement system is the process of ensuring that the measure is a proper numerical characterization of the claimed attribute, by showing that the representation condition is satisfied.

For validating a measurement system, we need both a formal model that describes the entities and a numerical mapping that preserves the attribute we are measuring. For example, if there are two programs P1 and P2, and we concatenate them, then we expect any measure m of length to satisfy m(P1; P2) = m(P1) + m(P2). If a program P1 has more length than program P2, then any measure m should also satisfy m(P1) > m(P2). The length of a program can be measured by counting its lines of code.
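
A minimal sketch of this check for lines of code, assuming a toy pair of programs: concatenation should add the measures, and the longer program should receive the larger measure.

```python
# A sketch of checking that lines of code (LOC) behaves as a valid
# length measure: concatenating two programs should add their lengths,
# and a longer program should get a larger measure. The two programs
# here are invented.
def loc(program: str) -> int:
    """Measure length as the number of non-blank lines."""
    return sum(1 for line in program.splitlines() if line.strip())

p1 = "x = 1\ny = 2\n"
p2 = "print(x + y)\n"

# Representation condition for concatenation: m(P1;P2) == m(P1) + m(P2)
assert loc(p1 + p2) == loc(p1) + loc(p2)

# Order preservation: P1 is longer than P2, so m(P1) > m(P2)
assert loc(p1) > loc(p2)
```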

If this count satisfies the above relationships, we can say that lines of code are a valid measure of length. The formal requirement for validating a measure involves demonstrating that it characterizes the stated attribute in the sense of measurement theory. Prediction systems are used to predict some attribute of a future entity using a mathematical model with associated prediction procedures. Validating a prediction system in a given environment is the process of establishing the accuracy of the prediction system by empirical means, i.e., by experimentation and hypothesis testing.

The degree of accuracy acceptable for validation depends on whether the prediction system is deterministic or stochastic, as well as on the person doing the assessment. Some stochastic prediction systems are more stochastic than others. Examples of stochastic prediction systems include software cost estimation, effort estimation, and schedule estimation.

Hence, to validate a prediction system formally, we must decide how stochastic it is and then compare the performance of the prediction system with known data.
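
As an illustrative sketch (the criterion and figures are assumptions, not from the text), one common way to compare predictions against known data is the mean magnitude of relative error (MMRE):

```python
# A sketch of validating a stochastic prediction system (e.g. effort
# estimation) against known data. MMRE (mean magnitude of relative
# error) is one widely used accuracy criterion; the figures below are
# invented for illustration.
def mmre(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual_effort = [120.0, 80.0, 200.0]      # person-days, known outcomes
predicted_effort = [100.0, 90.0, 210.0]   # the model's earlier predictions

error = mmre(actual_effort, predicted_effort)
# A common (informal) acceptance threshold is MMRE <= 0.25.
print(f"MMRE = {error:.3f}")
```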

A software metric is a standard of measure covering many activities, all of which involve some degree of measurement. Software metrics can be classified into three categories: product metrics, process metrics, and project metrics. Product metrics describe characteristics of the product such as size, complexity, design features, performance, and quality level. Process metrics can be used to improve software development and maintenance.

Examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process. Project metrics describe the project characteristics and execution. Software measurement is a diverse collection of these activities that range from models predicting software project costs at a specific stage to measures of program structure. Effort is expressed as a function of one or more variables such as the size of the program, the capability of the developers and the level of reuse.

Cost and effort estimation models have been proposed to predict the project cost during early phases in the software life cycle. Productivity can be considered as a function of the value and the cost.

Each can be decomposed into different measurable components: size, functionality, time, money, etc. The different possible components of a productivity model can be expressed in the following diagram. The quality of any measurement program clearly depends on careful data collection. Collected data can be distilled into simple charts and graphs so that managers can understand the progress and problems of the development.

Data collection is also essential for the scientific investigation of relationships and trends. Quality models have been developed to measure the quality of the product, without which productivity is meaningless. These quality models can be combined with the productivity model to measure productivity correctly.

These models are usually constructed in a tree-like fashion. The upper branches hold important high-level quality factors such as reliability and usability. This divide-and-conquer approach has become a standard way of measuring software quality.

Most quality models include reliability as a component factor; however, the need to predict and measure reliability has led to a separate specialization in reliability modeling and prediction.

The basic problem in reliability theory is to predict when a system will eventually fail. Performance evaluation is a related concern: it includes externally observable system performance characteristics such as response times and completion rates, as well as the internal workings of the system, such as the efficiency of algorithms.

Structural and complexity metrics address another aspect of quality. Here we measure structural attributes of representations of the software that are available in advance of execution, and then try to establish empirically predictive theories to support quality assurance, quality control, and quality prediction. Capability maturity assessment can evaluate many different attributes of development, including the use of tools, standard practices, and more.

It is based on the key practices that every good contractor should be using. Measurement plays a vital role in managing the software project.

To check whether the project is on track, users and developers can rely on measurement-based charts and graphs. A standard set of measurements and reporting methods is especially important when the software is embedded in a product whose customers are not usually well-versed in software terminology. Evaluating methods and tools depends on experimental design, proper identification of the factors likely to affect the outcome, and appropriate measurement of factor attributes.

Success in software measurement lies in the quality of the data collected and analyzed. Are they correct? Are they accurate? Are they appropriately precise? Are they consistent? Are they associated with a particular activity or time period?

Can they be replicated? The data should also be easy to replicate; for example, the weekly timesheets of the employees in an organization. Collection of data requires human observation and reporting: managers, system analysts, programmers, testers, and users must record raw data on forms.

Provide the results of data capture and analysis to the original providers promptly and in a useful form that will assist them in their work.

Once the set of metrics is clear and the set of components to be measured has been identified, devise a scheme for identifying each activity involved in the measurement process. Data collection planning must begin when project planning begins. Actual data collection takes place during many phases of development.

An example of a database structure is shown in the following figure. This database will store the details of different employees working in different departments of an organization. In the above diagram, each box is a table in the database, and the arrow denotes the many-to-one mapping from one table to another. The mappings define the constraints that preserve the logical consistency of the data. Once the database is designed and populated with data, we can make use of the data manipulation languages to extract the data for analysis.
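
A minimal sketch of such a database, assuming hypothetical employee and department tables; the foreign key implements the many-to-one mapping, and a query in a data manipulation language (SQL) extracts the data for analysis.

```python
# A sketch of the employee/department structure described above, using
# SQLite. Table and column names are assumptions for illustration; the
# foreign key is the many-to-one mapping from employees to departments.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")
con.execute(
    "CREATE TABLE employee ("
    " id INTEGER PRIMARY KEY, name TEXT,"
    " dept_id INTEGER REFERENCES department(id))"
)
con.execute("INSERT INTO department VALUES (1, 'Testing'), (2, 'Design')")
con.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "Asha", 1), (2, "Ben", 1), (3, "Carla", 2)],
)

# Extract data for analysis with a join across the many-to-one mapping.
rows = con.execute(
    "SELECT d.name, COUNT(*) FROM employee e"
    " JOIN department d ON e.dept_id = d.id GROUP BY d.name"
).fetchall()
print(rows)
```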

After collecting relevant data, we have to analyze it in an appropriate way. There are three major items to consider for choosing the analysis technique. To analyze the data, we must also look at the larger population represented by the data as well as the distribution of that data.

Sampling is the process of selecting a set of data from a large population. Sample statistics describe and summarize the measures obtained from a group of experimental subjects.

Population parameters represent the values that would be obtained if all possible subjects were measured. The population or sample can be described by the measures of central tendency such as mean, median, and mode and measures of dispersion such as variance and standard deviation.
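
These descriptive statistics can be computed directly with Python's standard library; the sample of module sizes below is invented.

```python
# A sketch of the measures of central tendency and dispersion
# mentioned above, applied to an invented sample of module sizes.
import statistics

sizes = [120, 150, 150, 200, 480]  # e.g. LOC per module (illustrative)

print(statistics.mean(sizes))      # central tendency: mean
print(statistics.median(sizes))    # central tendency: median
print(statistics.mode(sizes))      # central tendency: mode
print(statistics.variance(sizes))  # dispersion: sample variance
print(statistics.stdev(sizes))     # dispersion: sample standard deviation
```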

Many sets of data are distributed normally, as shown in the following graph. As shown above, such data is evenly distributed about the mean. Other distributions exist where the data is skewed, with more data points on one side of the mean than the other.

For example, if most of the data lies to the left of the mean, the long tail of larger values to the right pulls the mean above the median, and the distribution is said to be skewed to the right (positively skewed).
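
A quick informal check of skew direction, assuming invented defect counts, uses the mean/median relationship: when the mean exceeds the median, the long tail is on the right.

```python
# An informal skew-direction check via the mean/median relationship.
# The defect counts below are invented; the large outlier creates a
# long right tail.
import statistics

defect_counts = [1, 1, 2, 2, 3, 3, 4, 20]

mean = statistics.mean(defect_counts)
median = statistics.median(defect_counts)
tail = "right" if mean > median else "left" if mean < median else "none"
print(mean, median, tail)
```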

To achieve each of these, the objective should be expressed formally in terms of the hypothesis, and the analysis must address the hypothesis directly. The investigation must be designed to explore the truth of a theory. The theory usually states that the use of a certain method, tool, or technique has a particular effect on the subjects, making it better in some way than another.

If there are more than two groups to compare, a general analysis-of-variance test based on the F-statistic can be used. If the data is non-normal, it can be analyzed with the Kruskal-Wallis test by ranking it. Other investigations are designed to determine the relationship among data points describing one variable or multiple variables.

There are three techniques to answer the questions about a relationship: box plots, scatter plots, and correlation analysis. Correlation analysis uses statistical methods to confirm whether there is a true relationship between two attributes.

For normally distributed values, use the Pearson correlation coefficient to check whether or not the two variables are highly correlated. For non-normal data, rank the data and use the Spearman rank correlation coefficient as a measure of association. Another measure for non-normal data is the Kendall robust correlation coefficient, which investigates the relationship among pairs of data points and can identify a partial correlation.
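
A sketch of the first two coefficients, with invented data: Spearman's coefficient is simply Pearson's coefficient applied to the ranks (ties are not handled here, for brevity).

```python
# Pearson correlation on raw (assumed normal) values, and Spearman as
# Pearson on the ranks. The module-size/defect data are invented, and
# the rank function ignores ties for simplicity.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

module_size = [100, 200, 300, 400, 500]
defects = [2, 3, 5, 4, 9]

print(pearson(module_size, defects))                # Pearson on raw values
print(pearson(ranks(module_size), ranks(defects)))  # Spearman via ranks
```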

If the ranking contains a large number of tied values, a chi-squared test on a contingency table can be used to test the association between the variables. Similarly, linear regression can be used to generate an equation to describe the relationship between the variables. At the same time, the complexity of analysis can influence the design chosen.

For complex factorial designs with more than two factors, more sophisticated tests of association and significance are needed. Statistical techniques can be used to account for the effect of one set of variables on others, or to compensate for timing or learning effects. Internal product attributes describe the software products in a way that depends only on the product itself. The major reason for measuring internal product attributes is that doing so helps monitor and control the products during development.

The main internal product attributes are size and structure. Size can be measured statically, without executing the product. The size of the product tells us about the effort needed to create it; similarly, the structure of the product plays an important role in designing for its maintenance. There are three development products whose size measurement is useful for predicting the effort needed to produce them: specification, design, and code.

These documents usually combine text, graphs, and special mathematical diagrams and symbols. Specification measurement can be used to predict the length of the design, which in turn is a predictor of code length. The diagrams in the documents have a uniform syntax, such as labelled digraphs, data-flow diagrams, or Z schemas. Since specification and design documents consist of text and diagrams, their length can be measured as a pair of numbers representing the text length and the diagram length.

For these measurements, the atomic objects are to be defined for different types of diagrams and symbols. The atomic objects for data flow diagrams are processes, external entities, data stores, and data flows.

The atomic entities for algebraic specifications are sorts, functions, operations, and axioms. The atomic entities for Z schemas are the various lines appearing in the specification.
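
For a data-flow diagram, the diagram-length component can then be obtained by counting the atomic objects; the diagram contents below are invented.

```python
# A sketch of measuring diagram length by counting atomic objects.
# For a data-flow diagram the atomic objects are processes, external
# entities, data stores, and data flows; the contents are invented.
dfd = {
    "processes": ["validate order", "ship order"],
    "external entities": ["customer"],
    "data stores": ["orders"],
    "data flows": ["order", "confirmation", "shipment"],
}

diagram_length = sum(len(objects) for objects in dfd.values())
print(diagram_length)  # 2 + 1 + 1 + 3 = 7
```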

Code can be produced in different ways, such as with procedural languages, object orientation, or visual programming. The most commonly used traditional measure of source-code program length is lines of code (LOC). Apart from lines of code, alternatives such as the size and complexity measures suggested by Maurice Halstead can also be used to measure length.

He proposed three internal program attributes, length, vocabulary, and volume, that reflect different views of size. He began by defining a program P as a collection of tokens, classified as operators or operands. With mu1 distinct operators, mu2 distinct operands, N1 total operator occurrences, and N2 total operand occurrences, the basic metrics are: length N = N1 + N2, vocabulary mu = mu1 + mu2, and volume V = N log2(mu). Halstead's effort E, derived from these, is measured in the elementary mental discriminations needed to understand P. Object-oriented development suggests new ways to measure length.
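
A sketch of these size measures from assumed token counts:

```python
# Halstead's size measures from token counts. The counts below are
# invented; mu1/mu2 are distinct operators/operands and N1/N2 their
# total occurrences.
import math

mu1, mu2 = 5, 4    # distinct operators, distinct operands
N1, N2 = 12, 9     # total operator and operand occurrences

length = N1 + N2                          # program length N
vocabulary = mu1 + mu2                    # vocabulary mu
volume = length * math.log2(vocabulary)   # volume V = N log2(mu)

print(length, vocabulary, round(volume, 1))
```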

Pfleeger et al. found that a count of objects and methods led to more accurate productivity estimates than those using lines of code. The amount of functionality inherent in a product gives another measure of product size. There are many different methods to measure the functionality of software products. Function point metrics provide a standardized method for measuring the various functions of a software application. Function point analysis is a standard method for measuring software development from the user's point of view.

FP (Function Point) is the most widespread functional-type metric suitable for quantifying a software application. It is based on five user-identifiable logical "functions", which are divided into two data function types and three transactional function types.

For a given software application, each of these elements is quantified and weighted by counting its characteristic elements, such as file references or logical fields. A distinct final formula is used for each count type: Application, Development Project, or Enhancement Project. External Inputs (EIs) are elementary processes in which derived data passes across the boundary from outside to inside.

In an example library database system, entering an existing patron's library card number is an external input. External Outputs (EOs) are elementary processes in which derived data passes across the boundary from inside to outside. In the example library system, displaying a list of books checked out to a patron is an external output.

External Inquiries (EQs) are elementary processes with both input and output components that result in data retrieval from one or more internal logical files and external interface files. In the example library system, determining what books are currently checked out to a patron is an external inquiry. Internal Logical Files (ILFs) are user-identifiable groups of logically related data that reside entirely within the application's boundary and are maintained through external inputs.

In the example library system, the file of books in the library is an internal logical file. External Interface Files (EIFs) are user-identifiable groups of logically related data that are used for reference purposes only and reside entirely outside the system.

In the example library system, the file containing transactions in the library's billing system is an external interface file. Based on the following table, an EI that references 2 files and 10 data elements would be ranked as average. Based on the following table, an ILF that contains 10 data elements and 5 fields would be ranked as high.

Weigh each GSC (general system characteristic) on a scale of 0 to 5, from no influence to strong influence. Complexity is another internal attribute, and it has two aspects. One aspect of complexity is efficiency, which applies to any software product that can be modeled as an algorithm. For example, if an algorithm for solving all instances of a particular problem requires f(n) computations, then f(n) is asymptotically optimal if, for every other algorithm with complexity g that solves the problem, f is O(g).
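
A sketch of an unadjusted function point count using the standard IFPUG complexity weights, together with the value adjustment factor derived from the 14 GSC ratings; the counts and ratings below are invented.

```python
# An unadjusted function point (UFP) count using the standard IFPUG
# complexity weights, plus the value adjustment factor (VAF) computed
# from the 14 GSC ratings. Counts and ratings are invented.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
}

counts = {  # (function type, complexity) -> number of functions
    ("EI", "average"): 3,
    ("EO", "low"): 2,
    ("EQ", "average"): 1,
    ("ILF", "high"): 1,
    ("EIF", "low"): 1,
}

ufp = sum(n * WEIGHTS[ftype][cplx] for (ftype, cplx), n in counts.items())

gsc = [3] * 14                  # each GSC rated 0..5
vaf = 0.65 + 0.01 * sum(gsc)    # value adjustment factor
adjusted_fp = ufp * vaf

print(ufp, round(adjusted_fp, 1))
```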

Measurement of the structural properties of software is important for estimating the development effort as well as for maintaining the product. The structure of requirements, design, and code helps us understand the difficulty that arises in converting one product to another, in testing a product, or in predicting external software attributes from early internal product measures.

Control-flow measures are usually modeled with a directed graph, where each node or point corresponds to a program statement, and each arc or directed edge indicates the flow of control from one statement to another. These graphs are called control-flow graphs. Information flow can be intra-modular (the flow of information within a module) or inter-modular (the flow of information between individual modules and the rest of the system).
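
As an illustrative sketch, a control-flow graph can be held as adjacency lists; McCabe's cyclomatic complexity V(G) = E - N + 2 (an assumption here, as one common measure over such graphs) then falls out of the node and edge counts.

```python
# A control-flow graph as adjacency lists, with McCabe's cyclomatic
# complexity V(G) = E - N + 2 as one common structural measure. The
# graph models a single if/else and is invented for illustration.
cfg = {  # node -> successor statements
    "entry": ["if"],
    "if": ["then", "else"],
    "then": ["join"],
    "else": ["join"],
    "join": ["exit"],
    "exit": [],
}

nodes = len(cfg)
edges = sum(len(succ) for succ in cfg.values())
cyclomatic = edges - nodes + 2
print(nodes, edges, cyclomatic)
```

One branch gives V(G) = 2, matching the two independent paths through the if/else.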

Locally, the amount of structure in each data item is measured. A graph-theoretic approach can be used to analyze and measure the properties of individual data structures. In this approach, simple data types such as integers, characters, and Booleans are viewed as primes, and the various operations that enable us to build more complex data structures are considered.

Software Quality Assurance (SQA) is the set of activities which ensure that processes, procedures, and standards are suitable for the project and are implemented correctly. It is a process that works in parallel with the development of the software.

It focuses on improving the process of development of software so that problems can be prevented before they become a major issue.

Software Quality Assurance is a kind of umbrella activity that is applied throughout the software process. Software Quality Assurance involves:
- A quality management approach
- Formal technical reviews
- A multi-testing strategy
- Effective software engineering technology
- Measurement and reporting mechanisms

Major Software Quality Assurance activities include the SQA Management Plan: make a plan for how you will carry out SQA throughout the project.

Think about which set of software engineering activities is best for the project. Evaluate the performance of the project on the basis of data collected at different checkpoints. Multi-testing strategy: do not depend on a single testing approach. Software Quality Management is a term used to describe the management aspects of developing quality software.

Software Quality Management begins with an idea for a product and continues through the design, testing, and launch phases. A management process made up of a few different steps, SQM can be broken down most simply into three phases: quality planning, quality assurance, and quality control.

Quality planning involves the creation of goals and objectives for your software, as well as the creation of a strategic plan that will help you to successfully meet the objectives you lay out.


