# I. Introduction

The purpose of regression testing (R/T) is to repair the adverse effects caused by the addition of new features and the deletion of old ones in software. Test case prioritization (TCP) schedules test case execution in order to maximize objective functions or goals, most commonly the rate of fault detection, during software use and development. To the same end, it is essential to understand the various sources of variation that affect the usefulness of the software with respect to R4: Reusability, Retrievability, Revision and Retention. A more predictive capability, supported by new tools and improved techniques that highlight the practical implications, is to be explored. Proper procedures with statistical significance, backed by graphical evidence, should then be adopted for future developments.

# II. Literature Review

The following literature review summarizes the information drawn from the journal papers selected for the survey.

- This paper investigated a hybrid technique combining modification, minimization and prioritization using a list of source-code changes.
- The paper investigated branch coverage, total statement coverage, additional statement coverage, total fault-exposing-potential (FEP) and additional FEP prioritization, together with the complexity and fault proneness of the requirements.
- The paper investigated a Java-based code-coverage tool for test coverage reporting, which supports testing activities by recording test coverage for variables and code elements and updating the coverage information when the code under test is modified.
- The paper investigated the problems and choice of the fitness metric, characterization of landscape modality, and determination of the most suitable search techniques to apply. It also studied two meta-heuristic search techniques, hill climbing and the genetic algorithm, together with three greedy algorithms: greedy, additional greedy and optimal greedy.
- The paper investigated a new technique for black-box regression testing to improve the effectiveness of fault detection when regression testing is performed in a black-box environment.
- The paper investigated combinatorial interaction testing (CIT), which systematically samples all t-way combinations of input parameters.
- The paper investigated an enhanced Bayesian Network (BN) that integrates different types of information to estimate the probability of each test case finding bugs, introducing a feedback mechanism and a new change-information-gathering strategy.
- The paper investigated a particle swarm optimisation (PSO) algorithm to prioritize test cases automatically based on the modified software.
- The paper investigated a historical-value-based approach that uses historical information to estimate the current cost and fault severity for cost-cognizant test case prioritization. Functional-coverage test case prioritization was also discussed.
- The paper investigated a solution using the Six Sigma methodology to support quantitative analysis of the problem and evaluation of the developed solutions.
- The paper investigated several model-based test prioritization heuristics; the study suggests that system models may improve the effectiveness of prioritization with respect to early fault detection.
- The paper investigated quota-constrained test case prioritization for service-centric systems (SCSs) and proposed a quota-constrained strategy to maximize testing-requirement coverage.
- The paper investigated the rates of severe fault detection for both regression testing and non-regression testing.
- The paper investigated extending prioritization methods to parallel scenarios, defining prioritization in such scenarios and applying task-scheduling methods to the algorithms to help partition a test suite into multiple prioritized subsets.
- The paper investigated a model for system-level TCP derived from the software requirement specification to improve user satisfaction, which can cost-effectively improve the rate of severe fault detection.
- The paper investigated system-based modelling as a widely applicable technique for modelling state-based systems, and compared code-based test prioritization with model-based test prioritization.
- The paper investigated whether regression tests are effective in reducing residual defects across the system's lifetime. The proposed heuristics with feedback techniques were effective in reducing the occurrence of residual defects.
- The paper investigated heuristic techniques, namely conventional code coverage and a Bayesian network, to determine the relative cost-benefit of prioritization with respect to a baseline technique, and introduced partial prioritization to lower the analysis costs.
- The paper investigated five search algorithms, namely total greedy, additional greedy, 2-optimal greedy, hill climbing and genetic algorithms. The ratio of overlapping is the criterion for preferring the choices.
- The paper investigated five different location-based services with five different quantitative metrics; POI-aware prioritization was shown to be better than random ordering or input-guided prioritization.
- The paper investigated dependence-analysis-based TCP, analyzing the dependence relationships using the control- and data-flow information in WS-BPEL to describe the service composition, with a weighted dependence propagation model to facilitate the prioritization process.
- The paper investigated a method to measure distance using coverage information; the proposed method enabled adaptive random testing (ART) to be applied to all kinds of programs and reduced the number of test cases needed to detect the first failure.
- The paper investigated a CA-model-based testing approach supporting black-box testing, minimizing costs by tracking model changes at edit time, recording change time-stamps, and combining specification-based concerns with the model changes.
- The paper investigated model-based test prioritization using activity diagrams to identify the differences between the original model and the modified model. It draws the paths for each test case and identifies the most promising paths. Compared with the code-based approach, the presented approach provides the most beneficial path from an activity diagram.
- The paper investigated a cost-cognizant TCP based on historical records, for which a genetic algorithm was proposed. The proposed technique avoids situations where test cases and fault severities are considered without analyzing the source code, and improves the prioritization performance.
- The paper investigated an optimization of the regression-testing activity by adopting a test case prioritization called Failure Pursuit Sampling. By using the test information available from previous versions, the technique could be driven to achieve higher rates of improved efficiency.
- The paper investigated a TCP using the sequences in XML messages to reorder regression test cases for composite web services, as against tag-based techniques. Sequence coverage derived from the input and output messages associated with the regression test suites was proposed.
- The paper investigated the current manual processes as well as the effects of the proposed new methods; the study was conducted at Sony Ericsson Mobile Communications. The success rate was comparable with the other techniques.
- The paper investigated a quantitative evaluation indicating the possibility of improving efficiency, with a qualitative evaluation supporting the general principles of history-based testing. Construct validity, internal validity, external validity and reliability were checked.
- The paper investigated a component-based software system (CBSS) in which state changes were converted into a component interaction graph (CIG) to describe the interrelation among the components. Two criteria, the maximum number of state changes and whether database access occurred in the test cases, were used to determine the TCP.
- The paper investigated a new metric, APFDD. A comparison between prioritized and non-prioritized test cases was made; the prioritized cases were more effective.
- The paper investigated the challenges arising from the limited testability of external services and how to extend traditional regression testing. Possible approaches to online/offline testing, detection of changes in the services, test case selection, minimization and prioritization, and the definition of test oracles were discussed.
- The paper investigated the consideration of cost-based and value-based objectives under multi-objective regression test optimization (MORTO) constraints, arguing that the MORTO approach is long overdue.
- The paper investigated an in-process, most up-to-date test suite for reordering test cases; dynamic prioritization could generate the up-to-date TCP.
- The paper investigated the artefacts used in model-based test generation from state machines, which allowed the regression test execution to be reduced to 80% in some scenarios.
- The paper investigated a new approach using information retrieval to match the service-change description with the code portions exercised by the relevant test cases. Only specific combinations of input/output channels were affected by a specific service change.
- The paper investigated a TCP algorithm with a fitness function based on average block coverage to quantify the possibility of finding errors. The algorithm, based on baseline testing, was considered in finding the rate of test-sequence errors.
- The paper investigated database regression testing (DART) for the functional black-box regression testing of complex legacy database applications, with full integration of DART into the daily test operations of the projects, and predictive testing.
- The paper investigated a model for regression testing in SaaS to abstract the events, with a case study to validate the approach. The failures uncovered by this methodology were not identified by the earlier methods.
- The paper investigated the fault localisation problem, focusing on CIT techniques in experiments on FLEX and MAKE, and provided a framework evaluated through empirical studies.
- The paper investigated a new equation for the historical effectiveness of test cases in fault detection. This new approach considered the time constraints of executing a fraction of the prioritized test suite.
- The paper investigated JUPTA, an approach for prioritizing JUnit test cases in the absence of coverage information. JUPTA-T and JUPTA-A outperform the untreated orderings.
- The paper investigated an examination of configurable software systems driven not only by fault detection but also by the cost of configuration and the set-up time of moving between different configurations. In this new light, the actual time to run the same number of configurations varies greatly depending on the order in which they run.
- The paper investigated a formulation of new test case prioritization strategies that use the tags embedded in XML messages to reorder regression test cases and that use the interface specifications of the services to identify revealing test cases. WSDL information facilitates effective regression testing; the empirical results showed that the techniques used are effective.
- The paper investigated an Eclipse IDE plug-in for managing JUnit test cases, adopted to manipulate the test cases through the GUI and to bring coverage-based techniques into real-world software development.
- The paper investigated a set of ART prioritization techniques guided by white-box coverage information. The branch-level techniques were comparable to the statement-level ones, and both proved more effective than the function-level techniques. ART-br-maxmin prioritization is a good candidate for practical use.
- The paper investigated a suite of metrics and used them to demonstrate input-guided and point-of-interest (POI)-aware test case prioritization techniques. The performance of the POI-aware techniques is more stable, and cdist is the most effective and stable technique.
- The paper investigated case retrieval, re-use, solution testing and learning, using prioritization strategies that included general, specific, general-ignore, additional general-ignore, random prioritization and no prioritization.
- The paper investigated the impact of test oracles on the effectiveness of testing, finding an improvement in the rate of fault detection relative to both random and structural-coverage-based prioritization when applied to faulty versions of three synchronous reactive systems. The results showcased the potential for oracle-centric prioritization to improve on coverage-based approaches.
- The paper investigated a two-level prioritization approach using a functionality dependency graph (FDG) and an inter-procedural control graph (IFG).
- The paper investigated three hybrid combinations, Rank, Merge and Choice, and demonstrated their usefulness in two ways. The time-aware prioritization techniques outperformed the other prioritization techniques.
- The paper investigated a new methodology using module-based test case prioritization, which was found to be more effective than whole-program TCP. The major work was based on fault coverage.
- The paper investigated the earlier page flow diagrams (PFD) and path test trees (PTT), showcasing the reusability of black-box-generated test paths for the white-box testing of websites.
- The paper investigated a genetic algorithm for improving the prioritization of test suites through a new fitness function that considers the weights of the test cases, fault severity, fault rates and the number of structural coverage items covered by each test case. A fully automated TCP for the whole process was quite achievable.
- The paper investigated a requirements-based clustering approach that incorporates traditional code-analysis information.
- The paper investigated a refactoring-based approach for selecting and prioritizing regression test cases.
- The paper investigated TCP with the use of model checkers and introduced a new property-based prioritization. Several critical embedded systems were illustrated, and the techniques were based on functional models of the programs. The model checkers do not pose any problems to the prioritization.
- The paper investigated a history-based TCP combined with source-code information, describing a version-aware approach for the detection of faults.
- The paper investigated a unified view, basic and extended, of the generic strategies in TCP. Many strategies lying between the total and the additional strategies were more effective than either of those strategies.
- The paper investigated an adaptive TCP that combines the test case prioritization process and the test case execution process. The adaptive approach was more significant than the total approach and more competitive than the additional approach.
- The paper investigated two heuristic methods for prioritizing a variable-strength interaction test suite. Random prioritization had the smallest NAPFD metric values.
- The paper investigated a Fuzzy Expert System to aid the decision-making process for a particular software version; this method also proved effective in addressing the limitations of the other prioritization strategies. A tool called TRPAUTOREPAIR was implemented.
- The paper investigated ROCKET, a prioritization technique for the continuous regression testing of industrial video-conferencing software; the results revealed 30% more faults for the first 20% of the test suite executed.
- The paper investigated a hybrid TCP technique based on risk exposure to facilitate the achievement of a quality product.

# III. Methodology

Generally, sources of information are divided into primary and secondary sources. For this survey, the information was collected from IEEE journals.

# IV. Analysis

a) The information for seeded and non-seeded fault detection could be classified into code-based and non-code-based, and coverage-based and non-coverage-based, prioritization information.
b) The analysis software used were SAS and SPSS.
c) The metrics that were used are tabulated in Table 1.
d) The TCP techniques that were discussed are tabulated in Table 2.
e) Other strategies that were discussed are listed in Table 3.
f) The algorithms that were discussed are listed in Table 4; a minimal sketch of the greedy strategies appears at the end of this section.
g) The tools that were used are listed in Table 5.
h) Some of the major participating companies discussed are listed in Table 6.
i) Some of the major software packages discussed are listed in Table 7.

The major modelling systems that were discussed were EFSM and EVOMO.
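To make the greedy strategies named in Table 4 and item f) concrete, the following is a minimal Python sketch of total versus additional greedy coverage-based prioritization. The coverage matrix, test names and the reset-on-exhaustion behaviour are illustrative assumptions, not taken from any surveyed paper.

```python
# Minimal sketch of total vs. additional greedy coverage-based TCP.
# Coverage data below is hypothetical, for illustration only.

def total_greedy(coverage):
    """Order tests by total number of covered items (stable on ties)."""
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def additional_greedy(coverage):
    """Repeatedly pick the test covering the most not-yet-covered items."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Choose the test with the largest additional (new) coverage.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
        if all(len(s - covered) == 0 for s in remaining.values()):
            # No remaining test adds new coverage: reset and continue,
            # as the additional strategy conventionally does.
            covered = set()
    return order

if __name__ == "__main__":
    # Hypothetical statement-coverage sets per test case.
    coverage = {
        "t1": {1, 2, 3, 4},
        "t2": {3, 4, 5},
        "t3": {5, 6},
        "t4": {1, 6},
    }
    print(total_greedy(coverage))       # ['t1', 't2', 't3', 't4']
    print(additional_greedy(coverage))  # ['t1', 't3', 't2', 't4']
```

The total strategy ranks each test once by raw coverage, whereas the additional strategy re-evaluates after every pick, which is why the two orderings diverge from the second position onward.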
# V. Summarization & Discussion

Any general or version-specific TCP could be carried out with the aid of this survey analysis, provided that there exists statistical significance in the form of graphical evidence. The same also holds for both controlled and non-controlled TCP. However, there should also be an insight into the cost factors, and the benefits of the various parameters should be considered separately. Software testing amounts to almost 50% of the total development cost, and path testing by itself can detect up to 65% of the errors in the software.

# VI. Conclusion

It is concluded from this survey of ninety IEEE papers that a fully automatic, modular, historical-information-based TCP should be developed, and that research should be oriented towards genetic algorithms with a major focus on residual defects as well. The metric used for effectiveness testing could be APFDD, and in the end a fitness function could be incorporated; an illustrative sketch of such a fitness function is given below. The tentative title for this academic research could be "An auto-TCP with the stat-comp regression testing". More work could be carried out on Fuzzy Expert Systems (FES) and Adaptive Random Testing (ART) for cost-effective decision-making, incorporating the study of residual defects in regression testing. An Analytical Hierarchy Process (AHP) may be employed for prioritizing the regression-testing process. Multi-Objective Regression Test Optimization (MORTO) could be incorporated, along with the fitness function, for proper results. The philosophies, theory, axioms, principles, practices and adopted formulae are to be combined properly in the studies for effective implications. At each and every step, the methods adopted for development should possess both industrial and institutional applications for structured, semi-structured and unstructured problems and the solutions, resolutions and dissolutions that might be obtained from time to time. Finally, before implications of any type, care should be taken that the 6 W's and 2 H's are satisfied. Thus, this work would try to produce a complement to the existing techniques, in order to produce modern ones with additional benefits to compare.
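As an illustration of the fitness-function idea raised above, the following is a minimal Python sketch of a severity- and coverage-weighted fitness function for a GA-based TCP, in the spirit of the genetic-algorithm papers surveyed. The weights, severity values and coverage values are hypothetical assumptions, not taken from any surveyed paper.

```python
# Hypothetical fitness function for GA-based test case prioritization.
# An ordering scores higher when tests with high fault severity and
# high coverage appear earlier; weights are illustrative assumptions.

def fitness(order, severity, coverage, w_sev=0.6, w_cov=0.4):
    """Score a test ordering: earlier positions carry larger weight."""
    n = len(order)
    score = 0.0
    for pos, test in enumerate(order):
        position_weight = (n - pos) / n  # 1.0 for first test, 1/n for last
        score += position_weight * (w_sev * severity[test] +
                                    w_cov * coverage[test])
    return score

if __name__ == "__main__":
    severity = {"t1": 0.9, "t2": 0.4, "t3": 0.7}  # hypothetical values
    coverage = {"t1": 0.5, "t2": 0.8, "t3": 0.3}  # hypothetical values
    print(fitness(["t1", "t3", "t2"], severity, coverage))
```

A genetic algorithm would then search over permutations of the test suite under this fitness, for example with order-preserving crossover and swap mutation.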
Table 1: Metrics discussed in the surveyed papers

APFD, APFDc, APFDD, NAPFD, APBC, APDC, APSC, WPFD, TPFD, TSFD, ASFD, AFMC, VarPcov, Entropy, Cdist, Pdist

[APFD: Average Percentage of Faults Detected; APFDc: Average Percentage of Faults Detected per Cost; APFDD: Average Percentage of Faults Dependency Detected; NAPFD: Normalized Average Percentage of Faults Detected; APBC: Average Percentage of Block Coverage; APDC: Average Percentage of Decision Coverage; APSC: Average Percentage of Statement Coverage; WPFD: Weighted Percentage of Faults Detected; TPFD: Total Percentage of Faults Detected; TSFD: Total Severity of Faults Detected; ASFD: Average Severity of Faults Detected; AFMC: Average Percentage of Fault-Affected Modules Cleared per Test Case.]

Table 2: TCP techniques discussed

No prioritization; random prioritization; optimal prioritization; total branch prioritization; additional branch prioritization; fault-exposing-potential (FEP) prioritization; additional FEP prioritization; total statement coverage prioritization; additional statement coverage prioritization; additional function coverage prioritization; additional fault index (FI) prioritization; total FI with FEP prioritization; additional FI with FEP prioritization; total diff prioritization; additional diff prioritization; total method coverage prioritization; additional method coverage prioritization; Srivastava & Thiagarajan prioritization; total CC prioritization; additional CC prioritization; total BN prioritization; additional BN prioritization; untreated prioritization; method-total prioritization; method-additional prioritization; total testability-based prioritization; additional testability-based prioritization; POI-aware structural coverage prioritization.

Table 3: Other strategies discussed

QCTB, QCAB, TB, AB

[QCTB: Quota-Constrained Total Branch; QCAB: Quota-Constrained Additional Branch; TB: Traditional Branch; AB: Additional Branch]

Table 4: Algorithms discussed

Greedy algorithms, additional greedy algorithms, 2-optimal algorithms, hill climbing algorithms, genetic algorithms.

Table 5: Tools discussed

TIM, ATEI, TTEI, DART

[TIM: Testing Importance of the Module; ATEI: Average Test Effort Index; TTEI: Total Test Effort Index; DART: Database Regression Testing]

Table 6: Major participating companies discussed

Sony Ericsson Mobile Communications, Siemens, SaaS, E-Bay, Google.

Table 7: Analysis software discussed

SAS, SPSS

[SAS: Statistical Analysis System; SPSS: Statistical Package for the Social Sciences]
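For reference, since Table 1 names APFD but its formula does not survive in the extracted tables, the standard definition from the TCP literature is:

$$\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n\,m} + \frac{1}{2n}$$

where $n$ is the number of test cases in the suite, $m$ is the number of faults, and $TF_i$ is the position, in the prioritized ordering, of the first test case that reveals fault $i$. Higher APFD values indicate faster fault detection; the variants in Table 1 (APFDc, APFDD, NAPFD) adjust this basic form for cost, fault dependency and normalization respectively.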