
Software Component Testing Strategies

Adrita Bhor

Dept. of Information and Computer Science

University of California, Irvine

Technical Report UCI-ICS-02-06, June 2001

Abstract

The advent of component-based software systems has brought with it the problem of testing such systems. This paper presents these problems in detail, along with a comparative and evaluative study of the existing approaches to testing component-based software. Some suggestions on how to improve the existing techniques are offered at the end.

1. Introduction

Software development styles have changed many times over the past few decades, catering to the needs of the era they represented. With increasing pressure on time and money, the concept of component-based software development originated. In this method, parts of the software project are outsourced to other development organizations and, finally, the third-party components (Commercial-off-the-shelf, or COTS) are integrated to form a software system.

A software component is defined as "a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties"[1].

A challenge in efficient component development is that component granularity and mutual dependencies have to be controlled right from the early stages of the development life cycle. Among the greatest problems with component technology are fault isolation of individual components in the system and coming up with efficient test strategies for the integrated modules that use these third-party components.

Software components enable practical reuse of software parts and amortization of investments over multiple applications. Each part or component is well defined and independently deployable. Composition is the key technique by which systems of software components are constructed [10].

Some of the component characteristics which are relevant during their testing [16]:

  • Component Observability: This defines the ease with which a component can be observed in terms of its operational behaviors, input parameters and outputs. The design and definition of a component interface thus play a major role in determining the component's observability.

  • Component Traceability: This is the capacity of the component to track the status of its attributes and behavior. Behavior traceability is the ability of the component to facilitate the tracking of its internal and external behaviors; trace controllability is the ability of the component to facilitate the customization of its tracking functions.

  • Component Controllability: This shows the ease of controlling a component's inputs/outputs, operations and behaviors.

  • Component Understandability: This shows how much component information is provided and how well it is presented.

The next section presents the various difficulties and challenges faced in testing component-based software. Following it is the detailed evaluation of some of the present testing strategies both in the academic research world and in the industrial realm. The analysis tries to categorize and compare the various techniques in a tabular way.

2. Testing software components

2.1 When to test a component

One of the first issues in testing software components is whether all that effort is required in the first place. When is it worthwhile to test a component in a system? If the consequences of the component not working outweigh the effort needed to test it, then plans should be made to test that component [10].

2.2 Which components to test

When risk classification of the use cases is mapped onto components, we find that not all components need to be tested to the same coverage level [10].

  • Reusable components - Components intended for reuse should be tested over a wider range of values.

  • Domain components - Components that represent significant domain concepts should be tested both for correctness and for the faithfulness of the representation.

  • Commercial components - Components that will be sold as individual products should be tested not only as reusable components but also as potential sources of liability.

2.3 The ultimate goal of testing

Testing a software component is basically done to resolve the following issues:

  • Check whether the component meets its specification and fulfills its functional requirements.

  • Check whether the correct and complete structural and interaction requirements, specified before the development of the component, are reflected in the implemented software system.

2.4 Problems in testing software components

The focus now shifts to the most important problem of component software technology, i.e., the problem of coming up with efficient testing strategies for component-integrated software systems.

2.4.1 Building reusable component tests

Current software development teams use an ad-hoc approach to create component test suites. It is also difficult to come up with a uniform and consistent test suite technology that caters to the different requirements, such as the different information formats, repository technologies, database schemas and test access interfaces of the test tools, for testing such diverse software components. With the increasing use of software components, the tests used for these components should also be reused [25]. The development of systematic tools and methods is required to set up these reusable test suites and to organize, manage and store various component test resources, such as test data and test scripts.

2.4.2 Constructing testable components

The definition of an ideal software component says that the component is not only executable and deployable but also testable using a standard set of component test facilities. Designing such components is difficult because they must have a specialized, well-defined test architecture model and built-in test interfaces to support their interactions with component test suites and test-beds.

2.4.3 Building a generic and reusable test bed

It is very difficult to develop a testing tool or test-bed technology capable of testing a system whose components use more than one implementation language or technology.

2.4.4 Constructing component test drivers and stubs

The traditional way of constructing test drivers and test stubs is to create them so that they work for a specific project. But with the advent of the component world and of systems using reusable third-party components, such traditional constructions no longer work, because they cannot cope efficiently with diverse software components and their customizable functions.

2.4.5 The great divide

One of the first ways to look at the different issues in component testing is to divide the component domain into the component producer and the component user or consumer [21]. Both these parties have different knowledge, understanding and visibility of the component. The component developer has the whole source code of the component whereas the component user frequently looks for more information to effectively evaluate, analyze, deploy, test and customize the component.

Testing for the component producer becomes extremely complicated because of the very varied applicability domain of the component. The more reusable a component is, the wider its range of applicability will be; therefore the testing needs to be done in a context-independent manner. This is also called the Framework Design Problem [19]: abstracting the acquired domain knowledge to engineer plug-compatible components for new applications and test them effectively. Assumptions are made to get around the problem of not knowing the future execution context of the component. Since these assumptions are neither explicit nor methodological, they lead to architectural mismatch for COTS component users [27]. This is more a methodological issue than a technical one. Finally, the component producer should build mechanisms into the component so that faults related to the component in the user application can be revealed easily.

From the component user's perspective, the biggest problem is the absence of source code for testing the component in the system. Traditional testing techniques such as data-flow testing, control-dependence calculations, or alias analysis require the source code of the software system under test. The second issue is that even if the source code of the component is available, the component and the user application may be implemented in different languages. Finally, in order to obtain the highest test coverage, the component user should be able to identify the precise portion of component functionality used in the application, which is again a difficult task. The adequacy criterion of a test suite as defined in [20] will not be met in circumstances where such identification is not done prior to testing.

2.4.6 System Testing versus Unit Testing

Finally, it is worth mentioning that, unlike in traditional software systems, no amount of unit testing on the part of the component producer will really help in deciding the final working of the same component in the user's system [2]. This is mostly because of the variability of the user's application domain and the component producer's lack of foresight about the working of the component under different functional customizations. At the system level, important interactions between the components have to be considered. Therefore a very strong system integration test plan on the part of the component user is absolutely necessary. Integrating components into the system based only on the individual component reliability figures provided by the component producer is not enough.

Additional issues with system testing are Redundant Testing and Fault-Tolerance Testing [22]. In redundant testing, the test adequacy criteria used during the unit testing of components get used again at the system-level testing of the same components, so a lot of time is wasted in testing the same things over and over. In fault-tolerance testing, the fault-handling code (usually written with the component) rarely gets executed in the test-bed because the faults are rarely triggered. This is also called Optimistic Inaccuracy [26]. Since the ability of the system to perform well depends on its effective handling of faults, ways have to be developed so that the fault-tolerant code in the component is always tested.

3. The various Testing strategies

This section deals with the various component-testing strategies suggested by both the academic and the industrial research worlds.

3.1 Certification of components

A good certification methodology [3] gives the component the reliability that it deserves. This process tries to certify that the component meets the developer's needs and that it is of high quality, with known impacts on a given system.

Method Description:

The following quality assessment techniques are applied in order to determine whether a component is suitable for a system:

  • Step 1: Black-box component testing

This kind of testing is concerned with the selection of test cases without actually considering the software's internal structure. Black-box testing requires an executable component, an input and an oracle. The oracle is a technique for determining whether a failure has occurred by examining the output for each input. Black-box testing matches expected results with actual results; it does not rely on the internal workings of the software under test. It is also known as functional testing or interface testing.

The methodology proposed uses black-box testing based on the system's operational profile. The operational profile is a probability distribution over the test cases that reflects how the software will be exercised in actual use; it is used to determine the quality of components that can execute on their own.
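A minimal sketch of black-box testing against an operational profile, using hypothetical names (`reciprocal_component`, `oracle`, `draw_input` are illustrative inventions, not from the certification paper): the component is exercised only through its interface, and the oracle decides pass/fail from each input/output pair.

```python
import random

# Hypothetical component under test: only its interface is visible.
def reciprocal_component(x):
    return 1.0 / x

# Oracle: decides whether a failure occurred by examining the output.
def oracle(x, output):
    return abs(output * x - 1.0) < 1e-9

# Operational profile: small inputs are far more likely in actual use.
def draw_input(rng):
    return rng.choice([1, 2, 4, 8]) if rng.random() < 0.9 else rng.uniform(10, 1000)

def black_box_test(component, oracle, trials=100, seed=0):
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = draw_input(rng)
        if not oracle(x, component(x)):
            failures.append(x)
    return failures

print(black_box_test(reciprocal_component, oracle))  # → []
```

The component's source is never consulted; only the oracle and the input distribution drive the verdict.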

  • Step 2: System-level fault injection techniques

This is a method of testing the whole system by introducing bugs/faults into it. It does not actually find bugs in the system; rather, it tries to show how badly the system will behave if a certain component fails.

The certification methodology uses a fault injection technique called Interface Propagation Analysis (IPA). IPA needs to know which component failure modes to inject and which system failure modes to expect. Accordingly, IPA perturbs or corrupts the states propagated through the interfaces between the components: a small software routine called a perturbation function replaces the original state with a corrupt state during system execution. System failure modes include faulty system output data, faulty global system data and corrupted data flow between components.
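The IPA idea can be sketched as follows; this is a toy two-component pipeline of my own invention (`parse`, `summarize` and `perturb` are hypothetical), showing a perturbation function corrupting the state passed across the interface and the downstream component's fault handling being exercised.

```python
# Hypothetical two-component pipeline: parse -> summarize.
def parse(raw):
    return [int(t) for t in raw.split(",")]

def summarize(values):
    # Fault-handling code in the consumer component.
    if not all(isinstance(v, int) for v in values):
        return "ERROR: corrupt input"
    return sum(values)

# Perturbation function: replaces the propagated state with a corrupt one.
def perturb(state):
    return state + [None]  # inject a corrupt element at the interface

def run_system(raw, inject_fault=False):
    state = parse(raw)
    if inject_fault:
        state = perturb(state)   # IPA corrupts the interface state
    return summarize(state)

print(run_system("1,2,3"))                     # → 6
print(run_system("1,2,3", inject_fault=True))  # → ERROR: corrupt input
```

The injected fault never needs to exist in `parse`; the point is to observe how the rest of the system behaves when the interface state is corrupted.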

  • Step 3: Operational system testing (OST)

This method is complementary to system-level fault injection because OST checks the system's tolerance and how well the system functions when an off-the-shelf component is introduced. It executes the original states without any modification of the output information.

  • Step 4: Defense building step

Wrappers: The concept of software wrappers arises from the need to use components based on their availability rather than their reliability. Software wrappers work by limiting the functionality of the component in certain desirable ways. Wrappers do not modify the source code of the component but indirectly limit its functionality in a non-invasive manner. Two types of wrappers exist: one checks and limits the inputs to the component, and the other captures and checks the output before it is released to the system.
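Both wrapper types can be sketched without touching the component's source; `split_evenly` and the two wrapper classes below are hypothetical illustrations, not an API from the certification paper.

```python
# Hypothetical third-party component: divides a total across n parts.
def split_evenly(total, parts):
    return total / parts

class InputWrapper:
    """Checks and limits the inputs before they reach the component."""
    def __init__(self, component):
        self.component = component
    def __call__(self, total, parts):
        if parts <= 0:
            raise ValueError("rejected by input wrapper: parts must be positive")
        return self.component(total, parts)

class OutputWrapper:
    """Captures and checks the output before it is released to the system."""
    def __init__(self, component):
        self.component = component
    def __call__(self, *args):
        result = self.component(*args)
        if result < 0:
            raise ValueError("rejected by output wrapper: negative share")
        return result

# The component is wrapped, never modified.
safe_split = OutputWrapper(InputWrapper(split_evenly))
print(safe_split(10, 4))  # → 2.5
```

As the evaluation below notes, such wrappers are only as good as the checks a human writes into them.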

Method Evaluation:

  • Since black-box testing is not performed at the system level, a very insightful test suite, one that considers the probable behaviors of the component at the system level, needs to be generated for certifying reliability.

  • Black-box testing sometimes fails to execute a large portion of the code, which then gives rise to a poor fault detection mechanism (described in Section 2.4.6) during the testing of the system.

  • Serious problems such as Trojan horses or race conditions might not be detected by black-box testing.

  • The cost of developing accurate oracles, generating inputs and building test drivers is very high.

  • System level fault injection technique can be categorized as a worst case testing technique because it concentrates on the robustness of the system by injecting faults.

  • Operational system testing is good when it detects an error in the system, thereby showing the system's tolerance. But there may be plenty of cases where Trojan horses and the like will not be detected. An enormous amount of system-level testing will be required in order to make the system handle real component failures. This produces optimistic inaccuracy [26] and thus gives no assurance for the fault detection problem mentioned in Section 2.4.6.

  • Wrappers are not foolproof. Illegal inputs can sometimes fool a wrapper. Human error in designing or implementing the wrapper reduces its value.

3.2 The Component Metadata way

The approach [5] provides a framework which uses Summary Information [21] (called Component Metadata) to analyze and test components. The metadata is based on different kinds of information depending upon the specific context and needs. There is a unique format and a unique tag for each kind of metadata provided. The component producer embeds this summary information in the software component.

Method Description

The following steps are followed in this approach:

  • The component provider is supposed to gather metadata for the components using analysis techniques. This metadata is then used by the component users, who then don't need any source code. The component user is allowed to query the information from the component.

  • The input space of each operation is divided into a set of sub-domains, and the summary information is associated with each sub-domain.

  • The inputs of the operation are mapped using a developed tool and the summary information of the component is used after querying to test the component in the system.
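The steps above can be sketched as a component carrying producer-supplied, tagged metadata that the user queries without source code; `MeteredStack` and the tag names are hypothetical stand-ins, and the paper's actual metadata format is richer than a dictionary.

```python
# Hypothetical component carrying producer-supplied metadata, keyed by
# a unique tag per kind of information, queryable without source code.
class MeteredStack:
    _metadata = {
        "spec/invariant": "size() >= 0 at all times",
        "testing/subdomains": {"push": ["empty stack", "non-empty stack"],
                               "pop": ["singleton stack", "larger stack"]},
    }

    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

    @classmethod
    def query_metadata(cls, tag):
        """Query facility offered to the component user."""
        return cls._metadata.get(tag)

# The user selects test sub-domains from the metadata, no source needed.
print(MeteredStack.query_metadata("testing/subdomains")["pop"])
# → ['singleton stack', 'larger stack']
```

The user's test generator reads the sub-domains and picks at least one input per sub-domain, which is what makes the analysis more precise than pure black-box testing.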

Method Evaluation

The metadata describes both the static and the dynamic aspects of the component.

  • Increases the precision of the program analyses.

  • Metadata can change according to the particular functionality required by the component user showing the flexibility of the stored information approach.

  • Metadata fully allows for the customization of components.

  • This approach provides suitable query facilities for the user so that testing can be carried out in a systematic and convenient manner.

  • This approach requires the development of a standard notation for the metadata. Establishing a standardized format for the attachment of metadata, to be followed by all third-party component producers, is a difficult task.

  • The scalability of this approach has not been discussed in the paper. So far it has only been tested with small programs.

3.3 CTB – Component Test Bench

The component test bench (CTB) [6] framework addresses the issue of building testing tools for component-based software engineering (CBSE). It allows the component developer to design and run verification tests. CTB avoids the need to write code for test harnesses by providing a generic harness in the test pattern verifier module: developers specify tests, which are stored in standard XML documents and run by the test pattern verifier.

Method Description

  • A test specification is prepared, which describes the component implementations, their implementation interfaces and the test sets that are appropriate for an interface.

  • A sequence of steps, called a test operation, is created for the execution of an individual test. Test operations target a method in the component; in other words, a test operation is a sequence of method calls.

  • Test operations are labeled with the version of the specification for which they were generated.

  • The test operations are executed using the authors' tool, the IRTB (instrumented runtime system), or they can be run on any standard Java virtual machine, or by compiling and executing them in C or C++.

  • The results obtained are then classified into categories: Specified, Strong Accept, Weak Accept, Pending, Intermediate and Unknown.
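The flavor of the CTB workflow can be sketched as follows; the XML schema, the `Counter` component and the verifier are all simplified inventions (the real CTB specification format and result categories are richer), but the shape is the same: a test operation is a sequence of method calls stored in XML and interpreted by a generic verifier.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML test specification in the spirit of CTB: a test
# operation is a sequence of method calls on the component under test.
SPEC = """
<testSpec component="Counter" version="1.0">
  <testOperation name="incr_twice">
    <call method="increment"/>
    <call method="increment"/>
    <expect method="value" result="2"/>
  </testOperation>
</testSpec>
"""

class Counter:                       # component under test
    def __init__(self): self._n = 0
    def increment(self): self._n += 1
    def value(self): return self._n

def run_test_operation(component_cls, spec_xml):
    """Generic test pattern verifier: interprets the XML, no per-test code."""
    root = ET.fromstring(spec_xml)
    results = {}
    for op in root.iter("testOperation"):
        obj = component_cls()
        verdict = "Strong Accept"
        for step in op:
            if step.tag == "call":
                getattr(obj, step.get("method"))()
            elif step.tag == "expect":
                actual = getattr(obj, step.get("method"))()
                if str(actual) != step.get("result"):
                    verdict = "Unknown"
        results[op.get("name")] = verdict
    return results

print(run_test_operation(Counter, SPEC))  # → {'incr_twice': 'Strong Accept'}
```

Because the specification lives in XML rather than in harness code, the same document can drive verification on any runtime that hosts the component.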

Method evaluation

  • This approach provides a means for developers to generate tests in the first place, for users to verify that components function correctly in some target environment, and for both developers and users to run regression tests when components are updated. It provides a mixture of techniques for the generation of tests, viz. manual, computer-aided and automatic, which increases the flexibility of the system.

  • By using XML for test specification and Java as the test pattern verifier, the authors ensure a high degree of portability. So the specifications will not be tool specific.

  • By integrating the test generation and test execution processes, CTB allows the specification to be used to create basic test cases while, at the same time, written code is analyzed to identify further tests.

  • The system allows incremental development. The users are allowed to use the symbolic executor and store the test operation data without running test operations.

  • The symbolic execution technique used in CTB is not new and has many inherent weaknesses, notably its inability to handle array indexing within methods and its slowness.

  • The test operation allows a wide variety of test environments to be created in a standard way.

  • Version management of tests is available, which shows a systematic approach.

3.4 UML based test model for Component Integration Testing

This test model [7] uses UML Sequence and Collaboration Diagrams to extract the faults existing between the component interfaces interacting with each other in the system. It links the UML based development process to the test process. The UML test models consist of the Node, which represents the integration target and the Message Flow, which shows the interaction between the nodes.

Method Description

The UML Development phase uses Rational Objectory Process [28] in four phases of inception, elaboration, construction and transition to build the UML models. The component integration testing is carried out in the following steps:

  • Building of the UML test model based on one flow of events. First, the sequence diagrams for the normal and abnormal flows of events are extracted. Then the collaboration diagram for each flow of events is generated, in the case of concurrent events. Finally, the collaboration and sequence diagrams are divided into ASF (Automatic System Function) units based on the message transition patterns in the components.

  • The complete UML test model is drawn with the Node and the Message Flow from the Automatic System Function units.

  • Test Cases are selected by application of the test case selection criteria on the UML test model.

Method Evaluation

  • This approach is based on UML Collaboration and Sequence Diagrams so the process is built using standard notations, which are used extensively. This overcomes the issue of learning new notations and languages to understand the approach suggested.

  • The testing technique can be automated but has not yet been implemented.

  • This approach assumes that all the components in the integration target have already been tested individually and thus considers them as black-boxes.

  • The selection of the test criteria is not based on the test adequacy criteria [20]. Doing so would have resulted in better and more relevant test cases.

  • The preparation time for this technique is less according to the authors and it is easy to understand.

3.5 Component Interaction Testing (CIT)

This approach [8] captures the assumptions made by each component about how the other components should interact with it. These assumptions are captured as formal test requirements that govern the selection of test cases.

Method Description

The following steps are followed in this approach to test the components:

  • Creation of the formal mathematical models for the component interactions and formal test requirements for all components. Test requirements specify the subset of possible sequences of interactions with the component. The mathematical model allows concurrency and synchronous communications.

  • Performance of unit testing and creation of unit test cases from the test requirements.

  • Selection of the components to be integrated.

  • Creation of the composed test requirements, and of the integration test cases derived from them.

  • Continued integration of components until the entire system is integrated.
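The composition step above can be sketched very coarsely; the CIT work uses formal models in the FC2 format, whereas the fragment below is only a hypothetical stand-in in which test requirements are plain tuples of interaction events and composition interleaves one sequence from each component while preserving each component's internal order.

```python
from itertools import product

# Hypothetical test requirements: each names a subset of the possible
# sequences of interactions with a component, as tuples of events.
reader_reqs = [("open", "read"), ("open", "read", "close")]
writer_reqs = [("open", "write", "close")]

def compose(reqs_a, reqs_b):
    """Composed test requirements: one representative interleaving per
    ordering of the two components (a coarse stand-in for the formal
    FC2-based composition used by CIT)."""
    composed = []
    for a, b in product(reqs_a, reqs_b):
        composed.append(a + b)   # component A's sequence first
        composed.append(b + a)   # component B's sequence first
    return composed

integration_cases = compose(reader_reqs, writer_reqs)
print(len(integration_cases))  # → 4
```

Each composed requirement then becomes an integration test case, exercising the interaction without re-running the per-component unit tests.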

Method Evaluation

  • Since the mathematical model is based on the FC2 format [29], it allows different verification tools to interoperate.

  • The CIT approach thoroughly exercises complex component interactions without duplicating the unit tests. This eliminates the redundant testing problem mentioned in Section 2.4.6.

  • The method currently focuses on control-oriented interactions; other types of interaction errors are not tested. So it cannot yet be considered a completely reliable testing approach.

  • Scalability is another issue with this approach. Theoretically the model can scale from unit testing to system testing, but as the size of the model increases, it becomes more intractable (the state explosion problem).

  • Interacting components can be integrated in any order. This increases the flexibility of this model.

  • Configuration used during the system level testing can be stored and reused for regression testing.

3.6 Built-in-tests in components (BIT)

Built-in test (BIT) [9] is a new kind of software component in which tests are explicitly described in the source code as member functions, for enhancing software maintainability. Software built with BITs operates in two modes: a normal mode and a maintenance mode. In normal mode, the software behaves like a conventional system; in maintenance mode, the BITs can be activated by calling them as member functions of the corresponding components.

Method Description

  • The standard constructor and destructor functions of a component are extended to incorporate the reusable BITs in an object (test generation).

  • BITs are inactive in normal mode and can be activated in the test/maintenance mode.

  • When BITs are executed, testing results are automatically reported as:

  • TestResult1 = BIT1 OK or

  • TestResult2 = BIT2 FAILED
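A minimal sketch of a BIT-style component follows; the `Stack` class, the mode flag and the BIT naming are hypothetical (the original work is formulated for C++ constructors/destructors), but it shows the two modes and the self-reported result strings.

```python
class Stack:
    """Component with a built-in test (BIT) as a member function."""
    def __init__(self):
        self._items = []
        self.maintenance_mode = False   # normal mode by default

    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

    # BIT: inactive in normal mode, activated in maintenance mode.
    def bit1_push_pop(self):
        if not self.maintenance_mode:
            return None                 # normal mode: behaves conventionally
        probe = Stack()
        probe.push(42)
        ok = probe.pop() == 42
        return "TestResult1 = BIT1 OK" if ok else "TestResult1 = BIT1 FAILED"

s = Stack()
print(s.bit1_push_pop())        # normal mode: BIT inactive → None
s.maintenance_mode = True
print(s.bit1_push_pop())        # → TestResult1 = BIT1 OK
```

A subclass inherits `bit1_push_pop` unchanged, which is the "inherited tests are instant and self-testable" property noted in the evaluation below.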

Method Evaluation

  • BIT tries to enhance the concept of self-containedness in software systems.

  • BITs have a wide range of applicability, including enhancing software maintainability, reengineering legacy systems for maintainability, and ensuring run-time consistency maintenance. Based on the self-contained testing/maintenance mechanisms of BITs, all corrective, adaptive, perfective, preventive and reengineering maintenance of software can be simplified significantly.

  • BITs fit into an object via C++, Java or any other object-oriented language compiler.

  • The same built-in tests can be extended to class and system objects, so they are basically hierarchical, which increases the scalability of the technique.

  • BITs are reusable in the maintenance phase. The inherited tests are instant and self-testable.

3.7 Parallel Architecture for Component Testing (PACT)

This approach [11] is not a testing technique but rather a software architecture that defines the structure of the test components. The objective of this architecture is to minimize the effort required to build and maintain the individual test cases. The foundation of PACT is a set of abstract classes that provide basic functionality that can be inherited by each concrete test class. The services provided by the abstract classes include common exception handlers and common input and output facilities.

Method Description

PACT specifies two basic patterns for the testing software:

  • There is a class in the test software for every component in the production software judged to be sufficiently significant to be tested independently.

  • For a new production class, its test class inherits from the test class of the production class's superclass. This is the parallel nature of the architecture.

Each test case is divided into three parts. The first segment ensures that the Object Under Test (OUT) is in the appropriate state to begin the test. The second segment is the sequence of messages that constitute the test. The final segment verifies the result and/or logs the information for later examination.

The test script for this test case is very simple. It constructs the object under test (OUT), administers the test and then cleans up by deleting the OUT.
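The three-segment test case and the parallel class structure can be sketched as follows; `Account`, `TestClass` and `AccountTest` are hypothetical names invented for illustration, with the abstract test class supplying the common logging and verification facilities.

```python
class Account:                              # production class
    def __init__(self):
        self.balance = 0
    def deposit(self, amt):
        self.balance += amt

class TestClass:
    """Abstract PACT-style test class: common logging and verification."""
    def __init__(self):
        self.log = []
    def verify(self, name, cond):
        self.log.append((name, "PASS" if cond else "FAIL"))
        return cond

class AccountTest(TestClass):
    """Parallel test class for Account, inheriting common facilities."""
    def test_deposit(self):
        out = Account()              # segment 1: put the OUT in the right state
        out.deposit(50)              # segment 2: messages constituting the test
        ok = self.verify("deposit", out.balance == 50)  # segment 3: verify/log
        del out                      # the script cleans up the OUT
        return ok

print(AccountTest().test_deposit())  # → True
```

If `SavingsAccount` later extends `Account`, a `SavingsAccountTest` extending `AccountTest` inherits `test_deposit` for free, which is the reuse the parallel hierarchy is designed to deliver.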

The steps followed in PACT are:

  • Create a test class for each testable component

  • Test a method in the context of a class

  • Create a baseline test suite

  • Sequence test cases by using test scripts

  • Group test cases

  • Record message outcome for further analysis

  • Handle exceptions

  • Verify results

Method Evaluation

  • In PACT, components are tested at different levels of coverage.

  • The inheritance relationship between test classes and production classes facilitates the reuse of individual test cases.

  • In PACT there is a separation of production and test code. This way, both the production code and the test code remain smaller and less complex. The separation also becomes necessary when the two pieces of code are written by different groups, but it poses problems of synchronization between them.

  • In an iterative development process, the tests should be easy to apply repeatedly across the development iterations. They must also be easy to maintain as the production code changes. The inheritance relationship in an object-oriented language supports the development of code with these characteristics.

3.8 Specifying and Testing components using Assertion Definition Language (ADL)

This is an approach [12] to the unit testing of software components. It uses the specification language ADL, which is particularly well suited for testing, to formally document the intended behavior of software components. A related language, TDD, is used to systematically describe the test data on which the software components will be tested.

Method Description

In this approach, the program is run with many different test inputs in a systematic manner. Correct behavior is determined by examining the results of the program or function against the specification that describes its behavior. The following are required for unit testing:

  • A function to be tested.

  • Test-data on which to execute the function. In this case, test-data is specified through the Test Data Description (TDD) file.

  • A means of determining whether or not the function executed correctly. In this case, assertion-checking functions generated from the ADL specifications handle this task.
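The three ingredients above can be sketched in miniature; `bank_withdraw`, the hand-written `check_withdraw` assertion checker and the test-data lists are hypothetical stand-ins for, respectively, the function under test, the checker that ADL would generate from a specification, and a TDD-style description of the input domains.

```python
# Function to be tested (hypothetical example).
def bank_withdraw(balance, amount):
    return balance - amount

# Assertion checker, standing in for one generated from an ADL spec:
# "result == balance - amount; result >= 0 whenever amount <= balance".
def check_withdraw(balance, amount, result):
    assert result == balance - amount, "postcondition violated"
    if amount <= balance:
        assert result >= 0, "balance invariant violated"

# Test data description: the domains from which inputs are drawn
# (a TDD file would describe these systematically).
tdd_balances = [0, 10, 100]
tdd_amounts = [0, 10]

# Systematic execution: every input combination, checked against the spec.
for b in tdd_balances:
    for a in tdd_amounts:
        check_withdraw(b, a, bank_withdraw(b, a))
print("all ADL-style assertions passed")
```

The division of labor is the point: the spec (checker) and the data description (domains) are written once, and the driver that combines them is entirely generic.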

Method Evaluation


  • The test data selection reduces the redundancy in testing process.

  • The validation process is automated.

3.9 Testing Software Components using Component Interaction Graph (CIG)

This approach [13] detects faults residing in the component interfaces and failures encountered in the interactions among components. A component-based system is said to be "adequately tested" when every interface and exception, every invocation of an interface or raise of an exception, and every context-sensitive and content-sensitive execution path has been exercised at least once. Covering all interfaces and events provides confidence in the basic interactions among components. Components have two types of indirect dependences: control dependence and data dependence. Control dependence relationships explain the interactions of a component-based system from the control-flow perspective. Data dependence shows the flow of data and can provide additional information for generating test cases and detecting faults.

Method description

  • When an interface is invoked, it generates an event.

  • When an event is generated, it invokes an interface of a component.

  • When an event is generated, it will trigger another event in the same or a different component.

  • A Component Interaction Graph (CIG) is used to depict the above relationships among interfaces and events and their direct interactions.

  • An algorithm takes the constructed CIG as input and outputs a set of paths P that need to be tested. This set of paths is generated using depth-first search.
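The steps above can be sketched as follows; the graph contents and the `paths_to_test` function are hypothetical illustrations (the real CIG distinguishes interface and event nodes more carefully), with interfaces `I*` and events `E*` as nodes and direct interactions as edges.

```python
# Hypothetical component interaction graph: interfaces (I*) and events
# (E*) as nodes, direct interactions as edges.
CIG = {
    "I1": ["E1"],            # invoking I1 generates event E1
    "E1": ["I2", "E2"],      # E1 invokes I2 and triggers E2
    "I2": [],
    "E2": ["I3"],            # E2 invokes I3
    "I3": [],
}

def paths_to_test(graph, start):
    """Depth-first enumeration of the interaction paths to exercise."""
    paths = []
    def dfs(node, path):
        path = path + [node]
        if not graph[node]:          # leaf: a complete path to test
            paths.append(path)
        for succ in graph[node]:
            dfs(succ, path)
    dfs(start, [])
    return paths

print(paths_to_test(CIG, "I1"))
# → [['I1', 'E1', 'I2'], ['I1', 'E1', 'E2', 'I3']]
```

Each enumerated path corresponds to one interaction sequence a test case must drive, which is how the all-interfaces/all-events criteria are made concrete.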

Method evaluation

  • The technique utilizes both static and dynamic information to design test cases.

  • The proposed technique provides several criteria for determining test adequacy. Test case selection based on the all-interfaces and all-events criteria is simple and efficient, but it can only provide a certain level of reliability. So, to further improve the quality of the system, the all-context-sensitive/some-context-sensitive-dependences criteria were used.

  • The CIG has been built to handle three component technologies viz. COM/DCOM, CORBA and Enterprise Java Beans (EJB).

  • The method can be applied on all types of component-based systems and does not rely on the knowledge of source code.
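The all-interfaces adequacy criterion mentioned above can be made concrete with a small sketch. The function name, interface labels, and path representation are illustrative assumptions, not from [13]:

```python
# Hypothetical sketch of the "all-interfaces" adequacy criterion: a test
# suite is adequate when every declared interface of the system appears
# in at least one executed interaction path.
def all_interfaces_adequate(interfaces, executed_paths):
    covered = {node for path in executed_paths for node in path}
    return set(interfaces) <= covered

interfaces = {"I1", "I2", "I3"}
suite = [["I1", "E1", "I2"]]
print(all_interfaces_adequate(interfaces, suite))  # False: I3 never exercised
suite.append(["I1", "E1", "E2", "I3"])
print(all_interfaces_adequate(interfaces, suite))  # True: all interfaces covered
```

The all-events criterion works the same way over event nodes; the stronger context-sensitive criteria instead check coverage of whole dependence paths rather than individual nodes.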

3.10 Testing Software Components using Retrospectors

In this approach [14], a Retrospector is packaged with a component to support efficient testing. The Retrospector records the execution and test history of the component and makes this testing information available to software testers. The Retro class in a component is similar to the Introspector class in JavaBeans. A Retro component has three modes: design time, test time and run time.

Method description

  • The Retrospector can either be written by hand or created automatically by attaching a specification, called a Retro-Spec, to the component.

  • After the Retro Component is built, a custom test generator can be used to test the functional and implementation details in the Retro Component.
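A minimal sketch of the recording idea, assuming a simple call-interception design: the class and method names below are hypothetical and do not reflect the actual Retro-Spec format from [14].

```python
import functools

class Retrospector:
    """Hypothetical sketch: wraps a component and records every call, so a
    tester can inspect the usage history without the component's source code."""
    def __init__(self, component, mode="test"):  # design/test/run time modes
        self._component = component
        self.mode = mode
        self.history = []  # (method, args, result) tuples

    def __getattr__(self, name):
        # Forward attribute lookups to the wrapped component, recording
        # each method invocation and its result in the history.
        attr = getattr(self._component, name)
        if not callable(attr):
            return attr
        @functools.wraps(attr)
        def recorded(*args, **kwargs):
            result = attr(*args, **kwargs)
            self.history.append((name, args, result))
            return result
        return recorded

class Stack:  # stand-in for a third-party component under test
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x); return x
    def pop(self): return self._items.pop()

retro = Retrospector(Stack())
retro.push(1); retro.push(2); retro.pop()
print(retro.history)  # invocation history, recoverable without source access
```

A custom test generator in the sense of this approach would consume such a history (plus the Retro-Spec) rather than the component's source.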

Method evaluation

  • Automatic generation of the Retrospector requires the component to be Retro-Spec compatible; in return, the component producer does not have to write the Retrospector by hand.

  • Users of a Retro component have access to source-code coverage analysis even though the source code of the component itself is not available.

  • A Retro component can be used with any software component model, which increases its interoperability.

  • Retro components are lightweight in the final execution code.

  • All Retro components are customizable, giving flexibility to the component user. The component tester can add his or her own "meaningful test case specified by the tester" entry in the Retrospector, thereby using only those functions that are required by the system.

  • The usage patterns in the Retrospectors provide valuable design information to the software developers using the component.

  • The test cases recommended by the Retrospector allow the software component user to develop efficient test cases.

  • Retrospectors can remain active after the software is deployed, continuing to collect actual usage and real interactions with real software users, which supports perpetual testing.

4. Conclusions

Component testing strategies have been developed from both the component producer's side and the component tester's side. Efficient analysis techniques have been applied both to the reliability of individual components and to the reliability of the whole system. After evaluating around a dozen component testing strategies, some concerns still need to be addressed.

  • Firstly, efficient testing strategies need to be developed for domain-specific component software, and the resulting tests can be stored for later reuse.
  • Secondly, if metadata is considered a potential solution to the problem of component testing, then creating a metadata standard will require extensive cooperation and coordination among the various third-party component producers.
  • Thirdly, the reliability of components can be improved by improving the languages used to implement them (for example, Java popularized the use of a garbage collector).
  • Lastly, apart from the automation of test cases, there is a need for sequencing and prioritization of test cases.
  • Additional techniques, such as providing extensive component user manuals and application interface specifications, can be considered as well.
  • The range of test scenarios should be more comprehensive so that components cater to a wide range of usage patterns.

References

[1] Clemens Szyperski, Component Software- Beyond Object Oriented Programming, Addison Wesley, 1997

[2] E.J.Weyuker, Testing Component-Based Software: A Cautionary Tale, IEEE Software, Vol. 15, No. 5, September/October 1998

[3] Jeffrey M. Voas, Certifying Off-the-Shelf Components, IEEE Computer, June 1998

[4] Jeffrey M. Voas, A Defensive Approach to Certifying COTS Software, Technical Report, Reliable Software Technologies Corporation, August 1997

[5] Alessandro Orso, Mary Jean Harrold, David Rosenblum, Component Metadata for Software Engineering Tasks, In Proc. 2nd International Workshop on Engineering Distributed Objects, Davis, CA, November 2000.

[6] Gary A. Bundell, Gareth Lee, John Morris, Kris Parker, A Software Component Verification Tool, In Proceedings of the International Conference on Software Methods and Tools (SMT 2000), 2000

[7] Hoijin Yoon, Byoungju Choi, Jin-Ok Jeon, A UML Based Test Model for Component Integration Test, Workshop on Software Architecture and component (WSAC), Japan, 1999

[8] Wayne Liu and P. Dasiewicz, Formal Test Requirements for Component Interactions, IEEE Canadian Conference on Electrical and Computer Engineering, 1999

[9] Yingxu Wang, Graham King, Hakan Wickburg, A method for Built-in Tests in Component-based Software Maintenance, Proceedings of the Third European Conference on Software Maintenance and Reengineering, 1999

[10] John D. McGregor, Component Testing,JOOP Column,1997

[11] John D. McGregor and Anuradha Kare, Parallel Architecture for Component Testing, In Proceedings of the Ninth International Quality Week, 1996.

[12] Sriram Sankar, Roger Hayes, Specifying and Testing Software Components using ADL, Technical Report, Sun Microsystems, April 1994

[13] Ye Wu, Dai Pan, Mei-Hwa Chen, Testing Component Based Software, submitted to International Conference on Software Engineering, Toronto, 2001

[14] Chang Liu and Debra Richardson, Software Components with Retrospectors, International Workshop on the Role of Software Architecture in Testing and Analysis, July 1998

[15] Jerry Gao, Testing Component Based Software, Technical Report, San Jose State University

[16] Jerry Gao, Component Testability and Component Testing Challenges, Proceedings of STAR'99, SQE, 1999

[16] William T. Councill, Third Party Testing and Quality of Software Components, IEEE Software, Volume: 16 Issue: 4 , July-Aug. 1999

[17] Craig H. Wittenberg, Progress in Testing Component Based Software, Proceedings of the International Symposium on Software Testing and Analysis, 2000

[18] Kirk D. Thompson, COM Based Test Foundation Framework, IEEE Systems Readiness Technology Conference, 1999

[19] Oscar Nierstrasz, Simon Gibbs and Dennis Tsichritzis, Component Oriented Software Development, Communications of the ACM, vol. 35, no. 9, September 1992

[20] David S. Rosenblum, Adequate Testing of Component Based Software, Department of Information and Computer Science, University of California, Irvine, Technical Report UCI-ICS-97-34, Aug. 1997

[21] Mary Jean Harrold, Donglin Liang, Saurabh Sinha, An Approach To Analyzing and Testing Component Based Software, Proceedings of the First International ICSE Workshop on Testing Distributed Component-Based Systems, Los Angeles, CA, May 1999

[22] Sudipto Ghosh, Aditya P. Mathur, Issues in Testing Distributed Component Based Systems, First ICSE Workshop "Testing Distributed Component-Based Systems, May 17, 1999

[23] Saileshwar Krishnamurthy, Aditya P. Mathur, On Estimation of Reliability of a software System using Reliabilities of its components,

[24] David Chappell, The Next Wave, White Paper, Rational Technologies, Inc.

[25] Christoph C. Michael, Reusing Tests of Reusable Software Components, Reliable Software Technologies Corporation

[26] Michael Young, Richard N. Taylor, Rethinking the Taxonomy of Fault Detection Techniques, Proceedings of the 11th International Conference on Software Engineering, May 1989.

[27] David Garlan, Robert Allen, John Ockerbloom, Architectural Mismatch: Why Reuse Is So Hard, IEEE Software, 12(6): 17-26, November 1995

[28] Philippe Kruchten, "Rational Unified Process", Addison Wesley, 1998

[29] Eric Madelaine and Robert de Simone, FC2:Reference Manual Version 1.1, INRIA, Sophia-Antipolis (FRANCE), July 1993

