One requirement for a software engineering technology testbed would be an experience base of prior experiences, both positive and negative, with each technology, giving software engineers an indication of how well the technology worked on a representative software system. The experience base would contain information such as, but not limited to, the effectiveness of the technology in finding defects, the types of defects it found, the training time needed to learn the technology, and a description of the technology. By analyzing this information, a software engineer could gauge how well the technology would work on their project and evaluate alternative software engineering technologies. Practitioners may not know whether two or more technologies are complementary or whether they find the same set of defects; with an experience base, they can make that determination. In addition, researchers who use the testbed to evaluate their technology would be able to add their experiences and results to the experience base for practitioners to view.
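The kind of experience-base entry described above could be sketched as a simple record type. This is a hypothetical illustration only; the field names and the complementarity check are assumptions, not part of the testbed's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceEntry:
    """One experience-base record for a technology (illustrative fields only)."""
    technology: str               # name of the evaluated technology
    description: str              # short description of the technology
    defects_found: int            # defects detected on the representative system
    defect_types: list[str] = field(default_factory=list)  # e.g. ["memory", "concurrency"]
    training_hours: float = 0.0   # time needed to learn the technology
    positive: bool = True         # was the overall experience positive?

def complementary(a: ExperienceEntry, b: ExperienceEntry) -> bool:
    """Two technologies are complementary if either finds defect types the other misses."""
    return bool(set(a.defect_types) ^ set(b.defect_types))
```

A practitioner comparing two entries could then see at a glance whether the technologies overlap entirely (finding the same defect types) or cover different classes of defects.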
Another critical factor, according to Redwine and Riddle, is conceptual integrity. By using the software engineering technology testbed, researchers can demonstrate that a technology is well developed by applying it to a representative software system and finding the seeded defects. If the technology is unable to find the seeded defects, or significant additional defects, in the representative system, then the researcher will need to mature the technology further before it is used by the technical community.
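The seeded-defect criterion above amounts to a recall measurement over the set of seeded defects. A minimal sketch, assuming defects are identified by string IDs and that the maturity threshold is our own illustrative choice rather than one prescribed by the testbed:

```python
def seeded_defect_recall(seeded: set[str], reported: set[str]) -> float:
    """Fraction of seeded defects that the technology actually reported."""
    if not seeded:
        return 1.0
    return len(seeded & reported) / len(seeded)

def needs_further_maturation(seeded: set[str], reported: set[str],
                             threshold: float = 0.8) -> bool:
    """Flag a technology that misses too many seeded defects (threshold is assumed)."""
    return seeded_defect_recall(seeded, reported) < threshold
```

Defects the technology reports beyond the seeded set would also count in its favor, per the text, but are deliberately left out of this simple recall check.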
A major problem in empirical software engineering is to determine or ensure comparability across multiple sources of empirical data. This paper summarizes experiences in developing and applying a software engineering technology testbed. The testbed was
designed to ensure comparability of empirical data used to evaluate alternative software engineering technologies, and to accelerate the technology maturation and transition into
project use. The requirements for such software engineering technology testbeds include not only the specifications and code, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments. The requirements and architecture to build a particular software engineering technology testbed to help NASA evaluate its
investments in software dependability research and technology have been developed and applied to evaluate a wide range of technologies. The technologies evaluated came from the
fields of architecture, testing, state-model checking, and operational envelopes. This paper will present for the first time the requirements and architecture of the software engineering
technology testbed. The results of the technology evaluations will be analyzed from the point of view of how researchers benefited from using the SETT; in their original findings, the researchers reported only how their own technology performed. The testbed evaluation showed (1) that certain technologies were complementary and cost-effective to apply; (2) that the testbed was cost-effective for researchers to use within a well-specified domain of applicability; (3) that collaboration between researchers and practitioners in testbed use resulted in comparable empirical data and in actions to accelerate technology maturation and transition into project use, as shown in the AcmeStudio evaluation; and (4) that the software engineering technology testbed's requirements and architecture were suitable for evaluating technologies and accelerating their maturation and transition into project use.