GMU Software Engineering Seminar Series

 

***********************************************

Date: Wed, 03/25/2009

Time: 12:00 – 1:00 pm

Location: 430A ST2

***********************************************

Title: An Experimental Comparison of Four Unit Test Criteria: Mutation, Edge-Pair, All-uses and Prime Path Coverage

Speaker: Nan Li

Abstract
In this talk, I present the results from a comparison of four unit-level software testing criteria. Mutation testing, prime path coverage, edge-pair coverage, and all-uses testing were compared on two bases: the number of seeded faults found and the number of tests needed to satisfy each criterion. The comparison used Java classes and hand-seeded faults. Tests were designed and generated mostly by hand, with help from muJava and from tools that compute test requirements. I also present a secondary cost-benefit measure, computed as the number of tests needed to detect each fault. Finally, I discuss some specific faults that were not found and analyze why they were missed.
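As a quick illustration of the mutation-testing side of the comparison, the sketch below shows an original method, a mutant of the kind muJava's relational operator replacement produces, and an input that kills it. The method and inputs are hypothetical, not taken from the study.

    // Illustrative only: a hand-written mutant of the kind muJava generates
    // automatically. A test input "kills" a mutant when the original and
    // mutated methods produce different results.
    public class MutationDemo {

        // Original method under test (hypothetical).
        static boolean isPositive(int x) {
            return x > 0;
        }

        // Mutant: '>' replaced with '>=' (relational operator replacement).
        static boolean isPositiveMutant(int x) {
            return x >= 0;
        }

        public static void main(String[] args) {
            // x = 1 does not kill the mutant: both methods return true.
            System.out.println(isPositive(1) == isPositiveMutant(1)); // true

            // x = 0 kills the mutant: original returns false, mutant true.
            System.out.println(isPositive(0) == isPositiveMutant(0)); // false
        }
    }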

 

Bio
Nan Li is a PhD student in the Computer Science Department of the Volgenau School of Information Technology and Engineering. His current research focuses mainly on software testing; he is also interested in other areas of software engineering.

 

************************************************

 

Title: Comparison of Unit-Level Automated Test Generation Tools

Speaker: Shuang Wang

Abstract
Data from projects worldwide show that many software projects fail and that most are completed late or over budget. Unit testing is a simple but effective technique for improving software quality, flexibility, and time to market. However, testing each unit by hand is very expensive, possibly prohibitively so, so automation is essential to support unit testing. As unit testing attracts more attention, developers are using automated unit testing tools more often, yet they have very little information about which tools are effective. This experiment compares three well-known, publicly accessible unit test generation tools: JCrasher, TestGen4j, and JUB. We apply these tools to a variety of Java programs and evaluate them by their mutation scores. As a comparison, we manually created two additional test sets. One contained random values, with the same number of tests as the tools generated; the other contained values chosen to satisfy edge coverage.
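To make the two hand-built baselines concrete, here is a minimal sketch, assuming a trivial method under test and JUnit 4; neither the method nor the test values come from the experiment itself.

    // Illustrative only: contrasts a random-value test with tests chosen
    // to satisfy edge coverage. The method under test is hypothetical.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class BaselineDemo {

        // Tiny method under test (hypothetical).
        static int abs(int x) {
            return x < 0 ? -x : x;
        }

        @Test
        public void randomValueTest() {
            // Random-value baseline: an arbitrary input, no structural goal.
            assertEquals(7, abs(7));
        }

        @Test
        public void edgeCoverageTests() {
            // Edge-coverage baseline: exercise both branch edges of abs.
            assertEquals(3, abs(-3)); // true branch: x < 0
            assertEquals(5, abs(5));  // false branch: x >= 0
        }
    }

In the experiment, each such test set is then run against the generated mutants, and the fraction of mutants killed (the mutation score) serves as the effectiveness measure.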

 

Bio
Shuang Wang is a PhD student and teaching assistant in the Computer Science Department of the Volgenau School of Information Technology and Engineering, George Mason University. Her current interests include software testing, web application testing, and mutation testing. Her advisor is Dr. Jeff Offutt.