
180 Park Ave - Building 103
Florham Park, NJ
Thomas Ostrand is a Principal Member of Technical Staff in the Information and Software Systems Research department of AT&T Labs. His current research interests include software test generation and evaluation, defect analysis and prediction, and tools for software development and testing.
Prior to joining AT&T, Tom was a member of the U.S. lab of Siemens Corporate Research in Princeton, NJ, and the software engineering lab of Sperry Univac in Blue Bell, PA.
He was also a member of the Computer Science Department of Rutgers University in New Brunswick, NJ.
Tom is a senior member of the ACM, and is a past Member-at-Large of the ACM/SIGSOFT Executive Committee. He has served as Program Chair and Steering Committee member of the International Symposium on Software Testing and Analysis, and of the PROMISE Conference on Predictive Models in Software Engineering. He is currently an associate editor of the Empirical Software Engineering journal, and is a past associate editor of IEEE Transactions on Software Engineering.
Tom will serve as a co-chair of the Industrial Program of the International Conference on Software Testing, Verification and Validation (ICST), to be held in Montreal, Canada in April 2012.

Does Measuring Code Change Improve Fault Prediction?
Robert Bell, Thomas Ostrand, Elaine Weyuker
7th International Conference on Predictive Models in Software Engineering (Promise2011),
2011.
[PDF]
[BIB]
{Several studies have examined code churn as a variable for predicting faults in
large software systems. High churn is usually associated with more faults, since
faults tend to appear in code that has been changed frequently.
We investigate the extent to which faults can be predicted by the degree of churn alone,
whether other code characteristics occur together with churn, and which combinations of churn
and other characteristics provide the best predictions.
We also investigate different types of churn, including both additions to and deletions from code,
as well as overall amount of change to code.
We have mined the version control database of a large software system to collect churn and other
software measures from 18 successive releases of the system.
We examine the frequency of faults plotted against various code characteristics, and
evaluate a diverse set of prediction models based on many different combinations of
independent variables, including both absolute and relative churn.
Churn measures based on counts of lines added, deleted, and modified
are very effective for fault prediction.
Individually, counts of adds and modifications outperformed counts of deletes,
while the sum of all three counts was most effective.
However, these counts did not improve prediction accuracy relative to a
model that included a simple count of the number of times that a file had
been changed in the prior release.
Including a measure of change in the prior release is an essential
component of our fault prediction method.
The various change measures we examined appear to work roughly equivalently.
}
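The abstract's central finding can be made concrete with a small sketch. This is not the authors' code, and the file names and churn counts below are hypothetical; it simply illustrates the combined churn measure the paper found most effective, the sum of lines added, deleted, and modified, used to rank files by predicted fault-proneness.

```python
# Hypothetical per-file churn counts mined from a version control system.
files = {
    "parser.c": {"added": 120, "deleted": 30, "modified": 45},
    "util.c":   {"added": 5,   "deleted": 2,  "modified": 3},
    "driver.c": {"added": 60,  "deleted": 80, "modified": 10},
}

def total_churn(counts):
    """Sum of lines added, deleted, and modified -- the combined
    churn measure the study found most effective."""
    return counts["added"] + counts["deleted"] + counts["modified"]

# Rank files from highest to lowest total churn; high-churn files
# are predicted to be the most fault-prone.
ranked = sorted(files, key=lambda f: total_churn(files[f]), reverse=True)
print(ranked)  # parser.c (195), then driver.c (150), then util.c (10)
```

In practice the paper evaluates such churn counts inside full prediction models rather than as a raw ranking, but the ordering idea is the same: direct testing effort toward the files with the most change.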

Assessing the Impact of Using Fault-Prediction in Industry
Elaine Weyuker, Thomas Ostrand, Robert Bell
Testing: Academic & Industrial Conference (TAIC 2011),
2011.
[PDF]
[BIB]
IEEE Copyright
This version of the work is reprinted here with permission of IEEE for your personal use. Not for redistribution. The definitive version was published in Testing: Academic & Industrial Conference (TAIC 2011) , 2011-03-25
{Does the use of fault prediction models to help focus software testing
resources and other development efforts to improve software reliability
lead to the discovery of different faults in the next release, or simply
to an improved process for finding the same faults that would be found
if the models were not used?
In this short paper, we describe the challenges involved in estimating
effects for this sort of intervention and discuss ways to empirically
answer that question and ways to assess any changes, if present.
We present several experimental design options
and discuss the pros and cons of each.}

Software Testing Research and Software Engineering Education
Thomas Ostrand, Elaine Weyuker
Workshop on the Future of Software Engineering Research,
2010.
[PDF]
[BIB]
ACM Copyright
(c) ACM, 2010. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Workshop on the Future of Software Engineering Research , 2010-11-07
{Software testing research has not kept up with modern software system designs
and applications, and software engineering education falls short of providing
students with the type of knowledge and training that other engineering
specialties require.
Testing researchers should pay more attention to areas that are currently
relevant for practicing software developers, such as embedded systems, mobile
devices, safety-critical systems and other modern paradigms, in order to
provide usable results and techniques for practitioners.
We identify a number of skills that every software engineering student and
faculty member should learn, and also propose that education for future
software engineers should include significant exposure to real systems,
preferably through hands-on training via internships at software-producing firms.
}

Programmer-based Fault Prediction
Thomas Ostrand, Elaine Weyuker, Robert Bell
Proc. PROMISE2010,
6th International Conference on Predictive Models in Software Engineering,
2010.
[PDF]
[BIB]
ACM Copyright
(c) ACM, 2010. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in The 6th International Conference on Predictive Models in Software Engineering , 2010-09-12
{We investigate whether files in a large system that are modified by an
individual developer consistently contain either more or fewer faults
than the average of all files in the system.
The goal of the investigation is to determine whether information
about which particular developer modified a file is able to improve
defect predictions.
We also continue an earlier study to evaluate the use of counts of the
number of developers who modified a file as predictors of the file's
future faultiness.
The results from this preliminary study indicate that adding
information to a model about which particular developer modified a
file is not likely to improve defect predictions.
The study is limited to a single large system, and its results may not
hold more widely.
The bug ratio is only one way of measuring the 'fault-proneness' of an
individual programmer's coding, and we intend to investigate other
ways of evaluating bug introduction by individuals. }
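The "bug ratio" the abstract mentions can be sketched as follows. This is an illustrative reconstruction, not the authors' definition or code, and the developers and file sets are hypothetical: the ratio here is simply the fraction of files a developer modified that later turned out to contain faults.

```python
def bug_ratio(files_touched, faulty_files):
    """Fraction of the files a developer modified that proved faulty."""
    touched_faulty = files_touched & faulty_files
    return len(touched_faulty) / len(files_touched)

# Hypothetical data: files with reported faults, and the files
# each developer modified in the release.
faulty = {"a.c", "c.c"}
dev_files = {
    "alice": {"a.c", "b.c"},
    "bob":   {"b.c", "d.c"},
}

ratios = {dev: bug_ratio(fs, faulty) for dev, fs in dev_files.items()}
print(ratios)  # alice touched one faulty file of two; bob touched none
```

Comparing each developer's ratio against the system-wide fraction of faulty files is one way to ask the paper's question of whether per-developer information carries predictive signal.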

Finding Fault: Developing an Automated System for Predicting Which Files Will Contain Defects
Thomas Ostrand, Elaine Weyuker
Making Software: What Really Works, and Why We Believe It, O'Reilly Media, Inc.,
2010.
[BIB]
{}