August 22, 2013 - In an opinion article in Frontiers in Brain Imaging Methods, Arno Klein, PhD, and colleagues proposed a set of guidelines designed to reduce unintentional errors in the computational analysis of images such as those generated by PET and MRI scans.
The use of computational software packages is standard practice in the analysis of complex data obtained from imaging studies. As the authors point out, the research community has benefited from the proliferation of computational software packages and the incorporation of innovative algorithms into them. But the variety of software tools and constant innovation come at a price. Absent sufficient safeguards, biases and errors related to the use of software can affect study outcomes without the researchers' knowledge.
Because of the difficulties inherent in comparing software packages, the scientific community has come to rely on published reports in which new algorithms are compared with existing programs. The authors suggest that, to avoid “the possibility of instrumentation bias,” these evaluation studies must be carefully designed and published so as to maximize clarity and reproducibility. They offer a set of practical guidelines (“Do not implement your own version of an algorithm if one is available from the original authors”; “Perform computations on publicly available data”; “Specify all processing steps from the raw to the final processed images”; etc.) intended to enable others to reproduce the results of evaluation studies and make informed judgments about them.
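One of those guidelines, specifying all processing steps from the raw to the final processed images, can be read as a call for machine-readable provenance. The short Python sketch below is one illustrative way such a record might be kept; the function names, log format, and the commented skull-stripping example are assumptions made for illustration, not code or tooling described in the article.

```python
# Minimal sketch of per-step provenance logging for an image-processing
# pipeline. Every field here (step name, tool, version, parameters,
# input/output checksums) is an illustrative choice, not a prescribed schema.
import hashlib
import json
import platform
from datetime import datetime, timezone
from pathlib import Path


def sha256sum(path: str) -> str:
    """Checksum a file so the exact input/output data can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def log_step(log_path: str, step: str, tool: str, version: str,
             parameters: dict, inputs: list[str], outputs: list[str]) -> None:
    """Append one processing step to a JSON-lines provenance log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "platform": platform.platform(),
        "step": step,
        "tool": tool,
        "version": version,
        "parameters": parameters,
        "inputs": {p: sha256sum(p) for p in inputs if Path(p).exists()},
        "outputs": {p: sha256sum(p) for p in outputs if Path(p).exists()},
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


# Hypothetical usage: record a skull-stripping step after running it.
# log_step("provenance.jsonl", "skull_strip", "bet", "6.0.7",
#          {"fractional_intensity": 0.5},
#          ["sub-01_T1w.nii.gz"], ["sub-01_T1w_brain.nii.gz"])
```

A log of this kind, kept alongside publicly available data, would give readers of an evaluation study enough detail to rerun each step and check that they obtain the same intermediate and final images.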
The article is titled “Instrumentation bias in the use and evaluation of scientific software: Recommendations for reproducible practices in the computational sciences.”