Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach
J Cereb Blood Flow Metab. 2016 Dec 1. pii: 0271678X16681311. [Epub ahead of print]
Authors: Friedländer F, Bohmann F, Brunkhorst M, Chae JH, Devraj K, Köhler Y, Kraft P, Kuhn H, Lucaciu A, Luger S, Pfeilschifter W, Sadler R, Liesz A, Scholtyschik K, Stolz L, Vutukuri R, Brunkhorst R.
Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One presumed reason is the low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of ischemic damage as commonly measured by free-hand planimetry of TTC-stained brain sections, is subject to a previously unrecognized low inter-rater and test-retest reliability, with strong implications for statistical power and bias. As an alternative, we suggest a simple, open-source, software-assisted method that takes advantage of automatic-thresholding techniques. We demonstrate the validity of this automated method for tMCAO infarct volumetry and the improvement in reliability it provides. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry may help to improve reproducibility and thereby the robustness of bench-to-bedside translation.