Publications

2019

Simmross-Wattenberg F, Rodriguez-Cayetano M, Royuela-Del-Val J, Martin-Gonzalez E, Moya-Saez E, Martin-Fernandez M, Alberola-López C. OpenCLIPER: An OpenCL-Based C++ Framework for Overhead-Reduced Medical Image Processing and Reconstruction on Heterogeneous Devices. IEEE J Biomed Health Inform. 2019;23(4):1702–1709. doi:10.1109/JBHI.2018.2869421
Medical image processing is often limited by the computational cost of the involved algorithms. Whereas dedicated computing devices (GPUs in particular) exist and do provide significant efficiency boosts, they have an extra cost of use in terms of housekeeping tasks (device selection and initialization, data streaming, synchronization with the CPU, and others), which may hinder developers from using them. This paper describes an OpenCL-based framework that handles dedicated computing devices seamlessly and allows the developer to concentrate on image processing tasks. The framework automatically handles device discovery and initialization, data transfers to and from the device and the file system, and kernel loading and compilation. Data structures need to be defined only once, independently of the computing device; consequently, code is identical for every device, including the host CPU. Pinned memory/buffer mapping is used to achieve maximum performance in data transfers. Code fragments included in the paper show how the computing device is almost immediately and effortlessly available to the user's algorithms, so developers can focus on productive work. Code required for device selection and initialization, data loading and streaming, and kernel compilation is minimal and systematic. Algorithms can be thought of as mathematical operators (called processes), with input, output and parameters, and they may be chained one after another easily and efficiently. Also for efficiency, processes can have their initialization work split from their core workload, so process chains and loops do not incur performance penalties. Algorithm code is independent of the targeted device type.
Karayumak SC, Bouix S, Ning L, James A, Crow T, Shenton M, Kubicki M, Rathi Y. Retrospective harmonization of multi-site diffusion MRI data acquired with different acquisition parameters. Neuroimage. 2019;184:180–200. doi:10.1016/j.neuroimage.2018.08.073
A joint and integrated analysis of multi-site diffusion MRI (dMRI) datasets can dramatically increase the statistical power of neuroimaging studies and enable comparative studies pertaining to several brain disorders. However, dMRI datasets acquired on multiple scanners cannot be naively pooled for joint analysis due to scanner-specific nonlinear effects as well as differences in acquisition parameters. Consequently, for joint analysis, the dMRI data have to be harmonized, which involves removing scanner-specific differences from the raw dMRI signal. In this work, we propose a dMRI harmonization method that is capable of removing scanner-specific effects, while accounting for minor differences in acquisition parameters such as b-value, spatial resolution and number of gradient directions. We validate our algorithm on dMRI data acquired from two sites: the Philadelphia Neurodevelopmental Cohort (PNC) with 800 healthy adolescents (ages 8-22 years) and Brigham and Women's Hospital (BWH) with 70 healthy subjects (ages 14-54 years). In particular, we show that gender and age-related maturation differences in different age groups are preserved after harmonization, as measured using effect sizes (small, medium and large), irrespective of the test sample size. Since we use matched control subjects from different scanners to estimate scanner-specific effects, our goal in this work is also to determine the minimum number of well-matched subjects needed from each site to achieve the best harmonization results. Our results indicate that at least 16 to 18 well-matched healthy controls from each site are needed to reliably capture scanner-related differences. The proposed method can thus be used for retrospective harmonization of raw dMRI data across sites despite differences in acquisition parameters, while preserving inter-subject anatomical variability.
Schilling KG, Nath V, Hansen C, Parvathaneni P, Blaber J, Gao Y, Neher P, Aydogan DB, Shi Y, Ocampo-Pineda M, et al. Limits to anatomical accuracy of diffusion tractography using modern approaches. Neuroimage. 2019;185:1–11. doi:10.1016/j.neuroimage.2018.10.029
Diffusion MRI fiber tractography is widely used to probe the structural connectivity of the brain, with a range of applications in both clinical and basic neuroscience. Despite widespread use, tractography has well-known pitfalls that limit the anatomical accuracy of this technique. Numerous modern methods have been developed to address these shortcomings through advances in acquisition, modeling, and computation. To test whether these advances improve tractography accuracy, we organized the 3-D Validation of Tractography with Experimental MRI (3D-VoTEM) challenge at the ISBI 2018 conference. We made available three unique independent tractography validation datasets - a physical phantom and two ex vivo brain specimens - resulting in 176 distinct submissions from 9 research groups. By comparing results over a wide range of fiber complexities and algorithmic strategies, this challenge provides a more comprehensive assessment of tractography's inherent limitations than has been reported previously. The central results were consistent across all sub-challenges in that, despite advances in tractography methods, the anatomical accuracy of tractography has not dramatically improved in recent years. Taken together, our results independently confirm findings from decades of tractography validation studies, demonstrate inherent limitations in reconstructing white matter pathways using diffusion MRI data alone, and highlight the need for alternative or combinatorial strategies to accurately map the fiber pathways of the brain.
Liu C, Özarslan E. Multimodal integration of diffusion MRI for better characterization of tissue biology. NMR Biomed. 2019;32(4):e3939. doi:10.1002/nbm.3939
The contrast in diffusion-weighted MR images is due to variations of diffusion properties within the examined specimen. Certain microstructural information on the underlying tissues can be inferred through quantitative analyses of the diffusion-sensitized MR signals. In the first part of the paper, we review two types of approach for characterizing diffusion MRI signals: Bloch’s equations with diffusion terms, and statistical descriptions. Specifically, we discuss expansions in terms of cumulants and orthogonal basis functions, the confinement tensor formalism and tensor distribution models. Further insights into the tissue properties may be obtained by integrating diffusion MRI with other techniques, which is the subject of the second part of the paper. We review examples involving magnetic susceptibility, structural tensors, internal field gradients, transverse relaxation and functional MRI. Integrating information provided by other imaging modalities (MR based or otherwise) could be a key to improve our understanding of how diffusion MRI relates to physiology and biology.
Friman O, Bell M, Djärv T, Hvarfner A, Jäderling G. National Early Warning Score vs Rapid Response Team criteria-Prevalence, misclassification, and outcome. Acta Anaesthesiol Scand. 2019;63(2):215–221. doi:10.1111/aas.13245
PURPOSE: The purpose of this study was to examine the prevalence of deviating vital parameters in general ward patients using rapid response team (RRT) criteria and the National Early Warning Score (NEWS), and to assess examination duration, correct calculation and classification of risk scores, as well as mortality and adverse events. METHODS: Point prevalence study of vital parameters according to NEWS and RRT criteria of all adult patients admitted to general wards at a Scandinavian university hospital with a mature RRT. PRIMARY OUTCOME: prevalence of at-risk patients fulfilling at least one RRT criterion, a total NEWS of 7 or greater, or a single NEWS parameter of 3 (red NEWS). SECONDARY OUTCOMES: in-hospital and 30-day mortality, or adverse events within 24 hours.
Bhatt SP, Washko GR, Hoffman EA, Newell JD, Bodduluri S, Diaz AA, Galban CJ, Silverman EK, San José Estépar R, Lynch DA. Imaging Advances in Chronic Obstructive Pulmonary Disease. Insights from the Genetic Epidemiology of Chronic Obstructive Pulmonary Disease (COPDGene) Study. Am J Respir Crit Care Med. 2019;199(3):286–301. doi:10.1164/rccm.201807-1351SO
The Genetic Epidemiology of Chronic Obstructive Pulmonary Disease (COPDGene) study, which began in 2007, is an ongoing multicenter observational cohort study of more than 10,000 current and former smokers. The study is aimed at understanding the etiology, progression, and heterogeneity of chronic obstructive pulmonary disease (COPD). In addition to genetic analysis, the participants have been extensively characterized by clinical questionnaires, spirometry, volumetric inspiratory and expiratory computed tomography, and longitudinal follow-up, including follow-up computed tomography at 5 years after enrollment. The purpose of this state-of-the-art review is to summarize the major advances in our understanding of COPD resulting from the imaging findings in the COPDGene study. Imaging features that are associated with adverse clinical outcomes include early interstitial lung abnormalities, visual presence and pattern of emphysema, the ratio of pulmonary artery to ascending aortic diameter, quantitative evaluation of emphysema, airway wall thickness, and expiratory gas trapping. COPD is characterized by the early involvement of the small conducting airways, and the addition of expiratory scans has enabled measurement of small airway disease. Computational advances have enabled indirect measurement of nonemphysematous gas trapping. These metrics have provided insights into the pathogenesis and prognosis of COPD and have aided early identification of disease. Important quantifiable extrapulmonary findings include coronary artery calcification, cardiac morphology, intrathoracic and extrathoracic fat, and osteoporosis. Current active research includes identification of novel quantitative measures for emphysema and airway disease, evaluation of dose reduction techniques, and use of deep learning for phenotyping COPD.
Ning L, Makris N, Camprodon JA, Rathi Y. Limits and reproducibility of resting-state functional MRI definition of DLPFC targets for neuromodulation. Brain Stimul. 2019;12(1):129–138. doi:10.1016/j.brs.2018.10.004
BACKGROUND: Transcranial magnetic stimulation (TMS) is a noninvasive neuromodulation technique with therapeutic applications for the treatment of major depressive disorder (MDD). The standard protocol uses high frequency stimulation over the left dorsolateral prefrontal cortex (DLPFC) identified in a heuristic manner, leading to moderate clinical efficacy. A proposed strategy to increase the anatomical precision in targeting, based on resting-state functional MRI (rsfMRI), identifies the subregion within the DLPFC having the strongest anticorrelated functional connectivity with the subgenual cortex (SGC) for each individual subject. OBJECTIVE: In this work, we comprehensively test the reliability and reproducibility of this targeting method for different scan lengths on 100 subjects from the Human Connectome Project (HCP), where each subject underwent four 15-min rsfMRI scans over 2 different days. METHODS: We quantified the inter-scan and inter-day distance between the rsfMRI-guided DLPFC targets for each subject, controlling for a number of expected sources of noise, using volumetric as well as surface analyses. RESULTS: Our results show that the average inter-day distance (with fMRI scans lasting 30 min on each day) is 25% less variable than the inter-scan distance, which uses 50% less data. Specifically, the inter-scan distance was more than 37 mm, while for the longer scan, the inter-day distance had lower variability at 25 mm. Finally, we tested the same rsfMRI strategy using the nucleus accumbens (NAc) as a control region relevant to MDD but less susceptible to artifacts, using both volume and surface rsfMRI data. The results showed similar variability to the SGC-DLPFC functional connectivity. Moreover, our results suggest that a smoothing kernel with 12 mm full-width half maximum (FWHM) leads to more stable and reliable target estimates. CONCLUSION: Our work provides a quantitative assessment of the topographic precision of this targeting method, describing an anatomical variability that may surpass the spatial resolution of some forms of focal TMS as it is commonly applied, and provides recommendations for improved accuracy.
Eklund A, Knutsson H, Nichols TE. Cluster failure revisited: Impact of first level design and physiological noise on cluster false positive rates. Hum Brain Mapp. 2019;40(7):2017–2032. doi:10.1002/hbm.24350
Methodological research rarely generates broad interest, yet our work on the validity of cluster inference methods for functional magnetic resonance imaging (fMRI) created intense discussion on both the minutiae of our approach and its implications for the discipline. In the present work, we take on various critiques of our work and further explore its limitations. We address issues about the particular event-related designs we used, considering multiple event types and randomization of events between subjects. We consider the lack of validity found with one-sample permutation (sign flipping) tests, investigating a number of approaches to improve the false positive control of this widely used procedure. We found that the combination of a two-sided test and cleaning the data using ICA FIX resulted in nominal false positive rates for all data sets, meaning that data cleaning is not only important for resting state fMRI, but also for task fMRI. Finally, we discuss the implications of our work on the fMRI literature as a whole, estimating that at least 10% of fMRI studies have used the most problematic cluster inference method (p = .01 cluster defining threshold), and how individual studies can be interpreted in light of our findings. These additional results underscore our original conclusions on the importance of data sharing and thorough evaluation of statistical methods on realistic null data.
Savadjiev P, Chong J, Dohan A, Vakalopoulou M, Reinhold C, Paragios N, Gallix B. Demystification of AI-driven medical image interpretation: past, present and future. Eur Radiol. 2019;29(3):1616–1624. doi:10.1007/s00330-018-5674-x
The recent explosion of 'big data' has ushered in a new era of artificial intelligence (AI) algorithms in every sphere of technological activity, including medicine, and in particular radiology. However, the recent success of AI in certain flagship applications has, to some extent, masked decades-long advances in computational technology development for medical image analysis. In this article, we provide an overview of the history of AI methods for radiological image analysis in order to provide a context for the latest developments. We review the functioning, strengths and limitations of more classical methods as well as of the more recent deep learning techniques. We discuss the unique characteristics of medical data and medical science that set medicine apart from other technological domains in order to highlight not only the potential of AI in radiology but also the very real and often overlooked constraints that may limit the applicability of certain AI methods. Finally, we provide a comprehensive perspective on the potential impact of AI on radiology and on how to evaluate it not only from a technical point of view but also from a clinical one, so that patients can ultimately benefit from it. KEY POINTS: • Artificial intelligence (AI) research in medical imaging has a long history. • The functioning, strengths and limitations of more classical AI methods are reviewed, together with those of more recent deep learning methods. • A perspective is provided on the potential impact of AI on radiology and on its evaluation from both technical and clinical points of view.
Zhang F, Wu Y, Norton I, Rathi Y, Golby AJ, O’Donnell LJ. Test-retest reproducibility of white matter parcellation using diffusion MRI tractography fiber clustering. Hum Brain Mapp. 2019;40(10):3041–3057. doi:10.1002/hbm.24579
There are two popular approaches for automated white matter parcellation using diffusion MRI tractography: fiber clustering strategies that group white matter fibers according to their geometric trajectories, and cortical-parcellation-based strategies that focus on the structural connectivity among different brain regions of interest. While multiple studies have assessed test-retest reproducibility of automated white matter parcellations using cortical-parcellation-based strategies, there are no existing studies of test-retest reproducibility of fiber clustering parcellation. In this work, we perform what we believe is the first study of fiber clustering white matter parcellation test-retest reproducibility. The assessment is performed on three test-retest diffusion MRI datasets including a total of 255 subjects across genders, a broad age range (5-82 years), health conditions (autism, Parkinson's disease and healthy subjects), and imaging acquisition protocols (three different sites). A comprehensive evaluation is conducted for a fiber clustering method that leverages an anatomically curated fiber clustering white matter atlas, with comparison to a popular cortical-parcellation-based method. The two methods are compared for the two main white matter parcellation applications: dividing the entire white matter into parcels (i.e., whole brain white matter parcellation) and identifying particular anatomical fiber tracts (i.e., anatomical fiber tract parcellation). Test-retest reproducibility is measured using both geometric and diffusion features, including volumetric overlap (wDice) and relative difference of fractional anisotropy. Our experimental results in general indicate that the fiber clustering method produced more reproducible white matter parcellations than the cortical-parcellation-based method.