At RSNA 2013, Dr. Frandics Chan of Stanford University presented the first results showing that radiologists could image "3D objects" directly, in open 3D space, rather than constructing 3D information mentally by studying an array of 2D images. The results, based on Interactive Mixed Reality (IMR), showed an improvement in workflow, with higher sensitivity and comparable specificity and diagnostic accuracy relative to 2D or 2.5D (2D views of 3D objects). The combination of depth perception and motor inputs for interacting with patient-specific data significantly improved the "intuition" of the user, yielding gains in both clinical efficacy and workflow, and hence in patient outcomes.
Verification of the Stanford observations was provided by Hricak et al. in 2016: "Meaningful data and insights embedded within medical images often are undetectable via routine visual analysis, resulting in valuable information being overlooked …"