Dr. Nick Theodore, Co-Director of the Carnegie Center for Surgical Innovation

August 12, 2017

The I-STAR Lab welcomes Dr. Nick Theodore as Co-Director of the Carnegie Center for Surgical Innovation. Dr. Theodore is the Donlin M. Long Professor of Neurosurgery at Johns Hopkins University and directs the Johns Hopkins Neurosurgical Spine Center. An internationally recognized expert and innovator in minimally invasive spine surgery and surgical robotics, Dr. Theodore has authored ~200 scientific articles and holds numerous patents on breakthrough devices and procedures for the treatment of brain and spinal cord injury. He is also an active mentor of surgical trainees and biomedical engineers, including projects at the Carnegie Center, I-STAR Lab, and CBID Program. Research underway includes the development of methods for high-quality, low-dose 3D imaging in the OR, novel surgical guidance methods, advanced surgical robotics, intraoperative assessment of spinal alignment, and “big data” approaches to improving patient outcomes in spine surgery. Dr. Theodore joins Dr. Siewerdsen in leading the Carnegie Center mission for multi-disciplinary, collaborative research, education, and translation of breakthrough innovations in surgery. Welcome, Nick!

Image Registration Performance and Image Quality: Ketcha’s Model Provides a Link

July 22, 2017

Intuitively, the task of registering two images (for example, aligning a preoperative CT image with an intraoperative radiograph or cone-beam CT) must depend on the quality of the images. And it stands to reason that the accuracy of registration will improve with the quality of those images. But what is the connection – exactly – and what are the image quality factors that govern registration accuracy? Spatial resolution? Noise? And are the limits in visual image quality (for example, a low-dose image for which a feature is no longer visible) the same as the lower limits in registration performance?

These questions are at the heart of a new paper by Michael Ketcha and co-authors at the I-STAR Lab in Biomedical Engineering at Johns Hopkins University, yielding a theoretical model that links image registration performance with image quality. Models for each have been established in previous work, but the connection between the two has not been well formulated. For example, Michael Fitzpatrick and colleagues established a statistical framework for understanding Target Registration Error (TRE), governed by the Fiducial Localization Error (FLE), the Fiducial Registration Error (FRE), and the spatial distribution of fiducials with respect to a target point. Meanwhile, Ian Cunningham and colleagues produced a cascaded systems model for image quality describing the propagation of signal and noise – providing the basis for image quality models describing the tradeoffs among spatial resolution, noise, and dose in flat-panel x-ray detectors, tomosynthesis, and cone-beam CT. Such theoretical models have been invaluable to the development of new imaging and image-guidance systems over the last two decades, but the connection between the two – how image quality affects registration accuracy – has remained an open question.
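
For orientation, the Fitzpatrick framework is often summarized by a closed-form estimate of TRE, stated here in a commonly cited form (see the original papers for the precise derivation and assumptions). For N fiducials with localization error FLE, the expected TRE at a target point r is approximately

    \langle \mathrm{TRE}^2(\mathbf{r}) \rangle \;\approx\; \frac{\langle \mathrm{FLE}^2 \rangle}{N} \left( 1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2} \right)

where d_k is the distance of the target from principal axis k of the fiducial configuration and f_k is the RMS distance of the fiducials from that axis. Registration error thus grows with distance from the fiducial centroid and shrinks with more (and better-spread) fiducials.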

Michael Ketcha’s paper published in IEEE-TMI in July 2017 derives the Cramér-Rao lower bound (CRLB) for registration accuracy in a manner that reveals the underlying dependencies on spatial resolution and image noise. By analyzing the CRLB as a function of dose, the work sheds light on the low-dose limits of image registration – knowledge that could help reduce dose in image-guided interventions, where the task is often one of registration rather than visual detection.

The analysis considers the CRLB as the inverse of the Fisher Information Matrix (FIM) and derives its dependence on two main factors. First is the image noise, which depends on dose and may differ between the two images. Second is the power (sum of squares) of the image gradients, which is governed by the contrast and frequency content of the subject. The FIM is thereby related to factors of image noise, resolution, and dose in a manner that permits analysis of the CRLB for a variety of scenarios – including registration of low-contrast soft tissues, high-contrast bone structures, and the effect of image smoothing to improve registration performance.
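
To make that structure concrete, the following is a minimal numerical sketch (not the paper’s implementation), assuming a translation-only registration between two images corrupted by additive white Gaussian noise. In that simplified setting, the FIM is the sum of gradient outer products scaled by the combined noise variance, and the CRLB is its inverse:

    import numpy as np

    def crlb_translation(image, sigma1, sigma2):
        """CRLB for 2D translation-only registration (illustrative sketch).

        Assumes additive white Gaussian noise with standard deviations
        sigma1 and sigma2 in the two images being registered.
        """
        gy, gx = np.gradient(image.astype(float))  # image gradients
        # Fisher information: gradient outer products / combined noise variance
        fim = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                        [np.sum(gy * gx), np.sum(gy * gy)]]) / (sigma1**2 + sigma2**2)
        return np.linalg.inv(fim)  # lower bound on the covariance of the estimate

    # Doubling the noise (e.g., ~4x lower dose) doubles the RMS error bound.
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    for sigma in (1.0, 2.0):
        cov = crlb_translation(img, sigma, sigma)
        print(sigma, np.sqrt(np.trace(cov)))

Smoothing enters this picture because it suppresses noise and gradient power at different rates – the tradeoff underlying the smoothing analysis noted above.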

The work is analogous to widespread efforts to identify low-dose limits of visual detectability via models of imaging task. In image-guided interventions, however, the task of registration is often as important as (or more important than) the task of visualization, allowing preoperative images and planning information to be accurately aligned with the patient at the time of treatment. Previous experiments by Uneri et al. showed that registration algorithms can perform well at dose levels below those normally considered to yield a visually acceptable image – effects that are borne out by Ketcha’s analysis.

Motion Correction for High-Resolution Cone-Beam CT: Paper by Sisniega et al.

June 3, 2017

A paper published by Dr. Alejandro Sisniega (Research Associate, Department of Biomedical Engineering) and colleagues at the I-STAR Lab describes a new method for correcting patient motion in cone-beam CT (CBCT). Because CBCT often involves scan times >10 sec (for example, 20-30 sec is common in extremity imaging, and up to 60 sec in image-guided procedures), patient motion during the scan can result in significant degradation of image quality.

Even a few mm of motion can confound the visibility of subtle image features. A variety of methods have been reported in recent years to correct motion artifacts. Dr. Sisniega’s approach involves a purely image-based solution that requires neither external motion tracking devices nor prior images of the patient. Instead, the patient motion trajectory is derived directly from the image data using a 3D “auto-focus” method that optimizes the sharpness of the resulting 3D image. Dr. Sisniega evaluated a number of candidate sharpness metrics – including total variation and entropy – and showed gradient variance to perform best overall.
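
As a rough illustration of the auto-focus idea (a sketch only, not the published implementation), motion parameters can be estimated by maximizing a gradient-variance sharpness metric over candidate trajectories. Here, recon_with_motion is a hypothetical placeholder for a motion-compensated CBCT reconstruction, and Powell’s method stands in for whatever derivative-free optimizer is used in practice:

    import numpy as np
    from scipy.optimize import minimize

    def gradient_variance(volume):
        """Sharpness metric: variance of the gradient magnitude (sharper -> larger)."""
        grads = np.gradient(volume.astype(float))
        magnitude = np.sqrt(sum(g**2 for g in grads))
        return np.var(magnitude)

    def autofocus(projections, recon_with_motion, t0):
        """Estimate motion parameters t by maximizing reconstructed sharpness.

        recon_with_motion(projections, t) is a hypothetical motion-compensated
        reconstruction given candidate trajectory parameters t (e.g., rigid
        pose perturbations at a set of time points during the scan).
        """
        cost = lambda t: -gradient_variance(recon_with_motion(projections, t))
        return minimize(cost, t0, method="Powell").x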

The method uses one or more volumes of interest (VOIs) within which motion can be assumed to follow a rigid trajectory – for example, a bone structure – and supports multiple VOIs to independently solve for patient motion across the entire image, even in the presence of complex deformation. For example, in CBCT of the extremities, the method was shown to perform well in images of the knee using 2 VOIs – one for the distal femur and one for the proximal tibia (and optionally, a third for the patella). The method was rigorously evaluated in phantom studies on a CBCT benchtop, showing the ability to recover spatial resolution for both small motions (~0.5 – 1 mm perturbations) and large motions (>10 mm during the scan). The algorithm was then tested in clinical studies on an extremity CBCT system in the Department of Radiology at Johns Hopkins Hospital. Cases exhibiting significant motion artifacts were identified in retrospective review, and the algorithm was shown to reliably eliminate artifacts and recover spatial resolution sufficient for visualizing the joint space, subchondral trabecular bone, and surrounding soft-tissue features, including tendons, ligaments, and cartilage.
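
Extending the sketch above to multiple locally rigid regions might look like the following (again with hypothetical names; the published method’s handling of VOI composition may differ). Each VOI gets its own estimated trajectory, and the corrected regions are then assembled into the final image:

    def correct_multi_voi(projections, vois, recon_with_motion, t0):
        """Per-VOI motion correction: each VOI is assumed locally rigid.

        vois: index masks or slices defining locally rigid regions, e.g.,
        distal femur, proximal tibia, and patella in a scan of the knee.
        Returns one motion-corrected subvolume per VOI.
        """
        corrected = []
        for voi in vois:
            # Optimize sharpness within this VOI only
            t_voi = autofocus(projections,
                              lambda p, t: recon_with_motion(p, t)[voi], t0)
            corrected.append(recon_with_motion(projections, t_voi)[voi])
        return corrected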

The motion correction algorithm is now proving its merit in applications within and beyond musculoskeletal extremity imaging, including CBCT of head trauma and C-arm CBCT, which can also involve long scan times and challenging motion artifacts. In addition to restoring spatial resolution in CBCT of bone morphology, ongoing work shows the algorithm to be important in recovering low-contrast visibility of soft tissues as well. Dr. Sisniega is extending the method to handle complex deformation of soft-tissue structures in the abdomen – tackling one of the major challenges to CBCT image quality in image-guided interventions.

Full details of the algorithm and experimental studies can be found in the paper published in the journal Physics in Medicine and Biology (2017 May 7;62(9):3712-3734. doi: 10.1088/1361-6560/aa6869).

Imaging for Safer Surgery – Michael Ketcha’s Algorithm for Targeting the Deformed Spine

May 20, 2017

A recent paper by Michael Ketcha and coauthors at the I-STAR Lab reports a method for accurately targeting vertebrae in surgery under conditions of strong spinal deformation. Previous research showed a method by which target vertebrae defined in preoperative CT or MRI can be accurately localized in intraoperative radiographs via the “LevelCheck” algorithm for 3D-2D image registration. While LevelCheck was shown to provide accurate localization over a broad range of clinical conditions, the underlying registration model is rigid, meaning that it does not account for strong changes in spinal curvature occurring between the preoperative image and the intraoperative scene. Such deformation can be considerable, for example, in scenarios where preoperative images are acquired with the patient in a prone position, but intraoperative images are acquired with the patient lying supine – and sometimes kyphosed or lordosed to improve surgical access. Ketcha’s method extends the utility of LevelCheck to such scenarios via a “multi-scale” registration process called msLevelCheck. The multi-scale method begins with a (rigid) LevelCheck initialization and proceeds through a region-of-interest pyramid to successively smaller segments and, in the final stage, to individual vertebrae. The net effect is a deformable transformation of vertebral labels from the preoperative 3D image to the intraoperative 2D image. Ketcha’s paper shows the algorithm to be accurate and robust in laboratory phantom studies across a broad range of spinal curvature and includes the first clinical testing of the msLevelCheck approach in images of actual spine surgery patients.
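
The coarse-to-fine flow can be sketched schematically as follows (function names are illustrative placeholders, not the authors’ implementation; register_rigid stands for a 3D-2D rigid registration of a labeled spine segment to the radiograph, and split subdivides a segment into smaller ones):

    def ms_levelcheck(labeled_spine, radiograph, register_rigid, split):
        """Multi-scale 3D-2D label registration (schematic sketch).

        Begins with a global rigid (LevelCheck-style) initialization, then
        refines over successively smaller spine segments until each segment
        contains a single vertebra, yielding per-vertebra transforms that
        together act as a deformable mapping of the vertebral labels.
        """
        segments = [register_rigid(labeled_spine, radiograph)]  # rigid init
        while any(len(segment) > 1 for segment in segments):    # coarse-to-fine
            segments = [register_rigid(sub, radiograph)
                        for segment in segments
                        for sub in split(segment)]
        return segments  # one registered label (pose) per vertebra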

Previous research addressing such deformation relied on segmentation of structures in the 3D preoperative image – a potentially time-consuming step that adds workflow and a potential source of error – to effect a “piece-wise rigid” registration of individual segmented structures. The msLevelCheck approach requires no such segmentation, operating instead directly on image intensities and gradients in an increasingly “local” registration through the multi-scale process to effect a global deformation of the vertebral labels. The algorithm was shown to accurately label vertebrae within a few mm of expert-defined reference labels, offering a potentially useful tool for safer spine surgery.

Read the full paper:
M D Ketcha, T De Silva, A Uneri, M W Jacobson, J Goerres, G Kleinszig, S Vogt, J-P Wolinsky, and J H Siewerdsen, “Multi-stage 3D–2D registration for correction of anatomical deformation in image-guided spine surgery,” Phys Med Biol 62: 4604–4622 (2017).

Alisa Brown – a STAR at the I-STAR Lab

May 13, 2017

Alisa Brown was awarded a STAR (Summer Training and Research) Program scholarship for her research on the development of new rigid-body marker designs for surgical tracking and navigation. Alisa began research at the I-STAR Lab in 2016, using 3D printing to produce new marker tools for image-guided neurosurgery and orthopaedic surgery. Her research includes the development of a large, open-source library of marker designs to facilitate research in image-guided surgery, particularly for systems involving multiple tracked tools – for example, a surgical pointer, endoscope, ultrasound probe, C-arm, and/or patient reference marker. Alisa is a rising senior in the Department of Biomedical Engineering at Johns Hopkins University.