Motion Correction for High-Resolution Cone-Beam CT: Paper by Sisniega et al.

June 3, 2017

A paper published by Dr. Alejandro Sisniega (Research Associate, Department of Biomedical Engineering) and colleagues at the I-STAR Lab describes a new method for correcting patient motion in cone-beam CT (CBCT). Because CBCT often involves scan times >10 sec (for example, 20–30 sec is common in extremity imaging, and up to 60 sec in image-guided procedures), patient motion during the scan can result in significant degradation of image quality.

Even a few mm of motion can confound the visibility of subtle image features. A variety of methods have been reported in recent years to correct motion artifacts. Dr. Sisniega’s approach is purely image-based, requiring neither external motion tracking devices nor prior images of the patient. Instead, the patient motion trajectory is derived directly from the image data using a 3D “auto-focus” method that optimizes the sharpness of the resulting 3D image. Dr. Sisniega evaluated a number of candidate sharpness metrics – including total variation and entropy – and showed gradient variance to perform best overall.
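As a rough illustration (not the paper’s implementation), the NumPy sketch below computes a gradient-variance sharpness metric of the kind described – motion blur smears edges and lowers gradient variance, so a better-compensated reconstruction scores higher:

```python
import numpy as np

def gradient_variance(volume):
    """Autofocus sharpness metric: variance of the 3D gradient magnitude.

    Motion artifacts blur edges and reduce gradient variance, so a motion
    trajectory estimate that maximizes this metric yields a sharper
    (better-compensated) reconstruction.
    """
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    return np.var(np.sqrt(gx**2 + gy**2 + gz**2))
```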

The method uses one or more volumes of interest (VOIs) within which motion can be assumed to follow a rigid trajectory – for example, a bone structure – and can support multiple VOIs to independently solve for patient motion across the entire image, even in the presence of complex deformation. In CBCT of the extremities, for example, the method was shown to perform well in images of the knee using two VOIs – one for the distal femur and one for the proximal tibia (and optionally, a third for the patella). The method was rigorously evaluated in phantom studies on a CBCT benchtop, showing the ability to recover spatial resolution for both small motions (~0.5–1 mm perturbations) and large motions (>10 mm during the scan). The algorithm was then tested in clinical studies on an extremity CBCT system in the Department of Radiology at Johns Hopkins Hospital. Cases exhibiting significant motion artifacts were identified in retrospective review, and the algorithm was shown to reliably eliminate artifacts and recover spatial resolution sufficient for visualizing the joint space, subchondral trabecular bone, and surrounding soft-tissue features, including tendons, ligaments, and cartilage.
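A minimal sketch of how such per-VOI correction might be organized, using `gradient_variance` from above; `reconstruct_voi(projections, motion_params, voi)` is a hypothetical stand-in for the CBCT reconstruction pipeline, and a generic derivative-free optimizer is used for illustration, not necessarily the optimizer used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def correct_voi_motion(projections, voi, reconstruct_voi, n_params=6):
    """Estimate a rigid motion trajectory for one VOI by maximizing the
    sharpness (gradient variance) of the local reconstruction."""
    def cost(motion_params):
        # reconstruct_voi is a hypothetical stand-in: reconstruct the VOI
        # under the candidate motion estimate, then score its sharpness.
        vol = reconstruct_voi(projections, motion_params, voi)
        return -gradient_variance(vol)  # minimize negative sharpness
    result = minimize(cost, x0=np.zeros(n_params), method="Nelder-Mead")
    return result.x

# Each VOI (e.g., distal femur, proximal tibia, patella in a knee scan)
# is solved independently, so the overall correction can accommodate
# complex, non-rigid motion across the full field of view:
# trajectories = {name: correct_voi_motion(projections, voi, reconstruct_voi)
#                 for name, voi in vois.items()}
```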

The motion correction algorithm is now proving its merit in applications within and beyond musculoskeletal extremity imaging, including CBCT of head trauma and C-arm CBCT, which can also involve long scan times and challenging motion artifacts. In addition to restoring spatial resolution in CBCT of bone morphology, ongoing work shows the algorithm to be important in recovering low-contrast visibility of soft tissues as well. Dr. Sisniega is extending the method to handle complex deformation of soft-tissue structures in the abdomen – tackling one of the major challenges to CBCT image quality in image-guided interventions.

Full details of the algorithm and experimental studies can be found in the paper published in the journal Physics in Medicine and Biology (2017 May 7; 62(9): 3712–3734. doi: 10.1088/1361-6560/aa6869).

Imaging for Safer Surgery – Michael Ketcha’s Algorithm for Targeting the Deformed Spine

May 20, 2017

A recent paper by Michael Ketcha and coauthors at the I-STAR Lab reports a method for accurately targeting vertebrae in surgery under conditions of strong spinal deformation. Previous research showed that target vertebrae defined in preoperative CT or MRI can be accurately localized in intraoperative radiographs via the “LevelCheck” algorithm for 3D-2D image registration. While LevelCheck provides accurate localization over a broad range of clinical conditions, the underlying registration model is rigid, meaning that it does not account for strong changes in spinal curvature occurring between the preoperative image and the intraoperative scene. Such deformation can be considerable – for example, when preoperative images are acquired with the patient prone, but intraoperative images are acquired with the patient lying supine – and sometimes kyphosed or lordosed to improve surgical access. Ketcha’s method extends LevelCheck to such scenarios through a “multi-scale” registration process called msLevelCheck. The multi-scale method begins with a (rigid) LevelCheck initialization and proceeds through a region-of-interest pyramid to successively smaller segments and, in the final stage, individual vertebrae. The result is a deformable transformation of vertebral labels from the preoperative 3D image to the intraoperative 2D image. Ketcha’s paper shows the algorithm to be accurate and robust in laboratory phantom studies across a broad range of spinal curvature and includes the first clinical testing of the msLevelCheck approach in images of actual spine surgery patients.

Previous approaches to such deformation rely on segmentation of structures in the 3D preoperative image – a potentially time-consuming step that adds workflow and a potential source of error – to effect a “piece-wise rigid” registration of individual segmented structures. The msLevelCheck approach requires no such segmentation, operating instead directly on image intensities and gradients, with registration becoming increasingly “local” through the multi-scale process to yield a global deformation of the vertebral labels. The algorithm was shown to accurately label vertebrae within a few mm of expert-defined reference labels, offering a potentially useful tool for safer spine surgery.
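A minimal sketch of the multi-scale idea, with `register_rigid` as a hypothetical stand-in for the underlying 3D-2D (LevelCheck-style) rigid registration of a labeled subvolume to the radiograph; the published algorithm differs in its details:

```python
def ms_level_check(volume, radiograph, labels, register_rigid):
    """Multi-scale vertebral labeling sketch (not the published implementation).

    A single rigid registration of the whole labeled region is refined by
    recursively splitting the region of interest and re-registering each
    half (warm-started from its parent) until every vertebra carries its
    own rigid transform, i.e. a piecewise-rigid, globally deformable
    mapping of 3D labels into the 2D radiograph.
    """
    labels = tuple(labels)
    segments = [(labels, register_rigid(volume, radiograph, labels, init=None))]
    while any(len(seg) > 1 for seg, _ in segments):
        refined = []
        for seg, tform in segments:
            if len(seg) == 1:
                refined.append((seg, tform))
                continue
            half = len(seg) // 2  # split the segment roughly in half
            for sub in (seg[:half], seg[half:]):
                refined.append(
                    (sub, register_rigid(volume, radiograph, sub, init=tform)))
        segments = refined
    # one rigid transform per vertebra
    return {seg[0]: tform for seg, tform in segments}
```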

Read the full paper:
M D Ketcha, T De Silva, A Uneri, M W Jacobson, J Goerres, G Kleinszig, S Vogt, J-P Wolinsky, and J H Siewerdsen, “Multi-stage 3D–2D registration for correction of anatomical deformation in image-guided spine surgery,” Phys Med Biol 62: 4604–4622 (2017).

Alisa Brown – a STAR at the I-STAR Lab

May 13, 2017

Alisa Brown was awarded a STAR (Summer Training and Research) Program scholarship for her research on the development of new rigid-body marker designs for surgical tracking and navigation. Alisa began research at the I-STAR Lab in 2016 using 3D printing to produce new marker tools for image-guided neurosurgery and orthopaedic surgery. Her research includes the development of a large, open-source library of marker designs to facilitate research in image-guided surgery, particularly for systems involving multiple tracked tools – for example, a surgical pointer, endoscope, ultrasound probe, C-arm, and/or patient reference marker. Alisa is a rising senior in the Department of Biomedical Engineering at Johns Hopkins University.

Zbijewski Leads New Program for Imaging of Bone Health

April 28, 2017

No bones about it: Dr. Wojciech Zbijewski’s research is breaking new ground in imaging technology and advancing the clinical understanding of conditions affecting the bones and joints. Dr. Z (“Wojtek”) and his team are developing new imaging methods that break conventional barriers to spatial resolution, give quantitative characterization of bone morphology, and shed new light on diseases such as osteoarthritis, rheumatoid arthritis, and osteoporosis. Underlying such advances are new methods for 3D imaging at spatial resolution beyond that of conventional computed tomography (CT).

Wojtek and colleagues are combining high-resolution CMOS detectors for cone-beam CT with advanced model-based 3D image reconstruction methods in an NIH R01 project that aims to resolve subtle changes in subchondral trabecular bone morphology as a sign of early-stage osteoarthritis (OA). Conventional methods detect OA only at later stages of cartilage degeneration and bone erosion, when treatment options are limited and often require joint replacement. Collaborators include Dr. Xu Cao (Orthopaedic Surgery) and Dr. Shadpour Demehri (Radiology).

Other NIH-funded research in collaboration with Dr. Carol Morris (Orthopaedic Surgery and Radiation Oncology) aims to quantify changes in bone quality following radiation therapy to identify early signs of fracture risk.

In collaboration with the US Army Natick Soldier Research, Development, and Engineering Center (NSRDEC), Wojtek’s team is developing tools for quantitative image analysis of joint morphology in correlation with injury risk factors (for example, ACL injury) – tools that have in turn driven new methods for automatic characterization of joint morphology for a variety of musculoskeletal (MSK) radiology applications.

In partnership with Carestream Health, the team works closely with Dr. Shadpour Demehri (Hopkins Radiology), Dr. Greg Osgood (Hopkins Orthopaedic Surgery), and Dr. Lew Schon (Union Memorial Orthopaedic Surgery) to understand patterns of traumatic injury repair and fracture healing – pushing the limits of cone-beam CT spatial resolution and quantitative capability.

Dr. Zbijewski is a faculty member in the Department of Biomedical Engineering at Johns Hopkins University, with laboratories based at Johns Hopkins Hospital: the I-STAR Laboratory and the Carnegie Center for Surgical Innovation.

Ultrasound + Cone-Beam CT Guidance: Paper by Eugenio Marinetto in CMIG

April 22, 2017

A paper published in the journal Computerized Medical Imaging and Graphics (CMIG) reports the integration of C-arm cone-beam CT with a low-cost ultrasound imaging probe for needle interventions such as biopsy, tumor ablation, and pain management. The research reports a rigorous characterization of imaging performance for the ultrasound probe (Interson Vascular Access probe), including spatial resolution and contrast-to-noise ratio measured as a function of frequency and depth of field. The work also integrates the ultrasound probe via the PLUS Library for ultrasound-guided interventions, using a 3D-printed geometric calibration phantom and a Polaris Vicra tracking system. The accuracy of image registration between ultrasound and cone-beam CT was ~2–3 mm at the needle tip, with further improvement anticipated from enhancements to ultrasound image quality. The work also demonstrates the potential for multi-modality (ultrasound-CBCT) deformable image registration using normalized mutual information (NMI), normalized cross-correlation (NCC), or modality-independent neighborhood descriptor (MIND) similarity metrics.

The paper was first-authored by Dr. Eugenio Marinetto as part of his doctoral dissertation on advanced image-guided interventions. The research was supported by NIH, an industry partnership with Siemens Healthcare, and a collaborative PhD student exchange program with University Hospital Gregorio Marañón and University Carlos III de Madrid.
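For reference, two of the similarity metrics named above can be written compactly. The NumPy sketch below is illustrative, not the paper’s code; metrics such as NMI or MIND are favored for ultrasound-CBCT registration because intensities in the two modalities are not linearly related:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-shape images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def nmi(a, b, bins=64):
    """Normalized mutual information, (H(A) + H(B)) / H(A, B),
    estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()           # joint probability
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)  # marginals
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # Shannon entropy
    return float((h(pa) + h(pb)) / h(pab))
```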