The I-STAR Labs

Imaging for Surgery, Therapy, and Radiology

Johns Hopkins University

Imaging Physics

3D Image Reconstruction

Image Registration

Many of the new imaging technologies and algorithms developed at I-STAR are based on physical models of imaging performance that quantify the imaging chain in terms of its spatial resolution and noise characteristics and bridge such characteristics to the performance of a particular imaging task. Such models provide a rigorous foundation for the development of new technologies and accelerate their translation to first clinical use. Physical models of system spatial resolution (modulation transfer function, MTF), noise-power spectrum (NPS), and task-based imaging performance (detectability index) have been developed for novel 3D imaging systems, photon counting detectors, and spectral imaging systems.
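As a concrete illustration of how these models come together, the sketch below computes a prewhitening-observer detectability index from radially averaged 1D MTF, NPS, and task-function curves (a common simplification of the full 2D/3D task-based framework). The curves and function names here are illustrative placeholders, not measured system data.

```python
import numpy as np

def detectability_index_pw(f, mtf, nps, w_task):
    """Prewhitening-observer detectability index from radially averaged curves:
        d'^2 = integral of |W_task(f)|^2 * MTF(f)^2 / NPS(f) * 2*pi*f df
    f      : spatial frequencies (cycles/mm), uniformly sampled 1D array
    mtf    : modulation transfer function sampled at f
    nps    : noise-power spectrum sampled at f
    w_task : task function (difference of object spectra) sampled at f
    """
    integrand = (np.abs(w_task) ** 2) * (mtf ** 2) / nps
    d2 = np.sum(integrand * 2.0 * np.pi * f) * (f[1] - f[0])  # radial integral (2D)
    return np.sqrt(d2)

# Illustrative (made-up) system curves and a low-frequency detection task:
f = np.linspace(0.01, 2.0, 200)          # cycles/mm
mtf = np.sinc(f / 2.0) ** 2              # toy MTF model
nps = 1e-3 * (0.2 + mtf ** 2)            # toy quantum + correlated noise
w_task = np.exp(-(f / 0.5) ** 2)         # Gaussian task function
print(f"d' = {detectability_index_pw(f, mtf, nps, w_task):.1f}")
```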

Novel 3D image reconstruction methods are under development to improve image quality and reduce radiation dose in CT and cone-beam CT. Model-based image reconstruction (MBIR) methods developed at I-STAR include: advanced system models for the resolution and noise characteristics of the imaging system; incorporation of prior information, including previous scans of the patient and/or knowledge of devices within the patient; and "task-driven" reconstruction methods that tune image quality according to the spatial frequency content of structures of interest. Algorithms include a general penalized likelihood estimation (PLE) framework, prior-image deformably registered reconstruction (PIRPLE), and "known-component" reconstruction (KC-Recon). Research is underway to accelerate these methods to runtimes consistent with clinical use and to translate them to first clinical studies.
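In its simplest form, the penalized-likelihood estimation at the core of these methods maximizes the Poisson log-likelihood of the transmission data minus a roughness penalty. The toy sketch below, with a small dense "system matrix," a quadratic first-difference penalty, and plain gradient ascent, is only meant to show the structure of the objective; the actual MBIR framework uses far more advanced system models, penalties, and optimizers.

```python
import numpy as np

def pl_gradient_ascent(A, y, b, beta=1e-2, step=1e-5, n_iter=500):
    """Toy penalized-likelihood reconstruction for transmission data.
    Model: y_i ~ Poisson(b_i * exp(-[A mu]_i)); maximize
        Phi(mu) = sum_i [ y_i*log(ybar_i) - ybar_i ] - beta * R(mu)
    where R is a quadratic roughness penalty on neighboring voxels.
    A : (n_rays, n_voxels) system matrix (stand-in for a real forward projector)
    y : measured counts;  b : unattenuated (air) counts
    """
    n_vox = A.shape[1]
    mu = np.zeros(n_vox)
    for _ in range(n_iter):
        ybar = b * np.exp(-A @ mu)                 # expected counts
        grad_like = A.T @ (ybar - y)               # Poisson log-likelihood gradient
        grad_pen = np.zeros(n_vox)                 # first-difference penalty gradient
        grad_pen[:-1] += 2.0 * (mu[:-1] - mu[1:])
        grad_pen[1:]  += 2.0 * (mu[1:] - mu[:-1])
        mu = np.clip(mu + step * (grad_like - beta * grad_pen), 0.0, None)
    return mu

# Simulated example: a 16-voxel "phantom" with an attenuating second half.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 0.1, size=(64, 16))
mu_true = np.concatenate([np.zeros(8), 0.02 * np.ones(8)])
y = rng.poisson(1e4 * np.exp(-A @ mu_true))
print(np.round(pl_gradient_ascent(A, y, b=1e4), 3))   # estimate trends toward mu_true
```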

Multi-modality image registration is an important aspect of image-guided intervention as well as emerging data-intensive methods for extracting multiparametric information from large image datasets. The I-STAR Lab develops and applies new multi-dimensional, multi-modality image registration methods well suited to particular clinical applications and tasks. Examples include: a generalized Demons framework for deformable registration in cone-beam CT; free-form registration that accounts for the motion of rigid bodies within deformable tissue and the "mismatch" of information between images (for example, intraoperative images containing an implant and preoperative images that do not); and 3D-2D registration methods for target localization and decision support in the OR (for example, the LevelCheck algorithm for spine surgery). Extending these methods to large image datasets enables novel biomedical data science for predictive modeling.
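The core idea of 3D-2D registration can be sketched compactly: search over pose parameters for the transformation whose simulated radiograph (digitally reconstructed radiograph, DRR) of the 3D image best matches the measured 2D image. The toy version below assumes a parallel-beam DRR, normalized cross-correlation as the similarity metric, and a derivative-free optimizer; the methods developed at I-STAR use more realistic projection geometries, similarity metrics, and optimization strategies.

```python
import numpy as np
from scipy import ndimage, optimize

def drr(volume, pose):
    """Toy parallel-beam DRR: rotate the volume out of the detector plane,
    shift it in-plane, and integrate along axis 0 (a stand-in projector)."""
    angle, tx, ty = pose
    rotated = ndimage.rotate(volume, angle, axes=(0, 1), reshape=False, order=1)
    shifted = ndimage.shift(rotated, (0.0, tx, ty), order=1)
    return shifted.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two 2D images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_3d2d(volume, radiograph, pose0=(0.0, 0.0, 0.0)):
    """Estimate the pose whose DRR best matches the radiograph (maximize NCC)."""
    cost = lambda p: -ncc(drr(volume, p), radiograph)
    return optimize.minimize(cost, x0=np.asarray(pose0), method="Powell").x

# Simulated example: recover a known pose of a simple block phantom.
vol = np.zeros((32, 32, 32)); vol[12:20, 10:22, 14:18] = 1.0
target = drr(vol, pose=(8.0, 2.0, -3.0))       # "measured" radiograph
print(register_3d2d(vol, target))              # expected to land near (8, 2, -3)
```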

Novel Imaging Systems

Deep Learning and Image Analysis

Image-Guided Surgical Robotics

Among the most challenging and exciting topics in medical imaging research is the development of advanced imaging technologies for guiding diagnostic and therapeutic procedures. With applications ranging from radiation therapy to interventional procedures and surgery, the importance of bringing high-performance imaging into the arena of therapy is clear. The development of such novel technologies also raises a number of critical research questions, ranging from issues of basic image science to integration with the therapeutic procedure.

Research in surgical data science develops algorithms that automatically compute image analytics from perioperative images of patients undergoing medical interventions. A data-intensive framework called SpineCloud incorporates a number of image registration and segmentation tools to extract high-level features from image data. Example analytics include: (i) automatic segmentation of spine anatomy from 3D images; (ii) 3D-2D registration methods for mapping information from CT or MRI to x-ray radiographs; (iii) 3D localization of interventional devices from 2D radiographic views; (iv) automatic definition of reference trajectories / acceptance windows for spinal instrumentation; and (v) automatic extraction of local and global spinal curvature measurements. The framework also includes machine learning algorithms that model and predict patient outcomes based on such data and image analytics.
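The last step of that pipeline, predicting outcomes from image-derived analytics, follows a familiar machine-learning pattern: a per-patient table of analytics is fed to a cross-validated classifier. The sketch below is a generic illustration with entirely synthetic placeholder features and labels; it is not SpineCloud's actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature table: one row per patient, columns are image-derived
# analytics (e.g., curvature measures, trajectory deviations); values are synthetic.
rng = np.random.default_rng(0)
n_patients = 200
features = rng.normal(size=(n_patients, 5))
outcome = (features[:, 0] + 0.5 * features[:, 2]
           + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, outcome, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```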

New imaging and image registration methods developed at the I-STAR Lab open new avenues for precision and accuracy in image-guided robotics. Unlike many existing robotic solutions that rely on conventional surgical tracking and rigid registration to preoperative imaging, intraoperative image guidance offers a simplified workflow that is resilient to changes imparted by tissue deformation. Intraoperative imaging with quality sufficient to visualize and target anatomy and to drive accurate rigid/deformable registration can take fuller advantage of high-precision robotics, with added potential benefits to patient safety through elimination of hand tremor, enforcement of remote center-of-motion at ports of entry, force-limiting safeguards, elimination of the fulcrum effect, micro-scaling of movements, and path planning/optimization. General-purpose prototype robotic platforms are used in cadaver studies to evaluate geometric accuracy and workflow and to establish a testbed for translation to future clinical studies.

Image-Guided Interventions

Diagnostic Imaging

Quantitative Imaging

Investigators in the I-STAR Lab have worked on the development of novel imaging systems that are now standard of care in image-guided interventions, including cone-beam CT guidance of radiotherapy and mobile C-arms capable of high-quality 3D imaging in the OR. Such research extends well beyond the development of new hardware systems to include registration of multi-modality image data, integration with surgical navigation systems, and safe, streamlined implementation within clinical workflow.

The I-STAR Lab is a development and proving ground for novel CT and cone-beam CT prototype scanners. Systems under development include dedicated scanners for quantitative, high-resolution musculoskeletal / orthopaedic imaging; mobile scanners for imaging of traumatic brain injury and stroke at the point of care; and emerging detector technologies for photon counting, spectral imaging, and improved spatial resolution. Research includes system design and modeling, optimization, 3D image reconstruction algorithms, and translation to first clinical studies.
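As a simple illustration of the kind of computation that spectral and photon-counting acquisitions enable, the sketch below performs image-domain two-material decomposition: attenuation measured at two effective energies is solved voxel-wise for two basis-material densities. The basis-material coefficients here are illustrative placeholders, not tabulated values, and the actual processing chains for these systems are considerably more involved.

```python
import numpy as np

# Illustrative (not tabulated) mass attenuation coefficients [cm^2/g] of two
# basis materials at two effective energies; real values come from standard tables.
M = np.array([[0.25, 0.35],    # low-energy row:  [water-like, calcium-like]
              [0.20, 0.25]])   # high-energy row: [water-like, calcium-like]

def two_material_decomposition(mu_low, mu_high):
    """Solve [mu_low, mu_high]^T = M @ [rho_1, rho_2]^T voxel-wise for the
    basis-material densities [g/cm^3]."""
    mu = np.stack([np.ravel(mu_low), np.ravel(mu_high)])   # (2, n_voxels)
    rho = np.linalg.solve(M, mu)                           # (2, n_voxels)
    return (rho[0].reshape(np.shape(mu_low)),
            rho[1].reshape(np.shape(mu_low)))

# Example: one voxel measuring 0.28 cm^-1 (low kV) and 0.215 cm^-1 (high kV)
rho_1, rho_2 = two_material_decomposition(np.array([0.28]), np.array([0.215]))
print(rho_1, rho_2)    # roughly 0.7 and 0.3 g/cm^3 of the two bases
```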

The focus of this research area is on novel technologies to derive accurate anatomical and physiological measurements from medical images. We are particularly interested in the development of new quantitative biomarkers for orthopedics. This application spans a broad range of spatial scales, features, and processes, from assessment of bone microstructure (~100 µm) to measurement of bone mineral density to evaluation of joint alignment under physiological load. Our team works on optimization of imaging systems and algorithms to achieve accurate quantification in all of these tasks. For example, we have developed an ultra-high-resolution imaging chain for an orthopedic CT system to enable in vivo measurement of bone microstructure. Another major area of interest involves automated methods to extract quantitative information from images, including anatomical measurements, 3D joint space mapping, and shape analysis.
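For bone mineral density in particular, a common approach is to calibrate CT numbers against a phantom containing inserts of known hydroxyapatite-equivalent density and then apply the resulting linear mapping to regions of interest in the patient image. The sketch below uses made-up phantom measurements purely for illustration.

```python
import numpy as np

# Hypothetical calibration-phantom data: known insert densities [mg/cm^3] and the
# mean CT numbers [HU] measured within them (values are illustrative only).
known_density = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
measured_hu   = np.array([2.0, 45.0, 88.0, 170.0, 335.0])

# Linear calibration HU -> density via ordinary least squares.
slope, intercept = np.polyfit(measured_hu, known_density, deg=1)

def hu_to_bmd(hu):
    """Map CT numbers to bone mineral density using the phantom calibration."""
    return slope * np.asarray(hu) + intercept

print(hu_to_bmd([120.0, 250.0]))   # estimated BMD [mg/cm^3] for two ROIs
```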