Image-Guided Surgery and Therapy

The BIRC research facility and Program in Image-Guided Interventions (IGI) involve over 50 research staff and students and have significant infrastructure support ($16M) from CFI, ORDCF, and the ORF. Through a grant from the Whitaker Foundation, this program stimulated the establishment of a Graduate Program in Biomedical Engineering at Western that combines the strengths of four faculties. The program continues to grow through collaborations with scientists and clinicians in medical imaging, radiation oncology, neurosurgery, cardiology and cardiothoracic surgery, urology, abdominal surgery, robotics, biomedical and electrical engineering, and human-machine interface design. (There is another section on Image-Guided Surgery and Therapy under the Cancer section starting on Page 21.)

The program develops and tests revolutionary surgical techniques that treat disease while causing less harm to the patient, in a range of procedures from heart valve replacement to tumour removal in various organs. Through collaboration with Canadian Surgical Technologies and Advanced Robotics (CSTAR), our researchers are also developing new procedure simulations to assist in surgical planning and in training surgeons on minimally invasive surgical systems such as the da Vinci robot (Intuitive Surgical Inc.), and are designing novel mechatronic devices (hybrid systems incorporating mechanical, electrical, and software components).

In the past, progress towards implementing minimally invasive interventions has been hampered by the lack of adequate imaging for navigation to the operative site and a lack of effective instruments to administer the therapy. Applying improved imaging technologies, together with the intelligent use of virtual-reality techniques, can remove much of the trauma associated with surgical access by guiding therapeutic interventions with high-quality, real-time imaging.

Image-guided Neurosurgery

Epilepsy

Unlike other neurological diseases in which the same brain structures are involved across patients, people suffering from temporal lobe epilepsy constitute a very heterogeneous group. The epilepsy research team is developing novel algorithms that use the latest developments in magnetic resonance imaging (MRI) to identify pathologies unique to each individual. This is particularly important for patients whose imaging scans show no evidence of brain lesions or abnormalities. In the near future, these techniques will streamline clinical assessment, contributing to less invasive, patient-specific surgical plans.

The correct lateralization of seizures (left or right hemisphere) is one of the many challenges clinicians face every day in the treatment of the disease. Using state-of-the-art machine learning methods that analyze patient image databases, the epilepsy team is creating non-invasive tools that can determine the hemisphere of seizure onset. These tools have the potential to reduce the time that the patient needs to stay in the hospital for monitoring and other diagnostic tests.
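
A minimal sketch of how such a lateralization classifier might look (the team's actual features and models are not detailed here; the cohort size, features, and labels below are placeholder assumptions):

```python
# Sketch: predicting the hemisphere of seizure onset from imaging features.
# X would hold precomputed per-patient features (e.g., volumetric or
# relaxometry asymmetries); here it is random placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients, n_features = 60, 12
X = rng.normal(size=(n_patients, n_features))  # imaging features per patient
y = np.tile([0, 1], n_patients // 2)           # 0 = left onset, 1 = right onset

# Standardize features, fit a linear SVM, and report cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"lateralization accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```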

MRI is also used to compare individual patients against a set of control volunteers, looking for abnormalities that can be correlated with the lateralization and localization of the networks involved in the onset and propagation of seizures.
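
One standard way to phrase such a patient-versus-controls comparison is voxel-wise z-scoring against the control distribution. The sketch below uses synthetic stand-ins for coregistered feature maps and is illustrative only, not the team's specific pipeline:

```python
# Sketch: voxel-wise abnormality mapping against a control cohort.
import numpy as np

rng = np.random.default_rng(2)
controls = rng.normal(size=(25, 64, 64))   # feature maps from 25 controls
patient = rng.normal(size=(64, 64))        # coregistered patient map
patient[30:34, 30:34] += 4.0               # simulated focal abnormality

# z-score each voxel relative to the control mean and standard deviation.
mu, sd = controls.mean(axis=0), controls.std(axis=0, ddof=1)
z = (patient - mu) / sd
abnormal = np.abs(z) > 3.0                 # conservative outlier threshold
print(f"{abnormal.sum()} voxels flagged as abnormal")
```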

Surgical Planning

Planning is an essential aspect of any surgical intervention, with the goal of reducing intra- and post-operative complications while preserving healthy tissues and their functional status. Planning often requires continuous and simultaneous spatial understanding of pre-operatively acquired medical images and mental transformation of that information into the patient coordinate system. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on the former, providing means for visualizing medical images but lacking mechanisms to facilitate mental transformation. Thus, interventionalists often rely on their previous experience and intuition as their sole guide when performing this mental transformation, leading to longer operation times or an increased likelihood of error under additional cognitive demands.

We have introduced a mixed augmented/virtual reality system that facilitates planning by alleviating the difficulty of mental transformation. The system is designed and evaluated with human factors explicitly in mind, resulting in a more user-friendly and less cognitively demanding environment than traditional planning tools.

In this system, neurosurgeons are equipped with special augmented-reality glasses through which internal structures extracted from preoperative images can be visualized on a head phantom or the patient's skull. This method of visualization and interaction allows surgeons to plan their intervention intuitively with minimal cognitive demand, which, in turn, leads to reduced intervention time and improved patient outcomes.
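
Overlaying preoperative structures on the physical head presupposes a registration between image space and patient space. As one illustrative building block (not necessarily this system's actual method), a landmark-based rigid registration can be computed in closed form from matched fiducial points:

```python
# Sketch: closed-form (Kabsch/SVD) rigid registration of matched landmarks,
# mapping image-space fiducials onto the same points located on the skull.
import numpy as np

def rigid_register(src, dst):
    """Return R, t minimizing ||R @ src_i + t - dst_i|| over matched points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

# Fiducials picked in image space vs. the same points on the patient's head.
image_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
patient_pts = image_pts @ true_R.T + np.array([10.0, -5.0, 2.0])
R, t = rigid_register(image_pts, patient_pts)
print(np.allclose(image_pts @ R.T + t, patient_pts))  # True
```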

Image-Guided Intra-Cardiac Surgery

Image Guidance Platform for Beating Heart Mitral and Aortic Valve Repair

Building on their past accomplishments in image guidance for off-pump mitral valve repair, this team is continuing development of augmented reality guidance technologies.

One of the main components of this work is the process of automatically finding and tracking the mitral valve in ultrasound images during the surgical procedure. This technology will greatly improve the overall safety of procedures such as mitral valve repair.
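
Clinical systems typically use more sophisticated, often learned, detectors; the essence of frame-to-frame valve tracking can nevertheless be illustrated with normalized cross-correlation template matching. Everything below is synthetic:

```python
# Sketch: tracking a region between ultrasound frames by template matching.
import numpy as np
import cv2

rng = np.random.default_rng(1)
frame0 = rng.random((200, 200)).astype(np.float32)  # stand-in US frame
template = frame0[80:112, 90:122].copy()            # valve region at t = 0

# Next frame: the same content shifted by a few pixels (simulated motion).
frame1 = np.roll(np.roll(frame0, 3, axis=0), -2, axis=1)

# Find the best match for the template in the new frame.
scores = cv2.matchTemplate(frame1, template, cv2.TM_CCOEFF_NORMED)
_, _, _, (x, y) = cv2.minMaxLoc(scores)
print(f"valve region moved to (row={y}, col={x})")  # ~ (83, 88)
```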

Left: Identifying the mitral valve directly in ultrasound. Right: Using pre-operative images (CT) to identify the mitral valve in the operating room.

Cardiac Phantoms

Over the past ten years, more than 50,000 transcatheter aortic valve replacements have been performed in 40 countries. Devices such as the MitraClip and NeoChord have been approved for use in humans in Europe within the past five years. Over the past year, we have developed a physical simulator of the left ventricle and its valves.

This device is currently being used to accelerate research into new image guidance strategies for beating heart valve therapies.

Left: Dynamic heart phantom. Right: Ultrasound of heart phantom.

Simulation System

Surgical simulation brings together cutting-edge 3D computer graphics and highly realistic physics modeling to create lifelike, immersive virtual surgical environments. Within these environments, users are able to fully interact with the objects in the scene, such as surgical tools, tissues, and organs. Through the use of haptic, or force-feedback, devices, they are also able to "feel" and experience the tactile sensation of objects coming into contact with one another. Recent advances in computational hardware and algorithm development have contributed to yet another facet of the technology: the modeling of surgical techniques. Procedures such as suturing, cauterizing, cutting, and clipping are now all possible within the simulated environment. Combined, these elements create a powerful tool for clinical training and scientific research.

Ongoing work within the BIRC research facility, in conjunction with SenseGraphics AB (Stockholm, Sweden) and NeoChord Inc. (Eden Prairie, MN, USA), is leading to the development of a simulator that mimics a novel technique for minimally invasive repair of mitral valve regurgitation. The prototype simulator features a complex, realistically textured, anatomically accurate 3D model of a beating human heart, derived from computed tomography (CT), that is used to synthesize virtual trans-esophageal echocardiography (TEE) ultrasound (US) images in real time. The user controls the ultrasound probe and interacts with the simulator via a GUI or a haptic device, which can be moved interactively along a virtual esophagus. The ultrasound plane is displayed in a classical 2D view familiar to clinicians, but can also be displayed in the context of the 3D heart model to aid in teaching and training. A second haptic device controls a custom tool used in the mitral valve repair procedure, in which a valve leaflet must be captured while in motion; the simulation incorporates computational physics modeling of several complex surgical processes.
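
At the core of synthesizing a virtual TEE view from a CT-derived model is resampling the 3D volume along the current ultrasound plane. A minimal sketch of that resampling step, with all geometry and data as placeholder assumptions:

```python
# Sketch: sampling an oblique 2D "ultrasound" plane out of a 3D volume.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(3)
volume = rng.random((128, 128, 128))   # stand-in CT-derived volume

# The imaging plane is defined by an origin and two in-plane unit vectors,
# which in the simulator would follow the tracked virtual probe pose.
origin = np.array([64.0, 64.0, 20.0])
u = np.array([1.0, 0.0, 0.0])          # lateral direction
v = np.array([0.0, 0.7071, 0.7071])    # tilted elevational direction

# Sample a 64x64 grid of points on the plane and interpolate the volume.
ii, jj = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
pts = (origin[:, None, None]
       + u[:, None, None] * (ii - 32)
       + v[:, None, None] * jj)
slice_img = map_coordinates(volume, pts, order=1)  # trilinear sampling
print(slice_img.shape)                             # (64, 64)
```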

Recent developments have generated interest in the use of the system as a TEE training device for echocardiographers. Part of the future strategy will include adapting the simulator to take advantage of this unique opportunity and increase the overall net benefits of this innovative system.

Simulation of a mitral valve repair procedure displaying a virtual NeoChord device (foreground)

Image-Guided Needle Interventions

Needle interventions are a common component of many medical procedures, ranging from diagnostics to anesthesia to surgical access. Examples include lumbar puncture, epidural anesthesia, and central venous catheterization. Image guidance for accurate needle placement is essential to improving the success of these interventions and minimizing their adverse events.

For example, epidural injection of spinal anesthetic is among the most challenging tasks in anesthesia. If advanced too far, the needle could damage the spinal cord, causing temporary or permanent neurological complications. Currently, these interventions are performed blindly, that is, based solely on the experience and intuition of the anesthetist without real-time image guidance. Another common needle intervention is facet joint injection. Since many of these are fluoroscopy-guided, radiation exposure to the patient and the physician is a major consideration. As a result, there has been interest in using ultrasound to provide image guidance for both epidural and facet injections, mainly because of its real-time display, accessibility, and lack of ionizing radiation. In addition, tracking technologies, such as magnetic tracking systems, allow for rapid, precise measurements of tool location within a three-dimensional (3D) environment. By tracking the ultrasound probe and needle, the surgical scene can be displayed in a virtual-reality environment, improving visualization of the needle and interpretation of ultrasound images.
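
The geometry behind such a display reduces to composing rigid transforms: the tracker reports each tool's pose, and a one-time calibration relates the ultrasound image to the sensor mounted on the probe. A minimal sketch with placeholder values (not those of any particular system):

```python
# Sketch: mapping an ultrasound pixel into tracker (world) coordinates.
import numpy as np

# Probe pose reported by the magnetic tracker (placeholder pose).
T_world_probe = np.eye(4)
T_world_probe[:3, 3] = [100.0, 20.0, -30.0]

# Image-to-probe calibration: pixel spacing plus the offset from the image
# origin to the tracked sensor, determined once by a calibration procedure.
mm_per_px = 0.2
T_probe_image = np.diag([mm_per_px, mm_per_px, 1.0, 1.0])
T_probe_image[:3, 3] = [-15.0, 0.0, 5.0]

# A pixel (u, v) in the B-mode image, in homogeneous coordinates.
pixel = np.array([320.0, 240.0, 0.0, 1.0])

# Compose world <- probe <- image; a tracked needle tip is mapped into the
# same frame, so probe, needle, and image can be rendered in one scene.
world_pt = T_world_probe @ T_probe_image @ pixel
print(world_pt[:3])  # pixel position in tracker coordinates (mm)
```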

More recently, this program, in collaboration with Sunnybrook Research Institute, has developed a novel approach to needle interventions by incorporating a small single-element ultrasound transducer and a magnetic sensor at the tip of the needle. This approach will allow for better detection of deep-seated anatomy such as the epidural space and the facet joint. The single-element transducer will provide information about the distance of the target from the needle tip as well as the boundaries between different tissue types in front of the needle. The position and orientation of the needle will be obtained from the magnetic sensor housed inside the needle as it is being inserted. This information can be displayed in a virtual reality environment to enhance navigation.
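
The distance information from the single-element transducer follows the standard pulse-echo relationship: an echo returning after time t places a reflector at depth c*t/2 (round trip). A minimal worked example, assuming the nominal soft-tissue sound speed:

```python
# Sketch: converting round-trip echo time to reflector depth.
SPEED_OF_SOUND = 1540.0  # m/s, nominal value for soft tissue

def echo_depth_mm(echo_time_us: float) -> float:
    """Depth (mm) of a reflector given round-trip echo time (microseconds)."""
    return SPEED_OF_SOUND * (echo_time_us * 1e-6) / 2.0 * 1000.0

# An echo at 26 us puts an interface about 20 mm ahead of the needle tip.
print(f"{echo_depth_mm(26.0):.1f} mm")  # 20.0 mm
```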

We have also developed an ultrasound-guided, navigated needling system for central venous catheterization for clinical use. By magnetically tracking the surgical needle and ultrasound transducer, and displaying a 3D rendition of the surgical scene in an augmented-reality environment, the anesthetist is able to visualize the relative locations of critical structures (such as the carotid artery) and the needle, and thus insert the needle into the jugular vein safely. This system has obtained Health Canada approval and is currently undergoing human trials at the University Hospital.

Ultrasound-guided epidural needle intervention using a single-element ultrasound transducer embedded in the needle tip. The green dotted line shows the needle trajectory, and the red circle indicates the estimated location of the epidural space, calculated from the data received by the transducer. This information is presented to the anaesthesiologist in a virtual reality environment to improve visualization and needle placement.

Image-Guided Minimally Invasive Lung Cancer Treatment

Removal of a lobe of the lung (lobectomy) or a lobular segment (segmentectomy) is the treatment of choice for lung cancer, with wedge resection and ablative therapies being performed on patients with poor pulmonary function. These therapies aim to preserve as much healthy parenchyma as possible while surgically removing or damaging the cancerous tissue. To reduce the trauma of open surgery, these procedures are now being performed minimally invasively using a VATS (Video-Assisted Thoracic Surgery) approach, in which the surgeon's direct vision is replaced by a video camera and direct touch by long, slender instruments, making reliable and accurate resections or ablations difficult.

Accurate localization is the key to complete removal or ablation of tumours, but due to the large deformation of the lung from its preoperative state to the intraoperative, collapsed state, localization of occult tumours is challenging. In addition, intraoperative ultrasound has some limitations in application to the lung due to artifacts.

These limitations are being addressed by combining preoperative imaging with intraoperative sensing modalities. This is accomplished by developing techniques that fuse preoperative imaging with intraoperative ultrasound, along with novel minimally invasive approaches to tissue palpation based on tactile sensing and robotic/mechatronic assistance. The fused data can be visualized together with endoscopic video using sophisticated algorithms, providing an intuitive display that allows the surgeon to accurately localize and destroy cancerous tissue.

(a) Fused US and video, (b) bronchial tree segmented in a preoperative CT volume, (c) bronchial tree detected in a collapsed lung 3D US volume.

Image-guided Prostate Resection

Prostate cancer is the most frequently diagnosed malignancy in males in North America. Accurate treatment techniques for prostate cancer are crucial to diminishing disease-specific mortality while simultaneously enhancing oncological and functional outcomes. Image-guided prostate interventional techniques utilize pre- and intra-operative images to help surgeons distinguish subtle surface structures and to augment awareness of the organ's important surroundings. They usually overlay pre- and intra-operative images onto endoscopic camera views to provide an augmented-reality view for intuitively inspecting target or suspicious regions.

With an estimated 80% of prostatectomies in North America now performed robotically, image-guided prostate interventions are being extended to image-guided robotic prostate interventions, which will combine multimodal information processing with sensing and robotics to create a highly capable human-machine team. With such a team, aided by robotic stereoscopic vision, surgeons are expected to achieve better oncological outcomes and surgical accuracy than with current image-guided prostate interventions.

In the development of image-guided robotic prostate interventions, this BIRC team is working to solve a fundamental challenge of precise coordinated interaction that synchronizes the various image and surgical-tool spaces. In particular, they focus on techniques to spare the neurovascular bundles during robotic radical prostatectomy using several new approaches: (1) surgical video augmentation: specularity removal, video defogging, motion magnification, and 3D surface reconstruction; (2) initial registration: multimodal image fusion; (3) tracking: maintaining the registration through organ-deformation modeling and camera tracking; and (4) "firefly" (optical fluorescence contrast agent)-guided robotic prostate resection to optically enhance critical vessels.

Motion Magnification

In many image-guided surgical procedures, sparing vessels, nerves, and other sensitive tissue is a major challenge. Pulsation can provide a valuable cue when trying to identify certain tissues, such as arteries, but is often too subtle to be seen in endoscopic video. However, it is possible to process the video to detect and enhance the very subtle changes due to pulsatile motion. Several applications to image-guided interventions are being explored.
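
A minimal Eulerian-style sketch of the core idea: band-pass each pixel's intensity over time around the cardiac frequency and add the amplified band back. Production systems typically operate on spatial-pyramid or phase decompositions; all parameters and data below are assumptions:

```python
# Sketch: amplifying subtle pulsatile intensity changes in video.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_pulsation(frames, fps, lo=0.8, hi=2.0, alpha=20.0):
    """frames: (T, H, W) grayscale video; lo/hi (Hz) bracket the heart rate."""
    b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    band = filtfilt(b, a, frames, axis=0)  # temporal band-pass per pixel
    return frames + alpha * band           # amplify the pulsatile component

fps = 30.0
t = np.arange(90) / fps
# Synthetic clip: a static scene plus a faint 1.2 Hz "pulse" in one region.
frames = np.full((90, 32, 32), 100.0)
frames[:, 10:20, 10:20] += 0.2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

out = magnify_pulsation(frames, fps)
print(out[:, 15, 15].std() / frames[:, 15, 15].std())  # pulsation amplified
```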

Ultrasound-guided needle intervention on a phantom model. The dural pulsation is shown as a heat map (blue-red) indicating the strength of pulsation. The live ultrasound image, including dural pulsation, is displayed in relation to the preoperative spine model (green).

Robotic Prostatectomy

In robotic and laparoscopic prostatectomy, sparing the neurovascular bundles is important for improving patient outcome and quality of life. However, these bundles, which each consist of an artery and nerve, are difficult to see. Enhancing the subtle pulsation of these bundles improves their detection. Other important vessels and even very minor arterial bleeding can also be enhanced.

Endoscopic Third Ventriculostomy

Endoscopic Third Ventriculostomy is a neurosurgical procedure used to treat hydrocephalus that involves making a small hole in the floor of the third ventricle. The basilar artery lies just under the floor and severe complications occur if it is injured. In most cases the basilar artery is clearly visible but in a significant number of patients the ventricular floor is opaque and the artery cannot be seen. In these cases, it may still be possible to detect the pulsation of the basilar artery allowing the procedure to continue instead of having to abort the ventriculostomy and install a shunt.

Spine Needle Intervention

These techniques are not limited to endoscopic procedures. In epidural and spinal anesthesia, ultrasound is often used to determine the ideal needle trajectory and depth. However, interpreting the ultrasound images is difficult, especially locating the dura, which is essential for determining needle depth. Due to vascularization in the spinal cord, the dura pulses, providing a cue to the anesthesiologist, but this pulsation is visible in only approximately 30% of cases. Nearly imperceptible pulsation that would otherwise be missed can be automatically detected and displayed in an augmented-reality environment.

Medical Image Registration and Segmentation

The program has been working on collaborative, interdisciplinary projects, improving medical image registration and segmentation through the use of mathematical optimization and computer vision.

Registration of two MR images into the same 'space', as used in neurosurgical diagnostics and planning.

These techniques have allowed for the faster and more accurate image processing needed to bring pre-operative medical images into image-guided planning and surgical systems. The program has worked closely with the Computer Vision Group to develop and evaluate novel image processing algorithms including:

·  inter-modality deformable registration in MRI, CT, and 3D ultrasound (see the sketch after this list)

·  intra-modality deformable registration in MRI and CT for diagnostics

·  vessel tree segmentation and modeling in contrast enhanced CT and dual-energy CT

·  optimization-based segmentation automatically incorporating anatomical knowledge for heart, brain, prostate, vascular, and lung imaging
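
As a flavour of the registration work listed above, the sketch below performs an inter-modality rigid alignment with SimpleITK, the kind of initialization that precedes deformable registration. The metric, optimizer settings, and file names are illustrative assumptions, not the program's actual pipeline:

```python
# Sketch: rigid MR-to-CT alignment using mutual information (SimpleITK).
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_mr.nii.gz", sitk.sitkFloat32)    # assumed path
moving = sitk.ReadImage("moving_ct.nii.gz", sitk.sitkFloat32)  # assumed path

reg = sitk.ImageRegistrationMethod()
# Mattes mutual information tolerates the different intensity characteristics
# of MR and CT, which is what makes inter-modality registration possible.
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(resampled, "moving_in_fixed_space.nii.gz")
```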

The program is working towards ensuring that the underlying image processing required to incorporate pre- and peri-operative images is robust, fast, and capable of being translated into clinical practice for a variety of image-guided interventions.

Investigators
Dan Bainbridge, MD, Anaesthesia
Su Ganapathy, MD, Anaesthesia
Robert Bartha, PhD, Magnetic Resonance Imaging
Jorge Burneo, MD, Neurology
Michael Chu, MD, Cardiac Surgery
Sandrine DeRibaupierre, Neurosurgery
Maria Drangova, PhD, Cardiac Imaging
Roy Eagleson, Visualization, Psychophysics
Aaron Fenster, PhD, 3D Ultrasound Physics
Gerard Guiraudon, MD, Cardiac Surgery
Bob Kiaii, MD, Cardiac Surgery
Richard Malthaner, MD, Surgical Oncology
David McCarty, MD, Cardiology
Charles McKenzie, PhD, MRI
Seyed Mirsattari, MD, PhD, Neurology
Michael Naish, PhD, Mechatronics
Andrew Parrent, MD, Neurosurgery
Stephen Pautler, MD, Surgery
Rajni Patel, PhD, Mechatronics/Robotics, Haptics
Terry Peters, PhD, Image-guided Interventions
David Steven, MD, Neurosurgery
James White, MD, Cardiology

Collaborators
Lixu Gu, PhD, Cardiac Image Guidance, Shanghai Jiao Tong University and Hebei University, China
Stuart Foster, PhD, Ultrasound, Sunnybrook Research Institute, Toronto, ON
Purang Abolmaesumi, PhD, IGST, Image Processing, University of British Columbia, Vancouver, BC
Gabor Fichtinger, PhD, Real-time Image Guidance, Queen's University, Kingston, ON
Noby Hata, PhD, IGT, Visualization, Harvard University, Boston, MA
Kevin Cleary, PhD, Image-guided Interventions, Children's National Hospital, Washington, DC