Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

Annual Review of Biomedical Engineering

Vol. 20:221-251 (Volume publication date June 2018)
First published as a Review in Advance on March 5, 2018
https://doi.org/10.1146/annurev-bioeng-062117-120917

Xiongbiao Luo,1 Kensaku Mori,2 and Terry M. Peters3

1Department of Computer Science, Fujian Key Laboratory of Computing and Sensing for Smart City, Xiamen University, Xiamen 361005, China; email: [email protected]

2Department of Intelligent Systems, Graduate School of Informatics, Nagoya University, Nagoya 464-8601, Japan; email: [email protected]

3Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada; email: [email protected]

Abstract

Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.

Keywords

endoscopy, big data, image-guided intervention, surgical robotics, surgical navigation, augmented reality, artificial intelligence, deep learning, endoscopic vision, image registration, 3D printing

1. INTRODUCTION

As described by Marks & Dunkin (1), endoscopy has evolved over many generations since 1806, when Philipp Bozzini created a light conductor to visually inspect the bladder and rectum by using a concave mirror that reflected the light of a candle. Marks & Dunkin provide a brief history of endoscopy, from a simple tube with lenses and a light source to today's methods incorporating various surgical endoscopic platforms and technology.

In 1877, more than 70 years after Bozzini's invention, Maximilian Nitze devised a cystoscope/photo-endoscope that combined lenses and electric light to examine the bladder. In 1881, Johann von Mikulicz-Radecki performed a stomach examination using a gastroscope, which was the first instrument to have an integrated miniature light bulb at its distal tip (2). In 1887, using this instrument, Gustav Killian performed the first bronchoscopy, enhancing the illumination by using a small head mirror. In 1901, using Nitze's cystoscope and without pneumoperitoneum, Hans Jacobaeus performed the first celioscopy; Jacobaeus has been credited with many publications on endoscopic explorations of the abdomen and the thorax, including papers on laparoscopy in 1912 and on thoracoscopy in 1910 (3).

Endoscopy was restricted to a small number of enthusiasts until 1932, when Georg Wolf, a German manufacturer of rigid endoscopes, produced a semiflexible gastroscope. In 1945, Karl Storz began manufacturing endoscopic devices for ear, nose, and throat surgeons (4). In 1952, the British surgeon Harold Hopkins used cold light to illuminate the endoscopic field, given that warm light can damage the canals and cavities of the body; he also invented the fibroscope (5). In 1957, inspired by Hopkins's endoscope, Basil Hirschowitz, an academic gastroenterologist best known in the field for having invented an improved optical fiber, created the first flexible fiber optic endoscope (6), in collaboration with Larry Curtiss and C. Wilbur Peters. The thousands of glass fibers integrated into flexible endoscopes are essential for illuminating the endoscopic field and transmitting images that allow surgical procedures to be visualized. Following advances in camera and video technologies, digital video endoscopy was created in 1986 through the use of charge-coupled device image sensors, which promoted the development of modern endoscopy.

1.1. Motivation for Endoscopic Imaging

Modern endoscopy generally consists of intraoperative imaging systems and endoscopes (Figure 1). An endoscopic imaging system, such as the VISERA ELITE Platform (Olympus Corporation, Japan), usually contains a monitor, a video system center, and various light sources for different examination purposes. A typical endoscope comprises a light guide connector and tube, control body, insertion tube, and bending section with internal instrument channels. The endoscope's distal tip is generally integrated with working channels, light guides with illumination fibers, and video cameras with objective lenses. Various types of endoscopes (e.g., bronchoscope, gastroscope, colonoscope, laparoscope) can be inserted into the body through different natural orifices (e.g., mouth, nose, anus).

Figure 1

Endoscopic imaging is motivated by interventional diagnosis and treatment of a variety of different diseases and abnormalities that may otherwise go undetected. It provides on-site visualization of the operating field to assist surgeons in manipulating endoscopes and other surgical instruments to regions of interest during endoscopic interventions (Figure 2).

Figure 2

1.1.1. Endoscopic diagnosis.

Biopsy is a diagnostic test typically associated with cancer detection and staging. It is commonly performed by surgeons or interventional radiologists to extract sample cells or tissue for pathological assessment in order to help diagnose or identify areas of concern and determine whether or not tissues are cancerous. Types of biopsies available for clinical applications include needle biopsy, skin biopsy, and bone biopsy. Most of these procedures employ a sharp tool to remove a small amount of tissue from an area of concern. The surgeon usually selects the biopsy type in accordance with the condition and the area of the patient that requires closer examination.

Endoscopic biopsies are transluminal operations employed to reach target cells or tissues inside the body in order to collect samples from tubular anatomical structures, such as the bladder, colon, or airway tree. Such biopsies use endoscopes’ working channels to accommodate and guide surgical instruments to the approximate areas of concern, where sample tissues are removed from the body. Surgeons commonly perform endoscopic biopsies to diagnose or determine the presence or extent of various diseases, such as lung, esophageal, colorectal, or breast cancer.

Although endoscopic biopsies are widely used in clinical applications, their diagnostic yield depends on the precise localization of the areas of concern from which samples are acquired. In most endoscopic biopsies, suspicious tissues cannot be observed in endoscopic imaging, because they are usually hidden beyond the surfaces of tubular structures, meaning that surgeons can use only preoperative anatomical information and their surgical knowledge and skills to blindly puncture suspicious tissues in the operating room.

1.1.2. Endoscopic treatment.

Once a diagnosis of cancer is confirmed, staging and treatment follow. The primary strategies of cancer treatment generally involve surgery, radiotherapy, and targeted therapy, which can be employed separately or in combination. Endoscopic treatment encompasses both surgery and therapy. Whereas endoscopic surgery, also commonly known as minimally invasive surgery, typically refers to a surgical resection operation that directly removes cancerous tumors from the body, endoscopic therapy uses radiation or drug injection to kill tumor tissue in situ. Endoscopic therapy commonly employs laser ablation, which destroys problematic regions, including precancerous and cancerous tissue, at high temperatures in a matter of seconds.

Treatment options are usually determined by the types and stages of cancer. For cancers that are not diffuse, endoscopic surgery is the most common noninvasive or minimally invasive treatment; for example, surgical removal is most curative for colorectal cancer. Lung cancer is generally categorized into small cell or non–small cell for the purposes of treatment. While patients diagnosed with advanced-stage non–small cell lung cancer are usually treated with chemotherapy or targeted drugs, endoscopic resection and laser ablation are usually the treatments of choice for early-stage non–small cell lung cancers.

Regardless of whether the surgeon chooses surgical resection or laser ablation for treatment, the problematic tissue must be precisely targeted to obtain the optimal treatment outcome. The targeted region should be accurately associated with the localization of surgical instruments inside the patient. Accurate positioning of targeted regions and surgical instruments is therefore crucial for surgeons to perform successful endoscopic treatment. However, it remains a challenge to spatially and temporally determine those positions in various endoscopic procedures, particularly when the targets are outside the field of view of a rigid or flexible endoscope.

1.1.3. Remarks.

Endoscopic imaging is a widely used modality that enables screening, surveillance, diagnosis, and treatment of a wide range of diseases and disorders in a noninvasive or minimally invasive way. Endoscopic imaging enables endoscopic diagnosis and treatment such as biopsy, tumor resection, and laser ablation to be easily performed when anatomical targets and surgical tools inside the body can be visualized in endoscopic images. However, in most cases, such targets cannot be observed in endoscopic views or surgical fields during interventional endoscopy. This raises two fundamental issues: (a) where to go and (b) how to get there using various endoscopic procedures or interventions, where surgeons expect clear and intuitive visualization with precise and real-time localization of areas of concern and surgical instruments. These issues provide the motivation for researchers to develop various advanced endoscopic navigation (AEN) systems.

1.2. Navigation in Endoscopy

AEN represents various surgical concepts and approaches that use computer and information technologies for surgical planning and for guiding or performing interventional diagnosis and treatment. In endoscopic diagnosis and treatment, navigation is defined by two major questions posed by different clinical applications: Where are the anatomical targets, and how do surgeons safely and quickly reach them? This definition implies that navigation can accurately identify the position of anatomical targets and simultaneously enable surgeons to automatically learn where they are and how they should orient surgical instruments relative to the targets during endoscopic interventions.

AEN is being developed to address these important questions concerning surgical instruments and anatomical targets, and it is also a leading factor in the development of robot-assisted surgery. While AEN systems are promising, they have to process a variety of big data in different modalities, as discussed in Section 2.

2. SURGICAL BIG DATA

Medical diagnosis and treatment procedures commonly involve various modalities of surgical data. During the past decade, the volume of surgical data has increased tremendously, bringing us to the era of surgical big data. AEN systems involve surgical big data that are generally classified into three main categories: (a) preoperative imaging, (b) intraoperative imaging, and (c) external sensing. Each of these categories is discussed in the following subsections.

2.1. Preoperative Imaging

Preoperative imaging is a diagnostic technology that employs various specialized scanners to acquire digital data of anatomical structures in the body. Preoperative images are used to visually diagnose diseases and abnormalities, such as suspicious tissue or tumor changes, prior to an intervention. Similar scanners are also used to collect postoperative images to evaluate surgical performance and outcomes following treatment.

Various preoperative modalities are commonly used in diagnosis and surgical planning. Computed tomography (CT) and magnetic resonance (MR) imaging techniques are frequently employed in surgical navigation systems. Diffusion tensor imaging (DTI) is a relatively new modality that uses the diffusion of water molecules to enhance contrast in MR images so as to visualize the location, orientation, and anisotropy of the brain's white matter tracts (7). More recently, diffusion spectrum imaging (DSI) was developed to address the challenge of DTI-based tractography's inability to directly image multiple fiber orientations within a single voxel (8). Since DSI helps describe regions of white matter pathways, surgeons anticipate that it could eventually be employed to guide neuroendoscopy for accurate brain tumor resection (9).

Positron emission tomography (PET) imaging is rapidly increasing in popularity and has had a significant impact on patient management and survival outcomes, for example, by improving surgical treatment for lung cancer while avoiding inadvertent injury and by guiding surgical resection for colorectal cancer (10, 11). Whereas CT and MR are structural modalities, the integration of PET with CT or MR allows anatomic and metabolic information to be measured simultaneously. PET-CT- and PET-MR-guided interventions are increasingly being employed in cancer diagnosis and treatment.

2.2. Intraoperative Imaging

Intraoperative imaging allows surgeons to capture real-time views of the organ being operated on, as well as its anatomical surroundings, and enables more precise targeting during interventional procedures. Intraoperative imaging modalities are widely used to examine anatomical structures either on the surface of an organ or beneath it. Common intraoperative imaging modalities are discussed in the following subsections.

2.2.1. Endoscopic imaging.

Optical endoscopic imaging is indispensable for most minimally invasive surgical procedures and provides surgeons with continuous and direct real-time visualization of the surgical field, as well as intuitive manipulation of surgical tools. However, it also suffers from several bottlenecks, such as a relatively limited light source and field of view, rendering it incapable of inspecting anatomical structures outside tubular organs such as the esophagus or colon. Moreover, it is unable to visualize many useful details, such as neurovascular bundles and bleeding regions on the organ surface.

2.2.2. Cone beam computed tomography.

Cone beam computed tomography (CBCT), as its name implies, consists of a cone-shaped X-ray beam, generated by the X-ray source and detector (image intensifier or flat panel detector), that rotates around a field of interest and captures a cylindrical volume of data (12). While conventional CT forms a fan-shaped beam and is usually used for preoperative diagnostic imaging, CBCT is often employed in the operating room, particularly in craniofacial or maxillofacial imaging in dental surgery (13). Real-time and accurate registration between CBCT and endoscopic videos can guide skull base surgery (14), and CBCT integrated with angiographic imaging is a powerful technique for intraoperative localization of cerebral arteriovenous malformations (15). Currently, several commercial systems of CBCT, such as DynaCT (Siemens Medical Solutions, Germany), XperCT (Philips Medical Systems, Netherlands), and Innova CT (GE Healthcare, United States), are available for use in clinical applications.

2.2.3. Endoscopic ultrasound.

Ultrasound (US) imaging uses high-frequency sound waves to image soft tissue, enabling physicians to evaluate, diagnose, and treat medical conditions without the risk associated with exposure to ionizing radiation. A major advantage of this technique is its ability to capture images in real time, showing the motion of the organs as well as blood flowing through the blood vessels. Recently, ultrafast US, a new technology with frame rates typically faster than 1,000 frames/s, has been used for deep superresolution vascular imaging (16).

Endoscopic US (EUS) enables surgeons to image the interior structures and surroundings of anatomical organs during interventional endoscopy. The endoscope uses either a US transducer fixed to its distal tip (e.g., endobronchial US) or an ultrathin radial ultrasonic probe through its working channel (Figure 3). EUS is a fast-developing surgical area associated with advances in technology, resolution, and instrumentation, and it is increasingly being extended to applications in specialties such as laparoscopic resection for gastric tumors (17) and diagnosis and staging of lung cancer (18).

Figure 3

2.2.4. Intraoperative magnetic resonance imaging.

Intraoperative MR imaging is a relatively new modality that allows surgeons to monitor a surgical site using MR during surgery. Such a modality is most often employed in conjunction with neurosurgery, particularly with endoscopic transsphenoidal surgery for pituitary adenoma resection (19, 20), where it provides the surgeon with the ability to confirm fenestrations and biopsies, detect complications, and redefine anatomical changes during the operation. More recently, a study combining intraoperative MR imaging with neuronavigation demonstrated that such an imaging technique can improve the surgical outcome of endoscopic transsphenoidal surgery (21).

2.2.5. Optical coherence tomography.

Optical coherence tomography (OCT) is a new imaging technology that uses low-coherence interferometry to capture two-dimensional (2D) micrometer-resolution images of optical scattering from internal tissue microstructures (22, 23). It provides a depth-resolved, noninvasive, nondestructive imaging modality similar to US imaging (24). OCT is commonly used for diagnosis and surgery of eye disease (25–27). 2D OCT images can be reconstructed to provide three-dimensional (3D) visualization (Figure 4). OCT is currently being applied to various clinical fields to examine tubular anatomical structures using different transluminal tools such as endoscopes, needles, and other imaging probes (28). In addition, OCT is being extended to noninvasive depth-resolved functional imaging that offers spectroscopic, polarization-sensitive, blood flow, and physiological tissue information. These extensions have the potential to improve image contrast while simultaneously enhancing pathologies by using localized metabolic properties or physiological states (28). A thorough survey of OCT and its medical applications is available elsewhere (29).

Figure 4

2.2.6. Single-photon emission computed tomography.

Single-photon emission computed tomography (SPECT) is a functional imaging technique that uses γ cameras or probes to detect γ-rays emitted by an injected radioactive substance or tracer to acquire multiple 2D projections from various angles (30). On the basis of tomographic reconstruction algorithms, the multiple projections are reconstructed into 3D images (31). SPECT provides functional information similar to that obtained from PET about blood flow to tissues and metabolism, but it enables real-time in vivo imaging of several γ radioactive compounds in the body during intervention.

SPECT imaging is used in many clinical situations (32–34), particularly for coronary disease (35–37). More recently, SPECT has been employed to create a commercialized image-guided system, declipseSPECT (SurgicEye GmbH, Germany), that provides 3D breast imaging, navigation, and control of complete resection. During laparoscopy, the declipseSPECT system uses freehand SPECT technology to generate 3D images of radioactively marked structures and provides surgeons with an intraoperative 3D imaging system for precise, minimally invasive sentinel lymph node biopsy.

2.3. External Sensing

External sensing refers to the use of external devices that usually do not form part of an endoscope to track surgical instruments. On the basis of real-time sensing or tracking information, the six-degrees-of-freedom (6DoF) position and orientation of surgical tools used in intervention can be associated with preoperative computational anatomical models. Currently, various external sensing techniques (see Section 3.2.2) are available for surgical navigation.

3. METHODOLOGY

AEN is generally recognized as an innovative concept, a measurement toolbox, and an information container that provides surgeons with the appropriate information at the right place and the right time during interventional endoscopy. Undoubtedly, surgical big data play a vital role in exploring a variety of AEN systems. Whereas preoperative image processing is employed to create computational models of patient anatomy, intraoperative data analysis provides surgeons with direct visualization of the surgical field. The general principles of various AEN systems are computational anatomy, surgical navigation, intuitive visualization, and interactive software (Figure 5). These principles are discussed in the following subsections.

Figure 5

3.1. Computational Anatomy

Computational anatomy is a relatively new discipline that uses various imaging modalities, particularly preoperative imaging, to comprehensively describe human anatomy in a digital format (38) and create precise virtual models of anatomical structures and organs. The development of accurate virtualized models is paramount for surgical navigation systems, since such models provide maps and target localization during surgery. Generally, computational anatomy associated with various modalities and algorithms aims to answer the question of where the anatomical targets are prior to intervention.

Conventionally, surgeons form mental images of the organ and anatomical structures of areas of concern from a 3D volume or 2D images prior to surgery, and they are trained to interpret these images in relation to the 3D surgical field during the procedure. Segmentation and registration are indispensable computational anatomy techniques to process and analyze medical images and reduce the surgeon's cognitive load. Segmentation detects and extracts target boundaries or regions of interest within 2D slices or 3D volumes in multiple modalities and is commonly classified into manual, interactive, and automatic approaches. During the last two decades, many segmentation algorithms have been developed (39, 40). Registration spatially aligns reference and target images from either the same modality or different modalities so that relevant information in each modality can be optimally integrated or compared (41). A recent retrospective view of medical image registration over the past two decades is provided by Viergever et al. (42).
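As an illustration of the automatic end of this spectrum, the sketch below segments a bright region in a synthetic 2D slice by thresholding and then keeping the largest 4-connected component. The image, threshold, and connectivity choice are all hypothetical stand-ins for the far more sophisticated algorithms cited above.

```python
import numpy as np
from collections import deque

def segment_threshold(img, thresh):
    # Minimal automatic segmentation: threshold the slice, then keep the
    # largest 4-connected component (breadth-first flood fill).
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp = []
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best.sum():          # keep only the largest region
                    best = np.zeros_like(mask, dtype=bool)
                    for y, x in comp:
                        best[y, x] = True
    return best

# Synthetic slice: a bright "lesion" blob plus an isolated noise pixel.
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0      # 100-pixel target region
img[2, 2] = 1.0              # isolated speck that should be discarded
mask = segment_threshold(img, 0.5)
```

Real segmentation pipelines must of course cope with noise, partial volume effects, and anatomical variability, which is precisely why the cited literature is so extensive.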

While computational anatomy supports diagnosis, preoperative planning (Figure 6), and surgical simulation (Figure 7), it remains challenging to develop advanced methodologies for precise and robust retrieval of anatomical structure information from different types of medical images, which is the virtual equivalent of dissecting a real human body. This challenge arises mainly from clinical variations pertaining to patient differences, partial volume effects, and various levels of imaging resolution. Recently, Schork (43) reported a new concept of personalized (or precision) medicine. Moreover, machine intelligence and learning technologies could provide a powerful tool to address the challenges in computational anatomy (44, 45). Additionally, many new medical image computing approaches are published every year in two flagship journals, Medical Image Analysis and IEEE Transactions on Medical Imaging.

Figure 6
Figure 7

3.2. Surgical Navigation

In image-guided endoscopic surgery, the guidance of an instrument toward a desired target is typically defined as surgical navigation. In this respect, surgical navigation or active surgical guidance is the most important element of AEN systems. Surgical navigation can be described as a combination of computational anatomy, tracking algorithms or devices, image data confluence, and specialized instruments to assist and guide surgeons during intervention. It provides accurate real-time positioning of in vivo anatomical structures and organs as well as surgical instruments overlaid on preoperative images in the operating room. The general principles of surgical navigation methods are discussed below.

3.2.1. Vision-based tracking.

Vision-based tracking is a navigation method used to register 2D endoscopic video images to preoperative 3D data in real time. The 3D data are usually rendered as 2D virtual images that correspond to various 6DoF endoscopic camera poses, including position and orientation parameters, in the coordinate system of the 3D preoperative computational anatomy (Figure 8). We also refer to this tracking method as video–volume (2D–3D) registration, which can be formulated as an optimization problem:

$${}^{A}\mathbf{M}_{i}^{C} = \operatorname*{arg\,max}_{{}^{A}\mathbf{M}_{j}^{C}} \; \mathcal{S}\big(I_{i},\, I_{j}({}^{A}\mathbf{M}_{j}^{C})\big), \tag{1}$$

where ${}^{A}\mathbf{M}_{i}^{C}$ is the optimal camera pose predicted at the $i$th endoscopic image $I_{i}$ and denotes the transformation from endoscopic camera $C$ to computational anatomy $A$, $\mathcal{S}$ is the similarity function, and $I_{j}(\cdot)$ is a 2D virtual rendering procedure that corresponds to camera pose ${}^{A}\mathbf{M}_{j}^{C}$ at the $j$th iteration of the optimization. Alternatively, $\mathcal{S}$ can be defined as a dissimilarity function, in which case Equation 1 becomes a minimization procedure for estimating ${}^{A}\mathbf{M}_{i}^{C}$.
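A minimal sketch of this pose optimization follows, with the similarity function chosen as normalized cross-correlation and the 6DoF pose collapsed to a single hypothetical shift parameter so that the rendering procedure and search space stay self-contained. Real systems render virtual endoscopic views from the CT volume and search over all six pose parameters.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation, a common choice for the similarity S.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def render(pose):
    # Hypothetical stand-in for the virtual rendering I_j(.): here a "pose"
    # is reduced to a 1-DoF circular shift of a synthetic intensity profile.
    base = np.sin(np.linspace(0.0, 8.0 * np.pi, 128))
    return np.roll(base, pose)

def track_frame(frame, candidate_poses):
    # Exhaustive version of the optimization: the tracked pose is the
    # candidate whose rendering is most similar to the real frame.
    scores = [ncc(frame, render(p)) for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]

observed = render(17)                      # simulated endoscopic frame
best = track_frame(observed, list(range(32)))
```

In practice the search is performed with gradient-free or coarse-to-fine optimizers rather than exhaustively, and the similarity measures are considerably more robust to illumination and deformation.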

Figure 8

Many authors have discussed vision-based tracking. Deguchi et al. (46) proposed a selective image similarity measure to register endoscopic video sequences and preoperative volumes, which was improved by Luo et al. (47), while Merritt et al. (48) reported an interactive CT–video registration technique to continuously guide bronchoscopic intervention. Mirota et al. (14) used high-accuracy 3D image–based registration to align endoscopic video and CBCT images, and Luo & Mori (49) developed a video–volume registration method that uses a discriminative structural similarity measure to track endoscope motion. Shen et al. (50) explored a depth reconstruction method to achieve a similar goal, and Zhang et al. (51) employed a 3D graph–based optimization method for simultaneous registration of position and orientation during intravascular US intervention.

3.2.2. External tracking.

External tracking refers to the use of devices and systems to localize surgical instruments in real time during endoscopic surgery. This tracking technology typically uses external position sensors, such as electromagnetic (EM) sensors attached to the surgical tools to measure their movement, to determine ${}^{A}\mathbf{M}_{i}^{C}$ as follows:

$${}^{A}\mathbf{M}_{i}^{C} = {}^{A}\mathbf{M}^{E} \; {}^{E}\mathbf{M}_{i}^{S} \; {}^{S}\mathbf{M}^{C}, \tag{2}$$

where ${}^{A}\mathbf{M}^{E}$, ${}^{E}\mathbf{M}_{i}^{S}$, and ${}^{S}\mathbf{M}^{C}$ are transformation matrices describing the spatial relationships between computational anatomy $A$, external tracking system $E$, external sensor $S$, and endoscopic camera $C$, and are calculated by the methods discussed in the following subsections.
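With all three matrices expressed as 4×4 homogeneous transforms, Equation 2 is a simple matrix product. The sketch below chains hypothetical example values for the three transforms to place the endoscopic camera in the anatomy frame.

```python
import numpy as np

def make_transform(R, t):
    # Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,).
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Hypothetical readings: tracker-to-anatomy (from initial registration),
# sensor-to-tracker (live EM measurement), camera-to-sensor (hand-eye calibration).
A_M_E = make_transform(np.eye(3), [5.0, 0.0, 0.0])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg about z
E_Mi_S = make_transform(Rz, [0.0, 2.0, 0.0])
S_M_C = make_transform(np.eye(3), [0.0, 0.0, 1.0])

# Equation 2: chain the calibrated transforms to express the camera pose
# in the coordinate system of the computational anatomy.
A_Mi_C = A_M_E @ E_Mi_S @ S_M_C

camera_origin_in_anatomy = A_Mi_C[:3, 3]
```

Only the middle matrix changes per frame (the live sensor reading); the outer two are fixed by the calibration and registration steps described next.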

3.2.2.1. Calibration.

AEN systems generally employ a combination of endoscopes, EUS probes, and external trackers with position sensors. These integrated devices provide different sensing information in different coordinate systems. In order to relate this information to a common reference frame, calibration must be performed prior to navigation.

The camera calibration process employs standard algorithms and images of a special pattern (e.g., a chessboard or square grid) to estimate the camera's intrinsic parameters (focal length, skew, distortion, and image center) (52). On the basis of these parameters, camera distortion in endoscopic video images can be corrected. This is particularly important for vision-based tracking methods because it improves the tracking accuracy during navigation. Zhang (53) proposed a flexible new technique for camera calibration that is now used in many computer vision tasks. Hartley & Kang (54) simultaneously calibrated a camera's radial distortion function along with the other internal calibration parameters. The advantage of this method is that it determines radial distortion in a parameter-free manner without the need for any particular model. More recently, a unified model has been reported to calibrate a wide variety of camera types, such as pinhole, fisheye, catadioptric, and multicamera networks (55).
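As a small illustration of the distortion-correction step, the sketch below applies a two-coefficient radial (Brown) distortion model to normalized image coordinates and inverts it by fixed-point iteration. The coefficients are hypothetical; practical calibration toolchains estimate them together with the other intrinsics from pattern images.

```python
import numpy as np

# Brown radial distortion model with illustrative (hypothetical) coefficients.
K1, K2 = -0.20, 0.05

def distort(x, y):
    # Normalized image coordinates -> radially distorted coordinates.
    r2 = x * x + y * y
    f = 1.0 + K1 * r2 + K2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, iters=20):
    # Fixed-point iteration to invert the radial model numerically,
    # as done when correcting endoscopic frames after calibration.
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + K1 * r2 + K2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

xd, yd = distort(0.3, -0.2)
xu, yu = undistort(xd, yd)
```

Endoscopes tend to have strong barrel distortion from their wide-angle optics, so this correction matters more here than for most cameras.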

Hand–eye calibration (HEC) aims to determine the relationship ${}^{S}\mathbf{M}^{C}$ between external position sensors (hand) and imaging devices (eye), such as endoscopic cameras and EUS probes. The HEC problem originally arose in the field of robotics, and several classical HEC methods (56–59) are still in routine use. Most HEC approaches employ the internal and external parameters obtained from camera calibration using specific patterns. More recently, camera calibration–free approaches have been invoked to solve the HEC problem (60, 61).
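The rotational part of the HEC equation AX = XB can be sketched in the spirit of the classical axis-alignment solutions: conjugation by the unknown rotation rotates rotation axes, so the axis (log) vectors of paired relative motions are fit with an SVD-based orthogonal Procrustes step. The synthetic motions and the ground-truth rotation below are hypothetical, and the translation part is omitted for brevity.

```python
import numpy as np

def rodrigues(v):
    # Axis-angle vector -> rotation matrix (Rodrigues' formula).
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3)
    k = v / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def log_rot(R):
    # Rotation matrix -> axis-angle vector (assumes 0 < theta < pi).
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return th * w / (2 * np.sin(th))

def hand_eye_rotation(As, Bs):
    # R_A = R_X R_B R_X^T implies log(R_A) = R_X log(R_B); fit R_X to the
    # paired axis vectors with the SVD (Kabsch/Procrustes) step.
    M = sum(np.outer(log_rot(A), log_rot(B)) for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # enforce a proper rotation
    return U @ D @ Vt

# Synthetic ground truth (hypothetical): relative sensor/camera motions.
R_X = rodrigues(np.array([0.3, -0.5, 0.8]))
Bs = [rodrigues(np.array(v)) for v in ([1.0, 0.2, 0.1], [0.1, 1.1, -0.3], [-0.4, 0.2, 0.9])]
As = [R_X @ B @ R_X.T for B in Bs]   # rotation part of A_i X = X B_i

R_est = hand_eye_rotation(As, Bs)
```

At least two motion pairs with non-parallel rotation axes are needed for the fit to be well posed; with noisy measurements, the translation is then recovered from a stacked linear least-squares system.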

3.2.2.2. Initial registration.

Initial registration, also referred to as tracker-to-model registration, is the process of determining the spatial transformation AME in Equation 2 between the computational anatomy and the external tracking system prior to real-time navigation.

Marker-based and marker-free registration are the two strategies commonly used to compute the optimal solution AME. Marker-based registration employs either artificial fiducial markers placed on the body or natural/anatomical fiducial markers available within the body or on its surface. Wognum et al. (62) validated a deformable image registration algorithm with 30–40 fiducial markers for pelvic cancer surgery, and Hughes-Hallett et al. (63) reviewed a fiducial-based registration method for guided partial nephrectomy. Inoue et al. (64) improved the accuracy of the point-based rigid-body registration algorithm with implanted fiducial markers for breast intervention, while Tabrizi & Mahvash (65) used five fiducial markers to perform initial registration during image-guided neurosurgery. Marker-free registration does not require any artificial or natural fiducial markers but instead employs the constraints of typical anatomical structures to estimate AME. Klein et al. (66) proposed a fiducial marker–free method by maximizing the percentage of external sensor measurements inside the preoperative volume to predict AME. Deguchi et al. (67) explored a marker-free framework that minimizes the distance between external sensor outputs and the center line of the organ, and an initial registration strategy to estimate the relationship AME without any fiducial markers was reported by Hofstad et al. (68). Luo (69), Luo & Mori (70), and Luo et al. (71) developed several marker-free registration methods to calculate AME.
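A common building block of marker-based registration is point-based rigid registration of corresponding fiducials via the standard SVD (Kabsch/Arun) method. The sketch below, with hypothetical fiducial coordinates, recovers a known tracker-to-anatomy transform; clinical pipelines add noise handling and report error metrics such as fiducial registration error.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    computed by the standard SVD (Kabsch/Arun) method."""
    src_c = src - src.mean(axis=0)        # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical fiducial positions in the anatomy (e.g., CT) frame
fiducials_ct = np.array([[0.0, 0.0, 0.0],
                         [10.0, 0.0, 0.0],
                         [0.0, 10.0, 0.0],
                         [0.0, 0.0, 10.0]])
# The same fiducials as measured by the external tracker (noiseless here)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
fiducials_tracker = fiducials_ct @ R_true.T + t_true

R_est, t_est = rigid_register(fiducials_ct, fiducials_tracker)
fre = np.linalg.norm(fiducials_ct @ R_est.T + t_est - fiducials_tracker)
```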

Optical tracking uses an external position sensor to perceive IR-emitting or retro-reflective markers affixed to a surgical tool or object. The position sensor determines the tool's position and orientation in accordance with the information that the sensor receives from such markers, which are generally classified into active and passive categories. An example of a typical tracking system is the Polaris device (Northern Digital Inc., Canada); however, this device can be used only with rigid endoscopes, where the markers are fixed at the end distal to the camera.

EM tracking systems use embedded sensor coils to determine the location of objects. Each system consists of three main components: a control unit, EM sensors, and an EM transmitter that establishes a tracking volume. When the object is located inside the tracking volume, currents are induced in the coils. These currents are used to compute the position and orientation of the object in real time.

Existing EM tracking systems include the 3D Guidance product suite (Ascension Technology Corporation, United States) and Aurora (Northern Digital Inc., Canada). In contrast to the Polaris device, these systems impose no line-of-sight constraint between the sensor (tool) and the field generator and can be embedded in nonrigid endoscopes. Today's EM trackers are widely used in various surgical navigation systems. Although optical and EM tracking are routinely employed in minimally invasive surgical procedures, a number of alternative trackers are available as well, as discussed below.

Stereoscopic vision is employed to perceive depth information and 3D structure derived from video information from two or more video cameras. It is a powerful technique to estimate the position and orientation of objects within a visual scene. MicronTracker (ClaroNav, Canada) uses stereoscopic vision to create a new generation of trackers that can detect and track specially marked objects. This external tracking system employs visible light and computer vision to detect fully passive marked targets and track them by processing standard video images. ClaroNav reported that MicronTracker can be employed in various surgical procedures, including image-guided intervention, ablation, and biopsy; can be operated manually or using robotics; and can assist augmented-reality (AR) procedures by providing direct visualization (72).

Inertial tracking uses a miniature microelectromechanical triaxial inertial sensor attached at the endoscope's tip to measure the impact of gravity on each of the three orthogonal accelerometer axes (73). Similar to EM sensors, inertial sensors are very small and can be unobtrusively used for endoscopic image reorientation. However, inertial tracking can measure only relative changes in pose, rather than determining absolute values.

An optical position sensor is a microscopic image acquisition device that uses an array of photodiodes to convert light into an electrical current and perceive the position of a light spot. The sensor, including an optical lens, light source, and digital signal processor, can measure motion relative to an object's surface. Such a system acquires sequential surface images that are processed to determine 2D displacements of the surface.

On the basis of optical position sensor techniques, a new external tracking prototype can be created to track the endoscope's movement. This tracking prototype has been demonstrated to be an effective strategy to navigate flexible endoscopes (74).

Radio-frequency identification (RFID) has recently been developed to localize RFID-tagged objects with millimeter accuracy using phase difference (75, 76). This technology shows great promise for clinical applications because its RFID tags are wireless and extremely small and offer inherently powerful identification capability.

The Calypso system (Varian Medical Systems, United States) is a wireless four-dimensional localization system that uses a set of three transponders generating radio-frequency waves. The system has provided surgeons with accurate alignment to a target prostate in real time, with an error of 2.0 mm or better, and assists in avoiding unnecessary radiation to healthy tissues (e.g., the bladder) during prostate radiotherapy (77, 78).

With advances in optics, optical fibers can be used to quasi-continuously detect strain and temperature along the fibers’ direction. Commercially available optical backscatter reflectometers (Luna Innovations, Roanoke, Virginia) enable inspection of seven million data points along a 70-m-long fiber, corresponding to a spatial resolution of 10 μm. The ShapeTape device (Measurand, Canada) has been commercialized to track various surgical tools. Koizumi et al. (79) proposed ShapeTape-driven tracking for 3D US systems. Li et al. (80) developed and evaluated a new body–seat interface shape measurement system based on fiber-optic tracking, and Housden et al. (81) used ShapeTape to track a 2D US probe for freehand 3D US.

3.2.3. Hybrid tracking.

Hybrid tracking combines vision-based and external tracking techniques to navigate surgical instruments. It aims to tackle the disadvantages of using either vision-based methods or external tracking alone; for example, the initial registration discussed above is a rigid procedure that can lead to inaccurate navigation caused by either tissue deformation or the inherent drawbacks of tracking devices. Hybrid tracking also refers to multimodal information fusion of preoperative data (e.g., CT or MR volume), intraoperative videos (endoscopic or US images), and external tracking measurements. Accurate and real-time fusion strategies are the key to developing hybrid navigation systems.

Numerous hybrid tracking approaches have been discussed in the literature. Feuerstein et al. (82) proposed magneto-optical tracking of flexible laparoscopic US, and Soper et al. (83) reported hybrid tracking based on Kalman filtering for bronchoscopic navigation. Hybrid localization methods have been widely discussed for robotic endoscopic capsules (84, 85). Reichl et al. (86) explored a hybrid endoscope tracking method with guaranteed smooth output, while Luo and colleagues (87–91) developed several hybrid navigation methods by using stochastic filtering and evolutionary computation algorithms.
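As a toy stand-in for the stochastic filtering used in hybrid tracking (not the cited formulations), the following scalar Kalman filter fuses a stream of noisy tracker readings of a stationary scope position; all numbers are illustrative.

```python
import numpy as np

# Textbook scalar Kalman filter fusing noisy external-tracker readings of a
# (here, stationary) scope position. Real hybrid trackers filter full 6-DOF
# poses and fuse multiple modalities; every number here is made up.
rng = np.random.default_rng(0)
true_pos = 4.0
meas_var = 0.5 ** 2                           # measurement noise variance
z = true_pos + 0.5 * rng.standard_normal(200) # simulated sensor stream

x, P = 0.0, 1e3                               # initial estimate and variance
for zk in z:
    # Predict: static motion model, so the state carries over
    P += 1e-4                                 # small process noise
    # Update: blend prediction and measurement via the Kalman gain
    K = P / (P + meas_var)
    x += K * (zk - x)
    P *= (1.0 - K)

error = abs(x - true_pos)
```

The gain K automatically weights each new measurement against the accumulated estimate, which is the essential mechanism behind fusing external tracking with vision-based observations.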

3.3. Intuitive Visualization

All preoperative and intraoperative data must be appropriately presented to surgical personnel in the operating room. AEN requires a display system that provides surgeons with critical structural and functional information in real time while minimally interrupting the surgical workflow. 3D volumetric data must be presented to the surgeon in an intuitive manner so as to provide important and understandable information for evaluating anatomical structure and function.

A major advantage of navigation is simultaneous visualization of tracked surgical instruments in relation to multimodal data. Volumetric data can be visualized in several ways. Slice-based methods usually present orthogonal slices, allowing the surgeon to view patient 3D data in the axial, sagittal, and coronal directions (Figure 9). Multiplanar reconstruction is another way to inspect volumetric data with more than one slice orientation. Volume rendering is a visualization technique that uses a transfer function to assign each voxel an opacity or color that relates to the voxel's intensity in volumetric data (92). While the human visual system typically differentiates structures of interest in volumetric data from the surrounding image data by a boundary or a material interface, surface rendering is an intuitive way to separate structures of interest and simultaneously render the surface opaque and make other tissues transparent (93). Surface rendering requires previously segmented 3D data that are further processed in an intermediate step to generate anatomical 3D models, for example, with the marching cubes algorithm (94). In this respect, volume-based methods are more amenable to automatic algorithms than surface rendering. Moreland (92) recently published a thorough survey that discusses the range of current visualization pipelines.
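A minimal sketch of the transfer-function idea behind volume rendering: map each sample's intensity to an opacity, then composite front to back along one ray. The ramp thresholds and intensities below are made up.

```python
import numpy as np

def transfer_function(intensity, lo=100.0, hi=200.0):
    """Map a voxel intensity to an opacity in [0, 1] via a linear ramp."""
    return np.clip((intensity - lo) / (hi - lo), 0.0, 1.0)

def composite_ray(samples):
    """Front-to-back alpha compositing of gray brightness along one ray."""
    color, remaining = 0.0, 1.0          # accumulated brightness, transmittance
    for s in samples:
        a = transfer_function(s)
        color += remaining * a * (s / 255.0)  # use intensity as gray level
        remaining *= (1.0 - a)
        if remaining < 1e-3:             # early ray termination
            break
    return color

# One ray of intensity samples through a hypothetical volume
ray = np.array([50.0, 120.0, 180.0, 250.0])
pixel = composite_ray(ray)
```

Changing the ramp (or, in practice, the full opacity/color transfer function) is what lets the same CT or MR volume render bone, vessels, or soft tissue selectively.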

Figure 9 

AR was initially developed to solve the problem of how to integrate 3D virtual objects into a 3D real environment in real time. Various applications in medicine, manufacturing, visualization, path planning, and military operations have been described by Azuma et al. (95). Mixed reality refers to a combination of real and virtual images, as defined by Milgram & Kishino's (96) taxonomy. AR techniques are widely used to enhance intraoperative vision during various surgical interventions (63, 97–100).

3.4. Interactive Software

In order to ensure optimal interaction between surgeons and AEN systems, it is important to combine pre- and intraoperative data during the intervention so that they can be visualized in an intuitive manner. Moreover, reliable software, which simultaneously implements and updates various procedures relating to anatomical computation, surgical navigation, and intuitive visualization in real time, is a critical component of the AEN system (101).

Most commercial navigation software systems are proprietary, but since their development requires intensive effort, several open-source guidance software platforms and public libraries are available for academic use and surgical navigation development. 3D Slicer (https://www.slicer.org/) is a well-known software platform for medical image informatics, image processing, and 3D visualization that runs on various operating systems, including Windows, macOS, and Linux. The Image-Guided Surgery Toolkit (IGSTK) platform (http://www.igstk.org/) is a component-based framework that provides common functionality for image-guided surgery applications. The Medical Imaging Interaction Toolkit (MITK) (http://mitk.org/wiki/MITK) is a free platform for development of interactive medical image processing software. Note that 3D Slicer, IGSTK, and MITK all use the open-source libraries of the Visualization Toolkit (VTK) and Insight Segmentation and Registration Toolkit (ITK). ITK (https://itk.org/) is a cross-platform system that provides developers with an extensive suite of software tools for image analysis, and VTK (http://www.vtk.org/) is a freely available library for 3D computer graphics, image processing, and visualization.

4. CLINICAL APPLICATIONS

Endoscopic intervention is broadly employed to inspect and operate on organs, airways, and vessels of the body to diagnose and treat various diseases in a minimally invasive manner. Originally, endoscopy was used only in the gastrointestinal (GI) tract, including the esophagus, stomach, and colon, but today it is widely used in the head and neck, throat, lung, abdomen, urinary tract, joints, and other areas. Clinical endoscopic applications (Table 1) are discussed in the following subsections.

Table 1

Endoscopic navigation systems in clinical procedures

4.1. Neurosurgical Endoscopy

Neurosurgical endoscopic navigation usually uses optical tracking, MR data, and rigid endoscopes to provide surgeons with real-time online guidance and enhance the accuracy and safety during brain tumor resection (102). While neuroendoscopy is not a predominant method for neurosurgery, it can be employed when a target is close to a region accessible from a natural orifice. Registration between endoscopic video and C-arm CBCT is utilized to guide endoscopic skull base surgery (14). Recently, Torres-Corzo et al. (103) employed an electromagnetically navigated flexible neuroendoscope to explore the ventricles and basal cisterns of a patient with hydrocephalus.

4.2. Respiratory Endoscopy

Respiratory tract diseases can be diagnosed and treated by various endoscopic procedures. Scopis hybrid navigation (Scopis GmbH, Germany) is a surgical navigation system with AR capabilities for endoscopic sinus surgery (104). More recently, registration and fusion quantification were discussed in relation to AR-based nasal endoscopic surgery (105), and image-guided laryngoscopy was introduced as a possible alternative to conventional laryngectomy surgery (106). High-speed laryngoscopic recordings have been used to create 3D reconstructions of human laryngeal dynamics (107).

Bronchoscopy is routinely performed for lung cancer diagnosis, staging, and treatment, and various bronchoscopic navigation systems (Figure 10) have been described in the literature (47–50, 74, 83, 91). Thoracoscopy or video-assisted thoracic surgery is usually employed for pleural biopsy and pulmonary lesions in the lung and the heart (108, 109).

Figure 10 

4.3. Gastrointestinal Endoscopy

The GI tract consists of various organs, including the esophagus, stomach, small intestine, large intestine (colon), and rectum. These organs can be inspected by different endoscopes. Esophagoscopy is a transnasal procedure performed under sedation or general anesthesia (110), whereas transorally introduced gastroscopy is an effective procedure to determine the resection margins of gastric cancer (111). Virtual gastroscopy, examination of a 3D upper GI tract reconstruction from CT scans, is used to evaluate malignancies of the stomach (112).

Colonoscopy is employed to detect and treat polyps, tumors, bleeding, or inflamed regions in the lower GI tract. However, EM-CT or colonoscopic video–CT registration and colonoscope tracking remain challenging for the advancement of colonoscopic navigation because of potential large deformations that can occur during intervention (113, 114).

4.4. Abdominal Laparoscopy

Laparoscopy is usually performed in the abdomen or pelvis, and image-guided laparoscopic procedures are frequently employed for liver surgery (115, 116). Laparoscopic prostatectomy is widely used for prostate cancer surgery (117, 118); this robot-assisted procedure is increasingly being performed for prostate tumor resection (119). Navigated laparoscopic gastrectomy is employed to treat gastric cancer (120).

4.5. Urinary Endoscopy

The urinary tract includes the urethra, kidney, ureters, and bladder. Cystoscopy plays an important role in predicting the grade and stage of bladder cancer (121, 122). Percutaneous nephroscopy is a routine surgical procedure that treats large or complex renal stones (123). Laparoscopic partial nephrectomy is an effective means of removing small renal tumors and simultaneously preserving the remainder of the kidney. An AR nephrectomy navigation system has been developed using EM tracking (124).

4.6. Joint Arthroscopy

Arthroscopy is a minimally invasive surgical procedure that inspects, diagnoses, and treats diseases inside joints such as hips and knees by use of an arthroscope. Computer-assisted arthroscopic navigation systems have been explored for hip and knee surgery (125, 126).

4.7. Others

Other surgical procedures also use endoscopes for various interventions. Cardiac surgery may use a 3D high-definition endoscopic system with augmented visualization (127), and endoscopic carpal tunnel release has been employed in hand surgery (128). Spinal surgery may use an epiduroscope to identify abnormalities in the epidural space, establish diagnosis, and administer treatment (129).

5. ENDOSCOPIC ADVANCES

Endoscopy is undergoing an evolution, and major improvements are being introduced as new technologies emerge. The concept of navigation is revolutionizing the development of modern endoscopy. In addition, new endoscopic devices, imaging, video processing, and interdisciplinary techniques are enhancing endoscopic interventions that have the potential to influence diagnosis, treatment, and clinical outcomes.

5.1. Wireless Capsule Endoscopy

The wireless capsule endoscope is a relatively new surgical device for examining diseases of the GI tract, particularly the stomach and colon; it is fundamentally changing conventional GI endoscopy, which must be performed manually. Current research in this area focuses on automatic detection, classification, and accurate localization of structures within the endoscopic field of view, as well as endoscopic video stabilization (130–132).

5.2. Robotic Endoscopy

An emerging technology trend in endoscopy is robotization, which aims to manipulate endoscopes and other surgical instruments accurately and remotely toward targets and their surroundings. Robot-assisted laparoscopic procedures are finding increasingly broad use in abdominal procedures with the da Vinci surgical system (133, 134). Natural orifice transluminal endoscopic surgery (NOTES) is another new paradigm for laparoscopic procedures (135).

In contrast to the da Vinci robot and NOTES systems, which typically use rigid laparoscopes, the robotization of other endoscopic procedures (e.g., colonoscopy) that use flexible endoscopes presents a more challenging problem. In this respect, the combination of endoscopic navigation and robotized endoscope is a promising research direction. van der Stap et al. (136) proposed an image-based navigation strategy to robotize a flexible endoscope, and a framework comprising robotic steering and lumen centralization has been reported to automate the colonoscope (137). Shape-sensing techniques for continuum robots have been reviewed for endoscopic procedures (138).

5.3. New Endoscopic Imaging

Advanced imaging devices and technologies have the potential to greatly enhance endoscopic images. Newly available imaging devices and modalities that can be used in endoscopic interventions are as follows (139).

1. High-resolution, high-magnification endoscopes provide surgeons with an image quality that is a significant improvement over that offered by standard video endoscopes.

2. Digital chromoendoscopy uses narrow-band imaging to illuminate and highlight surface vascular structures that are characterized by distinct light absorption properties of hemoglobin and mucosa.

3. Autofluorescence imaging aims to selectively visualize and diagnose neoplastic lesions by using specific light to interact with the fluorophore components of suspicious tissue.

5.3.1. Endomicroscopy.

Optical biopsy is a relatively new surgical technology that provides surgeons with online tissue histological analysis by using the properties of light during endoscopy. The endomicroscope is a new device for optical biopsy. Confocal laser endomicroscopy (CLE) was initially proposed for diagnosis of GI disease, and a flexible version of this instrument, probe-based CLE (pCLE), is used in endoscopy. Volumetric laser endomicroscopy has demonstrated an improved diagnostic performance over that of pCLE (140).

5.3.2. Endocytoscopy.

Endocytoscopy is another optical biopsy technique that uses a high-power, fixed-focus objective lens to achieve ultrahigh magnification of GI and respiratory tract mucosa at the cellular level. However, the precise role of this technique in the GI and respiratory tracts has yet to be determined.

5.3.3. Near-IR fluorescence.

Near-IR (NIR) fluorescence, classified as a molecular imaging modality, is an intraoperative imaging technique that employs NIR fluorescent light to identify suspicious targets and their margins during a surgical procedure. Since normal tissue and benign and malignant tumors have different concentrations of hemoglobin and water, as well as different levels of oxygen and ultrastructural scattering, NIR fluorescence provides a novel way to quantify blood and water concentrations and evaluate structural and functional information in tissue at the surgical site.

Intraoperative NIR fluorescence imaging is a fast-developing modality that enhances contrast and depth of tissue penetration relative to visible light and offers real-time visual information during surgery (141). An open issue involves the development of algorithms to more accurately reconstruct and display NIR fluorescence images and integrate them with preoperative CT or MR images and endoscopic videos. Additionally, fluorescence- or firefly-guided endoscopic intervention (Figure 11) is a promising development in cancer diagnosis and treatment.

Figure 11 

5.4. Endoscopic Video Analysis

Video processing may be applied to augment endoscopic visualization of the surgical field and enhance image quality. Endoscope video images suffer from several drawbacks, including limited illumination and field of view, surface information that is not apparent to the naked eye, and surgical smoke. In order to address these issues, various video processing algorithms need to be developed in the following areas:

▪ illumination uniformity equalization to improve endoscopic field lighting conditions,

▪ motion magnification to reveal hidden surface information (e.g., neurovascular bundles) that is difficult to perceive visually during surgery (142),

▪ video defogging to remove surgical smoke and improve image visual quality (143), and

▪ 3D reconstruction to generate additional views of the endoscopic field.
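As a toy example of the first item, illumination equalization, the sketch below applies classical global histogram equalization to a synthetic dim frame; real endoscopic pipelines are considerably more sophisticated (local, color-aware, and temporally stable).

```python
import numpy as np

def equalize(frame):
    """Histogram-equalize an 8-bit grayscale image (numpy uint8 array)."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()                          # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                    # first occupied bin
    lut = np.clip(np.round((cdf - cdf_min) /
                           (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[frame]           # apply lookup table

# A dim, low-contrast synthetic frame (values squeezed into 40..79)
rng = np.random.default_rng(1)
dark = rng.integers(40, 80, size=(64, 64)).astype(np.uint8)
bright = equalize(dark)
```

After equalization the output spans the full 0–255 range, spreading the compressed intensities of the poorly lit frame across the display range.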

5.5. 3D Printing Technology

3D printing is becoming increasingly important in medicine, especially in surgery (144), providing methodology that uses medical 3D data to generate 3D physical models. On the basis of these models, surgical training, planning, and simulation can be performed more accurately and effectively. Furthermore, surgeons can use these models to intuitively guide procedures in the operating room (Figure 12). However, 3D printing implementation is time-consuming and very expensive, limiting its widespread use. Another issue involves the development of automatic and seamless fusion of 3D printing models and endoscopic interventions to enhance surgical navigation during interventional endoscopy.

Figure 12 

6. CONCLUSIONS

Interventional endoscopy plays a critical role in diagnosing, staging, and treating various diseases in a minimally invasive manner. The concept of endoscopic navigation has revolutionized conventional endoscopic interventions and has provided surgeons with more precise, efficient, and reliable means of diagnosis and treatment. This review has investigated various technical aspects of AEN and has shown that several commercial surgical navigation systems can be used clinically to improve the precision and quality of endoscopic procedures. However, endoscopic navigation is by no means mature, and advances are ongoing. The core elements of computational anatomy, surgical tracking and navigation techniques, intuitive visualization approaches, and interactive software are becoming established while simultaneously evolving for the next generation of navigated endoscopic systems (Figure 13).

Figure 13 

SUMMARY POINTS

1. Navigation is an innovative surgical solution to two major questions in endoscopic interventions—where to go and how to get there. It aims to provide surgeons with the right information at the right place and the right time in the operating room.

2. Surgical data from a variety of sources are increasingly involved in different endoscopic navigation systems. However, multiple modalities representing structural and functional information, as well as inherent patient variations, still present challenges for the development of accurate and robust volumetric segmentation, registration, and fusion algorithms for surgical simulation and planning.

3. Current surgical tracking and navigation techniques comprise three categories—vision-based tracking, external tracking, and hybrid methods—and are the key components of various image-guided surgical procedures, enabling surgeons to precisely track their surgical instruments in relation to the patient's anatomy. These techniques synchronize preoperative and intraoperative images so that an AR surgical environment can be established with direct 3D visualization and real-time surgical tool localization.

4. Endoscopic applications are motivated by the clinical requirement of achieving the desired therapy while minimizing trauma to the patient.

5. Recent major advances in imaging, sensing, robotics, information processing, machine intelligence, 3D printing, and related technical fields have led to the innovation and improvement of endoscopic navigation approaches.

6. AEN is an interdisciplinary field that not only has enabled surgeons to make data-driven decisions in the operating room and solve problems in clinical practice but also has motivated the research community to study preoperative information processing, intraoperative imaging, surgical planning software, and surgical instrument tracking to further enhance this technology.

FUTURE ISSUES

1. Endoscopic navigation systems require accurate computational anatomical models that are created from a range of heterogeneous multimodal data. Machine intelligence and machine learning are likely to be crucial for computing these anatomical models from surgical big data derived from multiple image modalities.

2. Endoscopic vision is usually problematic during minimally invasive surgery. The development of endoscopic video processing algorithms is necessary to augment endoscopic field visualization and improve surgical performance.

3. Machine intelligence techniques such as deep learning are increasingly being used in medical image computing and computer-assisted surgery. Their application to endoscope tracking and navigation has opened an interesting research direction.

4. Fluorescence-guided endoscopy is a promising surgical procedure that provides surgeons with visual and intuitive identification of normal tissues and tumor margins or suspicious regions. Such procedures could greatly improve surgical outcomes and critically reduce surgical time and health care costs.

5. Endoscopic intervention using microscope-augmented navigation is a promising research direction that combines macroscopy (white-light endoscopy) with microscopy (endomicroscopy or endocytoscopy) to provide an enhanced understanding of surface microarchitecture in different types of disease. Endoscopists always pursue the ultimate objective of establishing an immediate endoscopic diagnosis that is consistent with the histological diagnosis.

6. Endoscopic robotization with navigation is the next generation of endoscopy. However, the robotization of flexible endoscopes remains a challenge.

7. In the foreseeable future, endoscopy will evolve into an entirely new technique, referred to as intelligentized endoscopy, endowed with a number of intelligent characteristics, including (a) smart video augmentation and summarization, (b) AR visualization, (c) surgical tracking and navigation, (d) robotic manipulation, and (e) multifunctional theranostics.

DISCLOSURE STATEMENT

T.M.P. holds a research grant from Intuitive Surgical Inc. to study the processing of endoscopic video. The other authors are not aware of any affiliations, memberships, funding, or financial holdings that may be perceived as affecting the objectivity of this review.

ACKNOWLEDGMENTS

X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canada Foundation for Innovation, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.

LITERATURE CITED

  • 1. 
    Marks JM, Dunkin B. 2013. Principles of Flexible Endoscopy for Surgeons. New York: Springer
    • Crossref
    • Google Scholar
    Article Location
  • 2. 
    Nezhat C. 2011. Nezhat's History of Endoscopy: A Historical Analysis of Endoscopy's Ascension Since Antiquity. Dublin: Endo
    • Google Scholar
    Article Location
  • 3. 
    Antoniou SA, Antoniou GA, Koutras C, Antoniou AI. 2012. Endoscopy and laparoscopy: a historical aspect of medical terminology. Surg. Endosc. 26: 3650–54
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 4. 
    Roberts-Thomson IC, Singh R, Teo E, Nguyen NQ, Lidums I. 2010. The future of endoscopy. J. Gastroenterol. Hepatol. 25: 1051–57
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 5. 
    Bhatt J, Jones A, Foley S, Shah Z, Malone P, et al. 2010. Harold Horace Hopkins: a short biography. BJU Int. 106: 1425–28
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 6. 
    Campbell IS, Howell JD, Evans HH. 2016. Visceral vistas: Basil Hirschowitz and the birth of fiberoptic endoscopy. Ann. Intern. Med. 165: 214–18
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 7. 
    Johansen-Berg H, Behrens TE. 2009. Diffusion MRI: From Quantitative Measurement to In-Vivo Neuroanatomy. Amsterdam: Elsevier
    • Google Scholar
    Article Location
  • 8. 
    Wedeen V, Wang R, Schmahmann J, Benner T, Tseng W, et al. 2008. Diffusion spectrum magnetic resonance imaging (DSI) tractography of crossing fibers. NeuroImage 41: 1267–77
  • 9. 
    Golby AJ. 2015. Image-Guided Neurosurgery. Amsterdam: Elsevier
  • 10. 
    Memon A, Weber B, Winterdahl M, Jakobsen S, Meldgaard P, et al. 2011. PET imaging of patients with non–small cell lung cancer employing an EGF receptor targeting drug as tracer. Br. J. Cancer 105: 1850–55
  • 11. 
    Van Cutsem E, Cervantes A, Nordlinger B, Arnold D. 2014. Metastatic colorectal cancer: ESMO clinical practice guidelines for diagnosis, treatment and follow-up. Ann. Oncol. 25: 1–9
  • 12. 
    Shaw CC. 2014. Cone Beam Computed Tomography. Boca Raton, FL: CRC
  • 13. 
    Kapila SD. 2014. Cone Beam Computed Tomography in Orthodontics: Indications, Insights, and Innovations. Oxford, UK: Wiley-Blackwell
  • 14. 
    Mirota DJ, Uneri A, Schafer S, Nithiananthan S, Reh DD, et al. 2013. Evaluation of a system for high-accuracy 3D image–based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery. IEEE Trans. Med. Imaging 32: 1215–26
  • 15. 
    Srinivasan VM, Schafer S, Ghali MGZ, Arthur A, Duckworth EAM. 2016. Cone-beam CT angiography (Dyna CT) for intraoperative localization of cerebral arteriovenous malformations. J. NeuroInterv. Surg. 8: 69–74
  • 16. 
    Errico C, Pierre J, Pezet S, Desailly Y, Lenkei Z, et al. 2015. Ultrafast ultrasound localization microscopy for deep super-resolution vascular imaging. Nature 527: 499–502
  • 17. 
    Davila JS, Momblan D, Gines A, Sanchez-Montes C, Araujo I, et al. 2016. Endoscopic-assisted laparoscopic resection for gastric subepithelial tumors. Surg. Endosc. 30: 199–203
  • 18. 
    Reck M, Heigener D, Mok T, Soria J, Rabe K. 2013. Management of non-small-cell lung cancer: recent developments. Lancet 382: 24–30
  • 19. 
    Sylvester PT, Evans JA, Zipfel GJ, Chole RA, Uppaluri R, et al. 2015. Combined high-field intraoperative magnetic resonance imaging and endoscopy increase extent of resection and progression-free survival for pituitary adenomas. Pituitary 18: 72–85
  • 20. 
    Zaidi HA, De Los Reyes K, Barkhoudarian G, Litvack ZN, Bi WL, et al. 2016. The utility of high-resolution intraoperative MRI in endoscopic transsphenoidal surgery for pituitary macroadenomas: early experience in the advanced multimodality image guided operating suite. Neurosurg. Focus 40: E18
  • 21. 
    Zhang H, Wang F, Zhou T, Wang P, Chen X, et al. 2017. Analysis of 137 patients who underwent endoscopic transsphenoidal pituitary adenoma resection under high-field intraoperative magnetic resonance imaging navigation. World Neurosurg. 104: 802–15
  • 22. 
    Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, et al. 1991. Optical coherence tomography. Science 254: 1178–81
  • 23. 
    Fercher AF. 1996. Optical coherence tomography. J. Biomed. Opt. 1: 157–73
  • 24. 
    Yelbuz TM, Choma MA, Thrane L, Kirby ML, Izatt JA. 2002. Optical coherence tomography: a new high-resolution imaging technology to study cardiac development in chick embryos. Circulation 106: 2771–74
  • 25. 
    Jia Y, Bailey S, Wilson D, Tan O, Klein M, et al. 2014. Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration. Ophthalmology 121: 1435–44
  • 26. 
    Ehlers J, Xu D, Kaiser P, Singh R, Srivastava S. 2014. Intrasurgical dynamics of macular hole surgery: an assessment of surgery-induced ultrastructural alterations with intraoperative optical coherence tomography. Retina 34: 213–21
  • 27. 
    Klein BR, Brown EN, Casden RS. 2016. Preoperative macular spectral-domain optical coherence tomography in patients considering advanced-technology intraocular lenses for cataract surgery. J. Cataract Refract. Surg. 42: 537–41
  • 28. 
    Drexler W, Fujimoto J. 2015. Optical Coherence Tomography: Technology and Applications. Berlin: Springer
  • 29. 
    de Boer JF, Leitgeb R, Wojtkowski M. 2017. Twenty-five years of optical coherence tomography: the paradigm shift in sensitivity and speed provided by Fourier domain OCT. Biomed. Opt. Express 8: 3248–80
  • 30. 
    Bailey DL, Willowson KP. 2013. An evidence-based review of quantitative SPECT imaging and potential clinical applications. J. Nucl. Med. 54: 83–89
  • 31. 
    Bruyant PP. 2002. Analytic and iterative reconstruction algorithms in SPECT. J. Nucl. Med. 43: 1343–58
  • 32. 
    Toney LK, Wanner M, Miyaoka RS, Alessio AM, Wood DE, Vesselle H. 2014. Improved prediction of lobar perfusion contribution using technetium-99m–labeled macroaggregate of albumin single photon emission computed tomography/computed tomography with attenuation correction. J. Thorac. Cardiovasc. Surg. 148: 2345–52
  • 33. 
    Haraldsen A, Bluhme H, Røhl L, Pedersen EM, Jensen AB, et al. 2016. Single photon emission computed tomography (SPECT) and SPECT/low-dose computerized tomography did not increase sensitivity or specificity compared to planar bone scintigraphy for detection of bone metastases in advanced breast cancer. Clin. Physiol. Funct. Imaging 36: 40–46
  • 34. 
    Sadowski SM, Neychev V, Millo C, Shih J, Nilubol N, et al. 2016. Prospective study of 68Ga-DOTATATE positron emission tomography/computed tomography for detecting gastro-entero-pancreatic neuroendocrine tumors and unknown primary sites. J. Clin. Oncol. 34: 588–96
  • 35. 
    Winchester DE, Chauffe RJ, Meral R, Nguyen D, Ryals S, et al. 2015. Clinical utility of inappropriate positron emission tomography myocardial perfusion imaging: test results and cardiovascular events. J. Nucl. Cardiol. 22: 9–15
  • 36. 
    Rochitte CE, George RT, Chen MY, Arbab-Zadeh A, Dewey M, Miller JM. 2014. Computed tomography angiography and perfusion to assess coronary artery stenosis causing perfusion defects by single photon emission computed tomography: the CORE320 study. Eur. Heart J. 35: 1120–30
  • 37. 
    Greenwood JP, Herzog BA, Brown JM, Everett CC, Nixon J, et al. 2016. Prognostic value of cardiovascular magnetic resonance and single-photon emission computed tomography in suspected coronary heart disease: long-term follow-up of a prospective, diagnostic accuracy cohort study. Ann. Intern. Med. 165: 1–9
  • 38. 
    Kobatake H, Masutani Y. 2017. Computational Anatomy Based on Whole Body Imaging. Tokyo: Springer Jpn.
  • 39. 
    Bankman IN. 2008. Handbook of Medical Image Processing and Analysis. Amsterdam: Elsevier
  • 40. 
    Smistad E, Falch TL, Bozorgi M, Elster AC, Lindseth F. 2015. Medical image segmentation on GPUs—a comprehensive review. Med. Image Anal. 20: 1–18
  • 41. 
    Hajnal JV, Hill DL. 2001. Medical Image Registration. Boca Raton, FL: CRC
  • 42. 
    Viergever MA, Maintz JA, Klein S, Murphy K, Staring M, Pluim JP. 2016. A survey of medical image registration. Med. Image Anal. 33: 140–44
  • 43. 
    Schork N. 2015. Personalized medicine: time for one-person trials. Nature 520: 609–11
  • 44. 
    Wu G, Shen D, Sabuncu M. 2016. Machine Learning and Medical Imaging. Amsterdam: Elsevier
  • 45. 
    Zhou SK, Greenspan H, Shen D. 2017. Deep Learning for Medical Image Analysis. Amsterdam: Elsevier
  • 46. 
    Deguchi D, Mori K, Feuerstein M, Kitasaka T, Maurer CR Jr., et al. 2009. Selective image similarity measure for bronchoscope tracking based on image registration. Med. Image Anal. 13: 621–33
  • 47. 
    Luo X, Feuerstein M, Deguchi D, Kitasaka T, Takabatake H, Mori K. 2012. Development and comparison of new hybrid motion tracking for bronchoscopic navigation. Med. Image Anal. 16: 577–96
  • 48. 
    Merritt SA, Khare R, Bascom R, Higgins WE. 2013. Interactive CT–video registration for the continuous guidance of bronchoscopy. IEEE Trans. Med. Imaging 32: 1376–96
  • 49. 
    Luo X, Mori K. 2014. Discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking. IEEE Trans. Med. Imaging 33: 1248–61
  • 50. 
    Shen M, Giannarou S, Yang GZ. 2015. Robust camera localisation with depth reconstruction for bronchoscopic navigation. Int. J. Comput. Assist. Radiol. Surg. 10: 801–13
  • 51. 
    Zhang L, Wahle A, Chen Z, Zhang L, Downe RW, et al. 2015. Simultaneous registration of location and orientation in intravascular ultrasound pullbacks pairs via 3D graph–based optimization. IEEE Trans. Med. Imaging 34: 2550–61
  • 52. 
    Szeliski R. 2011. Computer Vision: Algorithms and Applications. London: Springer
  • 53. 
    Zhang Z. 2000. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22: 1330–34
  • 54. 
    Hartley R, Kang SB. 2007. Parameter-free radial distortion correction with center of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 29: 1309–21
  • 55. 
    Ramalingam S, Sturm P. 2017. A unifying model for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 39: 1309–19
  • 56. 
    Shiu Y, Ahmad S. 1989. Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX=XB. IEEE Trans. Robot. Autom. 5: 16–29
  • 57. 
    Tsai R, Lenz R. 1989. A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Trans. Robot. Autom. 5: 345–58
  • 58. 
    Horaud R, Dornaika F. 1995. Hand–eye calibration. Int. J. Robot. Res. 14: 195–210
  • 59. 
    Daniilidis K. 1999. Hand–eye calibration using dual quaternions. Int. J. Robot. Res. 18: 286–98
  • 60. 
    Heller J, Havlena M, Sugimoto A, Pajdla T. 2011. Structure-from-motion based hand–eye calibration using L∞-minimization. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3497–503. Piscataway, NJ: IEEE
  • 61. 
    Heller J, Havlena M, Pajdla T. 2016. Globally optimal hand–eye calibration using branch-and-bound. IEEE Trans. Pattern Anal. Mach. Intell. 38: 1027–33
  • 62. 
    Wognum S, Heethuis SE, Rosario T, Hoogeman MS, Bel A. 2014. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers. Med. Phys. 41: 071916
  • 63. 
    Hughes-Hallett A, Mayer E, Marcus H, Cundy T, Pratt P, et al. 2014. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology 83: 266–73
  • 64. 
    Inoue M, Yoshimura M, Sato S, Nakamura M, Yamada M, et al. 2015. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers. Med. Phys. 42: 1904–10
  • 65. 
    Tabrizi LB, Mahvash M. 2015. Augmented reality–guided neurosurgery: accuracy and intraoperative application of an image projection technique. J. Neurosurg. 123: 206–11
  • 66. 
    Klein T, Traub J, Hautmann H, Ahmadian A, Navab N. 2007. Fiducial-free registration procedure for navigated bronchoscopy. In Lecture Notes in Computer Science, vol. 4791: Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2007), ed. N Ayache, S Ourselin, A Maeder, pp. 475–82. Heidelberg, Ger.: Springer
  • 67. 
    Deguchi D, Feuerstein M, Kitasaka T, Suenaga Y, Ide I, et al. 2012. Real-time marker-free patient registration for electromagnetic navigated bronchoscopy: a phantom study. Int. J. Comput. Assisted Radiol. Surg. 7: 359–69
  • 68. 
    Hofstad EF, Sorger H, Leira HO, Amundsen T, Langø T. 2014. Automatic registration of CT images to patient during the initial phase of bronchoscopy: a clinical pilot study. Med. Phys. 41: 041903
    • Crossref
    • Web of Science ®
    • Google Scholar
    Article Location
  • 69. 
    Luo X. 2014. A bronchoscopic navigation system using bronchoscope center calibration for accurate registration of electromagnetic tracker and CT volume without markers. Med. Phys. 41: 061913
    • Crossref
    • Web of Science ®
    • Google Scholar
    Article Location
  • 70. 
    Luo X, Mori K. 2014. Real-time bronchoscope three-dimensional motion estimation using multiple sensor-driven alignment of CT images and electromagnetic measurements. Comput. Med. Imaging Graph. 38: 540–48
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 71. 
    Luo X, Wan Y, He X, Mori K. 2015. Adaptive marker-free registration using a multiple-point strategy for real-time and robust endoscope electromagnetic navigation. Comput. Methods Programs Biomed. 118: 147–57
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 72. 
    Baum Z, Ungi T, Lasso A, Fichtinger G. 2017. Usability of a real-time tracked augmented reality display system in musculoskeletal injections. Proc. SPIE 10135: 10135T
    • Google Scholar
    Article Location
  • 73. 
    Holler K, Penne J, Schneider A, Jahn J, Boronat J, et al. 2009. Endoscopic orientation correction. In Lecture Notes in Computer Science, vol. 5761: Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2009), ed. G-Z Yang, DJ Hawkes, D Rueckert, A Noble, C Taylor, pp. 459–66. Heidelberg, Ger.: Springer
    • Crossref
    • Google Scholar
    Article Location
  • 74. 
    Luo X, Kitasaka T, Mori K. 2013. Externally navigated bronchoscopy using 2-D motion sensors: dynamic phantom validation. IEEE Trans. Med. Imaging 32: 1725–64
    • Web of Science ®
    • Google Scholar
    Article Locations:
    • Article Location
    • Article Location
  • 75. 
    Hekimian-Williams C, Grant B, Liu X, Zhang Z, Kumar P. 2010. Accurate localization of RFID tags using phase difference. In Proceedings of the 2010 IEEE International Conference on RFID, pp. 89–96. Piscataway, NJ: IEEE
    • Crossref
    • Google Scholar
    Article Location
  • 76. 
    Wille A, Broll M, Winter S. 2011. Phase difference based RFID navigation for medical applications. In Proceedings of the 2011 IEEE International Conference on RFID, pp. 98–105. Piscataway, NJ: IEEE
    • Crossref
    • Google Scholar
    Article Location
  • 77. 
    Klayton T, Price R, Buyyounouski MK, Sobczak M, Greenberg R, et al. 2012. Prostate bed motion during intensity-modulated radiotherapy treatment. Int. J. Radiat. Oncol. Biol. Phys. 84: 130–36
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 78. 
    Curtis W, Khan M, Magnelli A, Stephans K, Tendulkar R, Xia P. 2013. Relationship of imaging frequency and planning margin to account for intrafraction prostate motion: analysis based on real-time monitoring data. Int. J. Radiat. Oncol. Biol. Phys. 85: 700–6
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 79. 
    Koizumi N, Sumiyama K, Suzuki N, Hattori A, Tajiri H, Uchiyama A. 2002. Development of three-dimensional endoscopic ultrasound system with optical tracking. In Lecture Notes in Computer Science, vol. 2488: Proceedings of the 3rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2002), ed. T Dohi, R Kikinis, pp. 60–65. Heidelberg, Ger.: Springer
    • Crossref
    • Google Scholar
    Article Location
  • 80. 
    Li Y, Aissaoui R, Lacoste M, Dansereau J. 2004. Development and evaluation of a new body-seat interface shape measurement system. IEEE Trans. Biomed. Eng. 51: 2040–50
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 81. 
    Housden RJ, Treece GM, Gee AH, Prager RW. 2007. Hybrid systems for reconstruction of freehand 3D ultrasound data. Tech. rep., Univ. Cambridge
    • Google Scholar
    Article Location
  • 82. 
    Feuerstein M, Reichl T, Vogel J, Traub J, Navab N. 2009. Magneto-optical tracking of flexible laparoscopic ultrasound: model-based online detection and correction of magnetic tracking errors. IEEE Trans. Med. Imaging 28: 951–67
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 83. 
    Soper TD, Haynor DR, Glenny RW, Seibel EJ. 2010. In vivo validation of a hybrid tracking system for navigation of an ultrathin bronchoscope within peripheral airways. IEEE Trans. Biomed. Eng. 57: 736–45
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Locations:
    • Article Location
    • Article Location
  • 84. 
    Than TD, Alici G, Zhou H, Li W. 2012. A review of localization systems for robotic endoscopic capsules. IEEE Trans. Biomed. Eng. 59: 2387–99
    • Crossref
    • Medline
    • Web of Science ®
    • Google Scholar
    Article Location
  • 85. Bao G, Pahlavan K, Mi L. 2015. Hybrid localization of microrobotic endoscopic capsule inside small intestine by data fusion of vision and RF sensors. IEEE Sens. J. 15:2669–78
  • 86. Reichl T, Luo X, Menzel M, Hautmann H, Mori K, Navab N. 2013. Hybrid electromagnetic and image-based tracking of endoscopes with guaranteed smooth output. Int. J. Comput. Assist. Radiol. Surg. 8:955–65
  • 87. Luo X, Feuerstein M, Kitasaka T, Mori K. 2012. Robust bronchoscope motion tracking using sequential Monte Carlo methods in navigated bronchoscopy: dynamic phantom and patient validation. Int. J. Comput. Assist. Radiol. Surg. 7:371–87
  • 88. Luo X, Mori K. 2014. Robust endoscope motion estimation via an animated particle filter for electromagnetically navigated endoscopy. IEEE Trans. Biomed. Eng. 61:85–95
  • 89. Luo X, Jayarathne U, McLeod A, Mori K. 2014. Enhanced differential evolution to combine optical mouse sensor with image structural patches for robust endoscopic navigation. In Lecture Notes in Computer Science, vol. 8674: Proceedings of the 17th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2014), ed. P Golland, N Hata, C Barillot, J Hornegger, R Howe, pp. 340–48. Heidelberg, Ger.: Springer
  • 90. Luo X, Wan Y, He X. 2015. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion. Med. Phys. 42:1808–17
  • 91. Luo X, Wan Y, He X, Mori K. 2015. Observation-driven adaptive differential evolution and its application to accurate and smooth bronchoscope three-dimensional motion tracking. Med. Image Anal. 24:282–96
  • 92. Moreland K. 2013. A survey of visualization pipelines. IEEE Trans. Vis. Comput. Graph. 19:367–78
  • 93. Preim B, Bartz D. 2007. Visualization in Medicine: Theory, Algorithms, and Applications. San Francisco: Morgan Kaufmann
  • 94. Nielson G. 2003. On marching cubes. IEEE Trans. Vis. Comput. Graph. 9:283–97
  • 95. Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B. 2001. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 21:34–47
  • 96. Milgram P, Kishino F. 1994. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 77:1321–32
  • 97. Kang X, Azizian M, Wilson E, Wu K, Martin AD, et al. 2014. Stereoscopic augmented reality for laparoscopic surgery. Surg. Endosc. 28:2227–35
  • 98. Haouchine N, Cotin S, Peterlik I, Dequidt J, Lopez MS, et al. 2015. Impact of soft tissue heterogeneity on augmented reality for liver surgery. IEEE Trans. Vis. Comput. Graph. 21:584–97
  • 99. Barsom EZ, Graafland M, Schijven MP. 2016. Systematic review on the effectiveness of augmented reality applications in medical training. Surg. Endosc. 30:4174–83
  • 100. Bernhardt S, Nicolau SA, Soler L, Doignon C. 2017. The status of augmented reality in laparoscopic surgery as of 2016. Med. Image Anal. 37:66–90
  • 101. Cleary K, Peters TM. 2010. Image-guided interventions: technology review and clinical applications. Annu. Rev. Biomed. Eng. 12:119–42
  • 102. Simpson AL, Sun K, Pheiffer TS, Rucker DC, Sills AK, et al. 2014. Evaluation of conoscopic holography for estimating tumor resection cavities in model-based image-guided neurosurgery. IEEE Trans. Biomed. Eng. 61:1833–43
  • 103. Torres-Corzo JG, Rangel-Castilla L, Islas-Aguilar MA, Vecchia RRD. 2017. A novel approach of navigation-assisted flexible neuroendoscopy. Oper. Neurosurg. 2017:opx118
  • 104. Citardi MJ, Agbetoba A, Bigcas JL, Luong A. 2016. Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. Int. Forum Allergy Rhinol. 6:523–28
  • 105. Chu Y, Yang J, Ai D, Li W, Song H, et al. 2017. Registration and fusion quantification of augmented reality based nasal endoscopic surgery. Med. Image Anal. 42:241–56
  • 106. Gerlach T, Friebe MH. 2016. Image guided laryngoscopy versus laryngectomy surgery: patient safety and system review. Cogent Eng. 3:1256563
  • 107. Semmler M, Kniesburges S, Birk V, Ziethe A, Patel R, Dollinger M. 2016. 3D reconstruction of human laryngeal dynamics based on endoscopic high-speed recordings. IEEE Trans. Med. Imaging 35:1615–24
  • 108. Gill R, Zheng Y, Barlow J, Jayender J, Girard E, et al. 2015. Image-guided video assisted thoracoscopic surgery—phase I–II clinical trial. J. Surg. Oncol. 112:18–25
  • 109. Marchetti G, Valsecchi A, Indellicati D, Arondi S, Trigiani M, Pinelli V. 2015. Ultrasound-guided medical thoracoscopy in the absence of pleural effusion. Chest 147:1008–12
  • 110. Bush C, Postma G. 2013. Transnasal esophagoscopy. Otolaryngol. Clin. N. Am. 46:41–52
  • 111. Xuan Y, Hur H, Byun C, Han S, Cho Y. 2013. Efficacy of intraoperative gastroscopy for tumor localization in totally laparoscopic distal gastrectomy for cancer in the middle third of the stomach. Surg. Endosc. 27:4364–70
  • 112. Li M, Zuo X, Li Y. 2014. Virtual gastroscopy for the evaluation of stomach malignancy. Endoscopy 46:E320–21
  • 113. Roth HR, Hampshire TE, Helbren E, Hu M, Vega R, et al. 2014. Computer-assisted polyp matching between optical colonoscopy and CT colonography: a phantom study. Proc. SPIE 9036:903609
  • 114. Oda M, Kondo H, Kitasaka T, Furukawa K, Miyahara R, et al. 2017. Robust colonoscope tracking method for colon deformations utilizing coarse-to-fine correspondence findings. Int. J. Comput. Assist. Radiol. Surg. 12:39–50
  • 115. Robu MR, Edwards P, Ramalhinho J, Thompson S, Davidson B, et al. 2017. Intelligent viewpoint selection for efficient CT to video registration in laparoscopic liver surgery. Int. J. Comput. Assist. Radiol. Surg. 12:1079–88
  • 116. Collins JA, Weis JA, Heiselman JS, Clements LW, Simpson AL, et al. 2017. Improving registration robustness for image-guided liver surgery in a novel human-to-phantom data framework. IEEE Trans. Med. Imaging 36:1502–10
  • 117. KleinJan G, van den Berg N, Brouwer O. 2014. Optimisation of fluorescence guidance during robot-assisted laparoscopic sentinel node biopsy for prostate cancer. Eur. Urol. 66:991–98
  • 118. Mahara A, Khan S, Murphy EK, Schned AR, Hyams ES, Halter RJ. 2015. 3D microendoscopic electrical impedance tomography for margin assessment during robot-assisted laparoscopic prostatectomy. IEEE Trans. Med. Imaging 34:1590–601
  • 119. Wallerstedt A, Tyritzis S, Thorsteinsdottir T, Carlsson S. 2015. Short-term results after robot-assisted laparoscopic radical prostatectomy compared to open radical prostatectomy. Eur. Urol. 67:660–70
  • 120. Hayashi Y, Misawa K, Oda M, Hawkes DJ, Mori K. 2016. Clinical application of a surgical navigation system based on virtual laparoscopy in laparoscopic gastrectomy for gastric cancer. Int. J. Comput. Assist. Radiol. Surg. 11:827–36
  • 121. Dees-Ribbers HM, Pos FJ, Betgen A, Bex A, Hulshof MC, et al. 2013. Fusion of planning CT and cystoscopy images for bladder tumor delineation: a feasibility study. Med. Phys. 40:051713
  • 122. Mariappan P, Lavin V, Phua CQ, Khan SAA, Donat R, Smith G. 2017. Predicting grade and stage at cystoscopy in newly presenting bladder cancers—a prospective double-blind clinical study. Urology 109:134–39
  • 123. Ghani KR, Andonian S, Bultitude M, Desai M, Giusti G, et al. 2016. Percutaneous nephrolithotomy: update, trends, and future directions. Eur. Urol. 70:382–96
  • 124. Schneider A, Pezold S, Sauer A, Ebbing J, Wyler S, et al. 2014. Augmented reality assisted laparoscopic partial nephrectomy. In Lecture Notes in Computer Science, vol. 8674: Proceedings of the 17th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2014), ed. P Golland, N Hata, C Barillot, J Hornegger, R Howe, pp. 357–64. Heidelberg, Ger.: Springer
  • 125. Kuhn AWB, Ross JR, Bedi A. 2015. Three-dimensional imaging and computer navigation in planning for hip preservation surgery. Sports Med. Arthrosc. Rev. 23:e31–38
  • 126. Yan CH, Chiu KY, Ng FY, Chan PK, Fang CX. 2015. Comparison between patient-specific instruments and conventional instruments and computer navigation in total knee arthroplasty: a randomized controlled trial. Knee Surg. Sports Traumatol. Arthrosc. 23:3637–45
  • 127. Ruttkay T, Gotte J, Walle U, Doll N. 2015. Minimally invasive cardiac surgery using a 3D high-definition endoscopic system. Innov. Technol. Tech. Cardiothorac. Vasc. Surg. 10:431–34
  • 128. Hansen T, Majeed H. 2014. Endoscopic carpal tunnel release. Hand Clin. 30:47–53
  • 129. Kallewaard JW, Vanelderen P, Richardson J, Zundert JV, Heavner J, Groen GJ. 2013. Epiduroscopy for patients with lumbosacral radicular pain. Pain Pract. 14:365–77
  • 130. Yuan Y, Li B, Meng MQH. 2015. Improved bag of feature for automatic polyp detection in wireless capsule endoscopy images. IEEE Trans. Autom. Sci. Eng. 13:529–35
  • 131. Karargyris A, Koulaouzidis A. 2015. Odocapsule: next-generation wireless capsule endoscopy with accurate lesion localization and video stabilization capabilities. IEEE Trans. Biomed. Eng. 62:352–60
  • 132. Natali CD, Beccani M, Simaan N, Valdastri P. 2016. Jacobian-based iterative method for magnetic localization in robotic capsule endoscopy. IEEE Trans. Robot. 32:327–38
  • 133. Marcus HJ, Hughes-Hallett A, Cundy TP, Yang GZ, Darzi A, Nandi D. 2015. da Vinci robot-assisted keyhole neurosurgery: a cadaver study on feasibility and safety. Neurosurg. Rev. 38:367–71
  • 134. Morelli L, Di Franco G, Guadagni S, Rossi L, Palmeri M, et al. 2018. Robot-assisted total mesorectal excision for rectal cancer: case-matched comparison of short-term surgical and functional outcomes between the da Vinci Xi and Si. Surg. Endosc. 32:589–600
  • 135. McGee M, Rosen M, Marks J, Onders R, Chak A, et al. 2006. A primer on natural orifice transluminal endoscopic surgery: building a new paradigm. Surg. Innov. 13:86–93
  • 136. van der Stap N, Slump CH, Broeders IAMJ, van der Heijden F. 2014. Image-based navigation for a robotized flexible endoscope. In Lecture Notes in Computer Science, vol. 8899: Proceedings of the International Workshop on Computer-Assisted and Robotic Endoscopy (CARE 2014), ed. X Luo, T Reichl, D Mirota, T Soper, pp. 77–87. Heidelberg, Ger.: Springer
  • 137. Pullens HJM, van der Stap N, Rozeboom ED, Schwartz MP, van der Heijden F, et al. 2016. Colonoscopy with robotic steering and automated lumen centralization: a feasibility study in a colon model. Endoscopy 48:286–90
  • 138. Shi C, Luo X, Qi P, Li T, Song S, et al. 2017. Shape sensing techniques for continuum robots in minimally invasive surgery: a survey. IEEE Trans. Biomed. Eng. 64:1665–78
  • 139. Konda VJ, Waxman I. 2016. Endoscopic Imaging Techniques and Tools. Berlin: Springer
  • 140. Leggett C, Gorospe E, Chan D, Muppa P, Owens V, et al. 2016. Comparative diagnostic performance of volumetric laser endomicroscopy and confocal laser endomicroscopy in the detection of dysplasia associated with Barrett's esophagus. Gastrointest. Endosc. 83:880–88
  • 141. Vahrmeijer AL, Hutteman M, van der Vorst JR, van de Velde CJH, Frangioni JV. 2013. Image-guided cancer surgery using near-infrared fluorescence. Nat. Rev. Clin. Oncol. 10:507–18
  • 142. McLeod A, Baxter J, de Ribaupierre S, Peters T. 2014. Motion magnification for endoscopic surgery. Proc. SPIE 9036:90360C
  • 143. Luo X, McLeod A, Pautler S, Schlachta C, Peters T. 2017. Vision-based surgical field defogging. IEEE Trans. Med. Imaging 36:2021–30
  • 144. Malik H, Darwood A, Shaunak S, Kulatilake P, El-Hilly A, et al. 2015. Using 3D printing to create personalized brain models for neurosurgical training and preoperative planning. J. Surg. Res. 199:512–22

Figure 1  Modern endoscopy. (a) An intraoperative imaging system. (b) An endoscope. Images courtesy of Olympus Corporation, Japan.
Figure location: ...Modern endoscopy generally consists of intraoperative imaging systems and endoscopes (Figure 1)....

Figure 2  Stereoscopic endoscopic images (cross-eye view) acquired from robot-assisted prostatectomy.
Figure location: ...It provides on-site visualization of the operating field to assist surgeons in manipulating endoscopes and other surgical instruments to regions of interest during endoscopic interventions (Figure 2)....

Figure 3  (a) Convex endobronchial ultrasound and (b) an ultrathin radial ultrasonic probe used for lung cancer examination. Images courtesy of Olympus Corporation, Japan.
Figure location: ...endobronchial US) or an ultrathin radial ultrasonic probe through its working channel (Figure 3)....

Figure 4  Three-dimensional optical coherence tomography image of the optic nerve and surrounding retina. Image courtesy of Centre for Eye Health Research Group, University of New South Wales, Australia.
Figure location: ...OCT is commonly used for diagnosis and surgery of eye disease (25–27). 2D OCT images can be reconstructed to provide three-dimensional (3D) visualization (Figure 4)....

Figure 5  Workflow and general principles of most advanced endoscopic systems.
Figure location: ...The general principles of various AEN systems are computational anatomy, surgical navigation, intuitive visualization, and interactive software (Figure 5)....

Figure 6  A surgical planning and simulation system for bronchoscopic intervention.
Figure location: ...While computational anatomy promotes diagnostic, preoperative planning (Figure 6), and surgical simulation (Figure 7)...

Figure 7  Virtual pneumoperitoneum simulation for laparoscopic intervention.
Figure location: ...While computational anatomy promotes diagnostic, preoperative planning (Figure 6), and surgical simulation (Figure 7), ...

Figure 8  Flowchart of video-based tracking for advanced endoscopic navigation. Abbreviation: DoF, degrees of freedom.
Figure location: ...in the coordinate system of the 3D preoperative computational anatomy (Figure 8)....

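The tracking pipeline ultimately expresses the endoscope pose in the coordinate frame of the preoperative 3D anatomy, which requires a registration between tracker (or patient) space and image space. As a minimal illustration of the point-based rigid-body approach used with implanted fiducial markers (see reference 64), the following sketch recovers a rotation and translation from paired points via the SVD-based Kabsch solution; the synthetic marker positions and names here are illustrative assumptions, not taken from any system cited above.

```python
# Point-based rigid-body registration (Kabsch-style, SVD): align fiducial
# positions measured in one space with their counterparts in another.
# Minimal sketch on synthetic, noiseless data.
import numpy as np

def rigid_register(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 (least squares)."""
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic fiducials: apply a known transform, then recover it.
rng = np.random.default_rng(0)
pts = rng.uniform(-50.0, 50.0, size=(6, 3))      # six hypothetical markers (mm)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_register(pts, moved)

# Fiducial registration error (RMS); near zero for noiseless data.
fre = np.sqrt(np.mean(np.sum((pts @ R_est.T + t_est - moved) ** 2, axis=1)))
```

In practice the measured marker positions are noisy, so the fiducial registration error is nonzero and is reported alongside target registration error at clinically relevant points.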
Figure 9  Plane-based visualization of volumetric computed tomography data.
Figure location: ...allowing the surgeon to view patient 3D data in the axial, sagittal, and coronal directions (Figure 9)....

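Plane-based visualization of a CT volume reduces to indexing a 3D array along each axis. The sketch below extracts the three orthogonal planes through a chosen voxel from a synthetic volume; the (z, y, x) axis convention and the cuboid "organ" are assumptions for illustration only, not a DICOM or scanner standard.

```python
# Extract axial, coronal, and sagittal planes from a CT-like 3D array.
import numpy as np

vol = np.zeros((40, 64, 64), dtype=np.int16)   # synthetic volume, axes (z, y, x)
vol[15:25, 20:44, 20:44] = 300                 # a bright cuboid standing in for tissue

def orthogonal_slices(volume, z, y, x):
    """Return the three orthogonal planes through voxel (z, y, x)."""
    axial    = volume[z, :, :]   # transverse plane (fixed z)
    coronal  = volume[:, y, :]   # front-to-back plane (fixed y)
    sagittal = volume[:, :, x]   # left-to-right plane (fixed x)
    return axial, coronal, sagittal

axial, coronal, sagittal = orthogonal_slices(vol, 20, 32, 32)
```

A viewer built on this idea re-slices the same array at interactively chosen indices, optionally with window/level mapping of the intensities before display.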
Figure 10  (a) Vision-based and (b) electromagnetically driven bronchoscopic navigation systems.
Figure location: ...and various bronchoscopic navigation systems (Figure 10) have been described in the literature (47–50, 74, 83, 91)....

Figure 11  Near-IR fluorescence imaging used to identify vessels during robot-assisted laparoscopic partial nephrectomy. Image courtesy of Intuitive Surgical, Inc., United States.
Figure location: ...fluorescence- or firefly-guided endoscopic intervention (Figure 11) is a promising development in cancer diagnosis and treatment....

Figure 12  3D-printed liver model with marked vessels (a) that were used to guide laparoscopic liver surgery (b).
Figure location: ...surgeons can use these models to intuitively guide procedures in the operating room (Figure 12)....

Figure 13  The evolution of modern endoscopy in the foreseeable future.
Figure location: ...and interactive software are becoming established while simultaneously evolving for the next generation of navigated endoscopic systems (Figure 13)....


Table 1  Endoscopic navigation systems in clinical procedures

Procedure | Tool applied | Area(s) viewed | Orifice used | Navigation

Brain
Neuroendoscopy | Neuroendoscope | Brain | Small incision | Yes

Respiratory tract
Sinuscopy | Sinuscope | Nose | Nose | Yes
Laryngoscopy | Laryngoscope | Larynx | Mouth | Yes
Bronchoscopy | Bronchoscope | Lung/bronchi | Mouth | Yes
Thoracoscopy | Thoracoscope | Chest/lung | Small incision | Yes

Gastrointestinal tract
Esophagoscopy | Esophagoscope | Esophagus | Nose | NA
Gastroscopy | Gastroscope | Stomach | Mouth | NA
Colonoscopy | Colonoscope | Colon/rectum | Anus | Yes

Abdomen
Laparoscopy | Laparoscope | Liver/prostate | Small incision | Yes

Urinary tract
Cystoscopy | Cystoscope | Bladder | Urethra | NA
Nephroscopy | Nephroscope | Kidney | Small incision | NA
Nephrectomy | Laparoscope | Kidney | Small incision | Yes

Joints
Arthroscopy | Arthroscope | Joints | Small incision | Yes

Others: Otoscopy, colposcopy, hysteroscopy, amnioscopy, fetoscopy, and falloposcopy

Abbreviation: NA, not applicable. For procedures marked NA, we did not find literature reports of associated navigation or image-guided systems.
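For readers who want Table 1 in machine-readable form, a hypothetical Python encoding (the dictionary and field names are illustrative choices, not from the source) could map each procedure to its tool, access orifice, and navigation status:

```python
# Hypothetical encoding of three Table 1 rows; field names are illustrative.
PROCEDURES = {
    "bronchoscopy": {"tool": "bronchoscope", "viewed": "lung/bronchi",
                     "orifice": "mouth", "navigated": True},
    "gastroscopy":  {"tool": "gastroscope", "viewed": "stomach",
                     "orifice": "mouth", "navigated": False},  # NA in Table 1
    "colonoscopy":  {"tool": "colonoscope", "viewed": "colon/rectum",
                     "orifice": "anus", "navigated": True},
}

def has_navigation(procedure: str) -> bool:
    """True when Table 1 lists a reported navigation system for the procedure."""
    return PROCEDURES[procedure]["navigated"]

print(has_navigation("bronchoscopy"))  # True
```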
