An automated approach for early detection of diabetic retinopathy using SD-OCT images
Ahmed H. ElTanboly1, Agustina Palacio2, Ahmed M. Shalaby1, Andrew E. Switala1, Omar Helmy3, Shlomit Schaal3, Ayman El-Baz1,*
1 Department of Bioengineering, University of Louisville, 423 Lutz Hall, Louisville, KY
2 Uveitis and Retina Fellow, Consultores Oftalmologicos, Hospital Fernandez, Buenos Aires, Argentina
3 Department of Ophthalmology and Visual Sciences, University of Massachusetts Medical School, Worcester, MA
DOI: 10.2741/E817 Volume 10 Issue 2, pp.197-207
Published: 01 January 2018
*Corresponding author: Ayman El-Baz, E-mail: aselba01@louisville.edu
Abstract

The aim of this study was to demonstrate the feasibility of an automatic approach for early detection of diabetic retinopathy (DR) from SD-OCT images. Scans were prospectively collected through the fovea from 200 subjects and automatically segmented into 12 layers. Each layer was characterized by its thickness, tortuosity, and normalized reflectivity. For statistical analysis using mixed effects ANOVA, 26 diabetic patients without DR changes visible on funduscopic examination were matched with 26 controls according to age and sex. The INL was narrower in diabetes (p = 0.14), while the NFL (p = 0.04) and IZ (p = 0.34) were thicker. Tortuosity of the layers from the NFL through the OPL was greater in diabetes (all p < 0.1), while significantly greater normalized reflectivity was observed in the MZ and OPR (both p < 0.01) as well as the ELM and IZ (both p < 0.05). This novel automated method provides quantitative analysis of the changes that occur in each layer of the retina with diabetes. In turn, it carries the promise of a reliable non-invasive diagnostic tool for early detection of DR.

Key words

Spectral Domain Optical Coherence Tomography, SD-OCT, Retinal Segmentation, Reflectivity, Tortuosity, Thickness, Diabetic Retinopathy, DR

2. Introduction

Spectral domain optical coherence tomography (SD-OCT) is a widely used tool for the diagnosis and evaluation of retinal diseases. Utilizing interferometry, low coherence light is reflected from retinal tissue to produce a two-dimensional grayscale image of the retinal layers. Differences in reflectivity of retinal layers produce different intensities on SD-OCT scans, allowing for noninvasive visualization of distinct retinal layers (1-4). This detailed cross-sectional anatomy of the retina is often referred to as “in-vivo histology” and is instrumental in the assessment of several common retinal pathologies including diabetic retinopathy (DR), age-related macular degeneration (AMD), macular hole, macular edema, vitreo-macular traction (VMT), choroidal neovascularization, and epiretinal membrane. SD-OCT can also be used to assess retinal nerve fiber layer (RNFL) thickness for the evaluation of glaucoma (5). Retinal layer morphology and retinal thickness measurements are used to identify and measure retinal abnormalities such as macular edema, and these measurements are also used to monitor disease progression and response to treatment (1-3).

With the exception of retinal thickness measurements, current SD-OCT provides limited objective quantitative data, and therefore images must be subjectively interpreted by an eye specialist (1). As a result, findings are susceptible to human bias and error. Ideally, OCT data should be tracked quantitatively and objectively in order to monitor the progression of abnormalities as well as aid in the diagnosis of various pathologies.

The challenge with diseases such as DR is that the patient is not aware of the disease until the changes in the retina have progressed to a level at which treatment tends to be less effective. Therefore, automated early detection could limit the severity of the disease and assist ophthalmologists in investigating and treating it more efficiently.

The purpose of this study was to develop a novel automated algorithm that objectively quantifies three features, namely the reflectivity, tortuosity, and thickness of the retinal layers, from segmented OCT images, and to apply this algorithm to quantitatively distinguish normal and diabetic subjects (Figure 1).

Figure 1. A typical OCT scan of a normal subject showing the 12 distinct layers.

3. Materials and methods

The proposed method consists of three basic steps:

1. 12 distinct retinal layers are localized and segmented based on a novel joint model that combines shape, intensity, and spatial information. The shape prior is built using a subset of co-aligned training OCT images.

2. The three features are extracted from the segmented OCT images.

3. The layers showing significant differences based on statistical analysis are determined.

The mathematical formulation of the proposed joint model is given below.

3.1. Data collection

This study was reviewed and approved by the Institutional Review Board (IRB) at the University of Louisville School of Medicine. Following IRB approval, subjects were recruited at the Kentucky Lions Eye Center, University of Louisville Department of Ophthalmology and Visual Sciences, Louisville, Kentucky, between June 2015 and December 2015. Informed consent was obtained from all participants. Subjects with either normal retinas or diabetes, ranging in age from 10 to 79 years, were included in the study. Past medical history, ophthalmologic history, smoking status, and current medications were obtained via chart review and subject interview. Persons with a history significant for any retinal pathology, high myopia (defined as a refractive error less than or equal to −6.0 diopters), or tilted OCT images were excluded from participation in the study, as were controls with a history significant for diabetes mellitus. SD-OCT scans were prospectively collected from 200 different subjects (40 with diabetes and 160 non-diabetic controls free of retinal pathology) using the Zeiss Cirrus HD-OCT 5000. SD-OCT data were exported for analysis as 8-bit, greyscale raw files of size 1024 pixels × 1024 pixels × N slices, where N = 5 or 21. For N = 5, the field of view was 6 mm nasal-temporal (N-T) and 2 mm posterior-anterior (P-A), and the slice spacing was 0.25 mm. For N = 21, the field of view was 9 mm N-T and 2 mm P-A, and the slice spacing was 0.3 mm.

3.2. Automatic segmentation of twelve retinal layers

Let g be a grayscale image taking integer values in the range 0-255, and let m be the associated region map (segmented image) taking values from a set of labels 0-12. An input OCT image g, co-aligned to the training database, and its map m are described with a joint probability model (6):

P(g, m) = P(g | m) P(m),

which combines a conditional distribution of the images given the map, P(g | m), and an unconditional probability distribution of maps, P(m) = Psp(m) PV(m). Here, Psp(m) denotes a weighted shape prior, and PV(m) is a Gibbs probability distribution with potentials V that specifies a Markov-Gibbs random field (MGRF) model of spatially homogeneous maps m.
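
To make the combination of the three terms concrete, the following is a minimal sketch (not the authors' implementation) of how per-pixel intensity likelihoods P(g | m), the shape prior Psp(m), and an MGRF smoothness term can be fused into a label decision. The array shapes and the ICM-style update are illustrative assumptions.

```python
import numpy as np

def segment_joint(likelihood, shape_prior, beta=1.0, n_iter=5):
    """Fuse P(g|m), Psp(m), and a Potts-like MGRF term via ICM-style updates.

    likelihood, shape_prior: arrays of shape (H, W, L) for L labels (assumed).
    """
    H, W, L = likelihood.shape
    # Data terms: intensity likelihood and shape prior, in the log domain
    log_p = np.log(likelihood + 1e-12) + np.log(shape_prior + 1e-12)
    labels = log_p.argmax(axis=2)                  # initial map without smoothness
    for _ in range(n_iter):
        # Count agreeing neighbors over the 8-neighborhood for each candidate label
        agree = np.zeros((H, W, L))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            shifted = np.roll(labels, (dy, dx), axis=(0, 1))
            agree += shifted[..., None] == np.arange(L)
        labels = (log_p + beta * agree).argmax(axis=2)  # MGRF-regularized update
    return labels
```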

3.2.1. Adaptive shape model Psp (m)

In order to account for the inhomogeneity of the OCT images, the shape information is taken into account in the segmentation. The shape model is built using 12 OCT scans, selected from 6 men and 6 women. “Ground truth” segmentations of these scans were delineated under supervision of retina specialists. Using one of the optimal scans as a reference (no tilt, centrally located fovea), the others were co-registered using a thin plate spline (TPS) (7).

The same deformations were applied to their respective ground truth segmentations, which were then averaged to produce a probabilistic shape prior of the typical retina, i.e., each position (x, y) in the reference space is assigned a prior probability “P (m)” to lie within each of the 12 tissue classes.
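
As a concrete illustration, a shape prior of this kind can be sketched by averaging the co-registered ground-truth maps; the 13 labels (background plus 12 layers) and the array shapes below are assumptions.

```python
import numpy as np

def build_shape_prior(aligned_maps, n_labels=13):
    """Average co-registered (H, W) label maps into per-pixel class probabilities."""
    one_hot = [(m[..., None] == np.arange(n_labels)).astype(float)
               for m in aligned_maps]
    return np.mean(one_hot, axis=0)   # (H, W, n_labels); probabilities sum to 1
```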

An image to be segmented is first aligned to the shape database by a new technique integrating the TPS with multi-resolution edge tracking that identifies control points to initialize the alignment. First, the à trous algorithm (8) decomposes each scan by an undecimated wavelet transform. In the three-band appearance of the retina, two hyperreflective bands are separated by a hyporeflective band corresponding roughly to the layers from the ONL to the MZ. Contours following the gradient maxima of this wavelet component provide initial estimates of the vitreous/NFL, MZ/EZ, and RPE/choroid boundaries (Figure 2). The fourth gradient maximum could estimate the OPL/ONL boundary, but that edge is not sharp enough to be of use. These ridges in gradient magnitude were followed through scale space to the third wavelet component, corresponding to a scale of approximately 15 micrometers for the OCT scans used in this study. The foveal pit was then determined as the point of closest approach of the vitreous/NFL and MZ/EZ contours. Control points were then located on these boundaries at the foveal pit and at uniform intervals nasally and temporally therefrom. Finally, the optimized TPS was employed to align the input image to the shape database using the identified control points.
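
For illustration, a minimal sketch of an undecimated à trous decomposition follows, assuming the standard B3-spline scaling kernel; the boundary-contour tracking itself is omitted.

```python
import numpy as np
from scipy.ndimage import convolve1d

def a_trous(image, n_scales=3):
    """Undecimated à trous wavelet planes: at scale j, the B3-spline kernel
    [1, 4, 6, 4, 1]/16 is applied with 2**j - 1 zeros inserted between taps."""
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        taps = np.zeros(4 * 2**j + 1)
        taps[::2**j] = np.array([1, 4, 6, 4, 1]) / 16.0
        smooth = convolve1d(convolve1d(c, taps, axis=0, mode="reflect"),
                            taps, axis=1, mode="reflect")
        planes.append(c - smooth)   # wavelet plane at scale j
        c = smooth
    return planes, c                # detail planes and the residual smooth image
```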

Figure 2. Illustration of the basic steps of the proposed system framework.

3.2.2. First order intensity model

In order to make the segmentation adaptive and not biased to the shape information alone, we model the empirical gray level distribution of the OCT images. The first-order visual appearance of each image label is modeled by separating a mixed distribution of pixel intensities into individual components associated with the dominant modes of the mixture. The mixture is approximated using the linear combination of discrete Gaussians (LCDG) approach, which employs positive and negative Gaussian components and is based on a modified version of the classical Expectation Maximization (EM) algorithm. For detailed information, please refer to (9).
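
The LCDG fit itself is detailed in (9); as a rough stand-in, the sketch below separates the gray-level mixture with an ordinary Gaussian mixture fit by EM (scikit-learn), which illustrates the mode-separation step but lacks LCDG's negative components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_intensity_modes(image, n_modes=3):
    """Separate the empirical gray-level distribution into dominant modes (EM)."""
    g = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_modes, random_state=0).fit(g)
    # Per-pixel posterior over modes, usable as the appearance term P(g | m)
    return gmm.predict_proba(g).reshape(*image.shape, n_modes)
```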

3.2.3. Second-Order MGRF model PV (m)

For better spatial homogeneity of the segmentation, the MGRF model of dependencies between adjacent region labels is combined with the shape prior and the intensity model (9). This model is identified using the nearest pixels’ 8-neighborhood and analytical bi-valued Gibbs potentials. The potentials are approximated analytically from the empirical probability of equal label pairs in the training region maps.
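
A sketch of this estimation step follows; the simple Ising-like mapping from the equal-pair frequency f_eq to the potential value is an assumption standing in for the paper's analytical approximation.

```python
import numpy as np

def estimate_potential(train_maps):
    """Bi-valued Gibbs potential from the empirical frequency of equal label
    pairs over the 8-neighborhood; each unordered pair is counted once."""
    eq, total = 0, 0
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for m in train_maps:
        H, W = m.shape
        for dy, dx in offsets:
            # Compare each pixel with its (dy, dx) neighbor via shifted slices
            a = m[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
            b = m[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
            eq += (a == b).sum()
            total += a.size
    f_eq = eq / total
    return 2.0 * f_eq - 1.0   # assumed mapping: V_eq = -V_ne = 2 f_eq - 1
```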

The steps of the segmentation framework are illustrated in Figure 2, whereas Figure 3 shows segmentation results on different SD-OCT images from subjects in different decades of life. The performance of the proposed segmentation framework relative to manual segmentation was evaluated using the agreement coefficient (AC) and the Dice similarity coefficient (DSC) (10-11).
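
For reference, the DSC for one layer label reduces to the following few lines (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def dice(seg, truth, label):
    """Dice similarity coefficient for one layer label in two label maps."""
    a, b = seg == label, truth == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```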

Each component of this model and the segmentation algorithm are discussed in detail in (12).

Figure 3. Segmentation results for different OCT images in row (A) for normal (1-3), diabetic retinopathy (4), and AMD (5) cases. Results of the proposed approach are displayed in row (B). The DSC score is displayed above each result.

3.3. Feature extraction from SD-OCT images

Several quantitative features can be derived from the segmented SD-OCT images in order to optimally characterize retinal morphology. This paper specifically addresses four distinct retinal features that are extracted from the segmented OCT scans. The first feature is the “reflectivity” of the retinal layers, which was obtained from two regions per scan, comprising the thickest portions of the retina on the nasal and temporal sides of the foveal peak. Mean reflectivity is expressed on a normalized scale, calibrated such that the formed vitreous has a mean value of 0 units on the normalized reflectivity scale (NRS) and the retinal pigment epithelium has a mean value of 1000 NRS. The average grey level within a segment was calculated using Huber’s M-estimate, which is resistant to outlying values such as very bright pixels in the innermost segment that properly belong to the internal limiting membrane and not the NFL. Average grey levels were converted to NRS units via an offset and uniform scaling. Statistical analysis employed ANCOVA (13) on a full factorial design with factors gender, side of the fovea (nasal or temporal), and retinal layer, and continuous covariate age. The second feature is the “tortuosity” of the retinal layers, which comprises the curvature values at each point across the layer. First, a locally weighted polynomial is applied to smooth the surface; then the Menger curvature is calculated at each point. The third feature is the “thickness” of the retinal layers, which uses Laplace’s equation to calculate the streamlines between corresponding points on the two bounding surfaces of each retinal layer (14). The fourth and last feature is the “foveal angle”, which is calculated for both the nasal and temporal sides. The angle is estimated between the normal at the foveal peak and the line connecting the foveal peak and the peak estimated on the nasal/temporal side (Figure 4).
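
Two of these computations are simple enough to sketch directly: the Menger curvature of consecutive point triplets along a smoothed boundary, and the offset-and-scale conversion of mean grey levels to NRS units. Both functions below are illustrative sketches under those stated assumptions, not the authors' code.

```python
import numpy as np

def menger_curvature(x, y):
    """Menger curvature 4*area/(|p1p2| |p2p3| |p3p1|) at each interior point."""
    pts = np.column_stack([x, y])
    p1, p2, p3 = pts[:-2], pts[1:-1], pts[2:]
    cross = ((p2[:, 0] - p1[:, 0]) * (p3[:, 1] - p1[:, 1])
             - (p2[:, 1] - p1[:, 1]) * (p3[:, 0] - p1[:, 0]))  # 2 * triangle area
    d12 = np.linalg.norm(p2 - p1, axis=1)
    d23 = np.linalg.norm(p3 - p2, axis=1)
    d13 = np.linalg.norm(p3 - p1, axis=1)
    return 2.0 * np.abs(cross) / (d12 * d23 * d13 + 1e-12)

def to_nrs(mean_grey, vitreous_mean, rpe_mean):
    """Offset and uniform scaling so vitreous maps to 0 NRS and RPE to 1000 NRS."""
    return 1000.0 * (np.asarray(mean_grey) - vitreous_mean) / (rpe_mean - vitreous_mean)
```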

Figure 4. Illustrative images of the 4 extracted features: (a) reflectivity, (b) tortuosity, (c) thickness, (d) foveal angles.

4. Results

This section addresses the experimental results of applying the proposed approach to the collected images, followed by analysis of the three features across different decades of life. The proposed novel segmentation approach was first validated using “ground truth”, which was collected from 200 subjects aged 10-79 years. Subjects with high myopia (refractive error less than −6.0 diopters) or tilted OCT images were excluded. This ground truth was created by manual delineation of the retinal layers, reviewed with different retina specialists (SS, AP, AH, DS).

Figure 3 shows the segmentation of the 12 distinct retinal layers for different examples. In addition to the visual results in this figure, the robustness and accuracy of our approach are evaluated using the AC and DSC metrics as well as the average deviation (AD) distance metric, comparing our segmentation with the ground truth. The mean boundary error was 6.87 micrometers from ground truth, averaged across all 13 boundaries. The NFL/vitreous boundary of the retina was placed most accurately, with 2.78 µm mean error. The worst performance was on the RPE/choroid boundary, with 11.6 µm mean error. By contrast, only the RPE/choroid boundary was reliably detected by the other approach (15). Table 1 summarizes the quantitative comparison of our segmentation method and the other method versus the ground truth, based on the three evaluation metrics for all subjects. Statistical analysis using a paired t-test demonstrates a significant advantage of our segmentation method over the other method in terms of all three metrics, as confirmed by p < 0.05. This analysis clearly demonstrates the promise of the developed approach for the segmentation of OCT scans (Figure 1).
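
The paired comparison in Table 1 corresponds to a test of the following form (a sketch with synthetic placeholder values; the real analysis used the per-subject DSC, AC, and AD metrics):

```python
import numpy as np
from scipy.stats import ttest_rel

# Synthetic per-subject DSC values for the two methods, for illustration only
rng = np.random.default_rng(1)
dsc_ours = rng.normal(0.763, 0.16, size=200)
dsc_other = rng.normal(0.41, 0.26, size=200)

t_stat, p_value = ttest_rel(dsc_ours, dsc_other)   # paired t-test per subject
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```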

Table 1. Comparative segmentation accuracy of the proposed segmentation technique and the other method versus the “ground truth”
Method | DSC | AC, % | AD, µm
Our segmentation | 0.763 ± 0.1598 | 73.2 ± 4.46 | 6.87 ± 2.78
The other method (15) | 0.41 ± 0.26 | 32.25 ± 9.7 | 15.1 ± 8.6
p-value | < 0.0001 | < 0.0001 | < 0.00395

After segmenting the 12 retinal layers and extracting their features, statistical analysis was conducted on the three features (tortuosity, reflectivity, and thickness) for all 12 layers extracted from the available subjects. The purpose of this analysis was to determine whether these features can discriminate between normal and diabetic subjects. The statistical results are shown in Table 2. According to the unpaired t-test results, the tortuosity of the INL, the reflectivity of the MZ, and the thickness of the NFL show statistically significant differences between normal and diseased cases (p < 0.05). These results encouraged us to explore the classification potential of those features on these three layers.

Each of 26 diabetic patients was paired with a control case matched for sex and age (plus or minus 1 year) and free of retinal pathology. Statistical analysis employed mixed effects ANOVA, with fixed effects diagnosis, retinal layer, and side of fovea (nasal or temporal) in a full factorial design. There was a random intercept for each case pair, nested within age, nested within sex. The three measurements were tested individually with univariate ANOVA. Post hoc testing per retinal layer used the same design, excluding the layer effect and its interactions, and applied a Bonferroni correction for multiple comparisons. The statistical summary is given in Table 2, and a hedged sketch of this model specification follows the table.

Table 2. The statistical analysis results. All values are represented as Mean (StD)
Layer | Reflectivity, Normal | Reflectivity, Diabetic | Tortuosity (mm−1), Normal | Tortuosity (mm−1), Diabetic | Thickness (µm), Normal | Thickness (µm), Diabetic
Nerve Fiber Layer (NFL) | 667 (147) | 665 (165) | 0.704 (1.72) | 0.738 (2.03) | 19.13 (12.98) | 21.03 (12.17)
Ganglion Cell Layer (GCL) | 584 (103) | 590 (111) | 0.891 (3.68) | 1.02 (4.63) | 34.73 (18.17) | 31.51 (17.93)
Inner Plexiform Layer (IPL) | 638 (96.8) | 631 (117) | 0.861 (3.71) | 1.15 (4.74) | 33.72 (15.29) | 33.62 (16.19)
Inner Nuclear Layer (INL) | 479 (95.8) | 464 (101) | 1.19 (4.85) | 2.17 (6.18) | 32.01 (15.52) | 27.91 (15.09)
Outer Plexiform Layer (OPL) | 546 (93.7) | 514 (108) | 1.17 (4.86) | 1.75 (5.62) | 27.20 (14.31) | 30.63 (13.94)
Outer Nuclear Layer (ONL) | 303 (74.3) | 292 (108) | 0.842 (3.13) | 1.72 (4.95) | 67.07 (23.62) | 61.60 (22.62)
External Limiting Membrane (ELM) | 396 (135) | 465 (236) | 0.134 (0.247) | 0.102 (0.164) | 13.78 (3.71) | 14.71 (2.99)
Myoid Zone (MZ) | 414 (160) | 512 (309) | 0.113 (0.196) | 0.101 (0.168) | 15.51 (4.34) | 14.30 (3.96)
Ellipsoid Zone (EZ) | 996 (189) | 1038 (281) | 0.115 (0.202) | 0.111 (0.198) | 14.55 (3.66) | 14.97 (3.05)
Outer Photoreceptor (OPR) | 823 (209) | 883 (271) | 0.126 (0.249) | 0.061 (0.009) | 14.85 (5.54) | 15.10 (5.51)
Interdigitation Zone (IZ) | 1121 (180) | 1164 (191) | 0.083 (0.138) | 0.061 (0.093) | 14.31 (3.81) | 15.56 (3.10)
Retinal Pigment Epithelium (RPE) | 1046 (97.9) | 1041 (135) | 0.077 (0.165) | 0.057 (0.106) | 23.48 (6.39) | 23.34 (6.75)
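
A hedged sketch of this model specification follows, using statsmodels' mixed linear model as an approximation of the mixed effects ANOVA. The data frame is synthetic, the column names are illustrative assumptions, and the nesting within age and sex is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: 26 matched pairs, two sides, a subset of layers
rng = np.random.default_rng(0)
layers = ["NFL", "GCL", "IPL", "INL", "OPL", "ONL"]
rows = [{"pair": p, "diagnosis": d, "side": s, "layer": l,
         "reflectivity": rng.normal(600.0, 100.0)}
        for p in range(26) for d in ("control", "diabetic")
        for s in ("nasal", "temporal") for l in layers]
df = pd.DataFrame(rows)

# Full factorial fixed effects with a random intercept per matched case pair
model = smf.mixedlm("reflectivity ~ diagnosis * layer * side", df, groups=df["pair"])
print(model.fit().summary())
```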

4.1. Normalized reflectivity

Normalized reflectivity varied significantly by diagnosis and retinal layer (interaction F = 5.73, 11 numerator d.f., 1167 denominator d.f., p < 0.0001). The main effect of diagnosis was also significant (F = 59.9, p < 0.0001). The interactions of diagnosis with side, and with side and layer, were insignificant (F = 0.27, p = 0.99 and F = 2.74, p = 0.098). Reflectivity in diabetic subjects was 67.4 units greater overall than in their matched controls. The greatest difference in reflectivity was found in the MZ and OPR, followed by the ELM, EZ, and IZ (Figure 5).

Figure 5. Mean reflectivity per layer for diabetes and control cases. Error bars are 1 standard deviation. ** p (Control = Diabetes) < 0.05.

4.2. Tortuosity

Tortuosity varied significantly by diagnosis (F = 55, p < 0.0001), again showing a significant diagnosis × layer interaction (F = 13.4, p < 0.0001), but no diagnosis × side (F = 0.005, p = 0.99) or diagnosis × side × layer (F = 0.29, p = 0.99) interactions. Post hoc testing revealed significant differences anterior to the ONL, where tortuosity averaged 0.25 mm−1, 0.35 mm−1, 1.07 mm−1, 0.84 mm−1, and 0.85 mm−1 greater in the NFL, GCL, IPL, INL, and OPL, respectively, in diabetes compared to control (Figure 6).

Figure 6. Mean tortuosity of layer boundaries for diabetes and control cases. Error bars are 1 standard deviation. ** p (Control = Diabetes) < 0.05.

4.3. Thickness

Thickness followed the same pattern as reflectivity and tortuosity, with a significant diagnosis × layer interaction (F = 6.24, p < 0.0001). The main effect of diagnosis did not differ significantly from zero, however (F = 0.63, 1 n.d.f., p = 0.43). The diagnosis × side (F = 0.009, 1 n.d.f., p = 0.93) and diagnosis × side × layer (F = 1.77, 11 n.d.f., p = 0.054) interaction effects were also insignificant. The most pronounced differences in thickness were a wider NFL and IZ and a narrower INL in diabetes (Figure 7).

Figure 7. Mean thickness per layer for diabetes and control cases. Error bars are 1 standard deviation. ** p (Control = Diabetes) < 0.05.

5. Discussion

Ophthalmic OCT, first introduced in 1991 by Huang et al., practically revolutionized ophthalmic practice (4). SD-OCT is an essential part of the modern retinal evaluation, providing invaluable clinical information that is otherwise unavailable. The basic SD-OCT image is a histology-equivalent optical reflectivity B-scan retinal section. To date, all SD-OCT images are manually interpreted by an ophthalmologist on the basis of anatomic appearance and human pattern recognition. The need for automated processing and unbiased interpretation of retinal scans is therefore pertinent. Accurate, reproducible, automated SD-OCT image analysis will enable earlier identification of retinal conditions, enable better follow-up strategies and plans, eliminate human errors, and allow more efficient and cost-effective patient care. Although preliminary automated image processing exists in some commercially available SD-OCT models, it is currently limited to retinal thickness, retinal volume, and partial retinal segmentation.

Segmentation of retinal layers from SD-OCT images has been previously attempted by several groups, and several notable achievements and pitfalls are worth discussing. Ishikawa et al. developed an automated algorithm that identifies four retinal layers using an adaptive thresholding technique (16); it failed with poor-quality images and also with some good-quality ones. Bagci et al. proposed an automated algorithm that extracted seven retinal layers using a customized edge-enhancement filter to overcome uneven tissue reflectivity (17); however, further work is needed to apply this algorithm to more advanced retinal abnormalities. Mishra et al. applied an optimization scheme to identify seven retinal layers (18), but the algorithm could not separate highly reflective image features. Chiu et al. proposed an automated approach for segmenting retinal layers using graph theory along with dynamic programming, which reduced processing time (15); yet the algorithm worked only with high-contrast images. Another automated approach was proposed by Rossant et al. to segment eight retinal layers using active contours, k-means clustering, and Markov random fields (19); this method performed well even when retinal blood vessels shaded the layers, but failed on blurry images. Kajic et al. developed an automated approach to segment eight layers using a large number of manually segmented images as input to a statistical model (20); supervised learning was performed by applying knowledge of the expected shapes of structures, their spatial relationships, and their textural appearances. Yang et al. devised an approach to segment eight retinal layers using gradient information in dual scales, utilizing local and complementary global gradient information simultaneously (21); this algorithm showed promise in segmenting both healthy and diseased scans, yet more work is needed to evaluate it on retinas affected by outer/inner retinal diseases. Yazdanpanah et al. presented a semi-automated approach to extract nine layers from OCT images using Chan and Vese’s energy-minimizing active contour without edges model (22); the algorithm incorporated a shape prior based on expert anatomical knowledge of the retinal layers, but it required user initialization and was never tested on human or diseased retinas. Ghorbel et al. proposed a method for segmenting eight retinal layers based on active contours and a Markov random field model (23); a Kalman filter was also designed to model the approximate parallelism between photoreceptor segments. Dufour et al. proposed an automatic graph-based multi-surface segmentation algorithm that added prior information from a learnt model by internally employing soft constraints (24). Yin et al. applied a user-guided segmentation method in which lines are first manually drawn at irregular regions where automatic approaches fail to segment (25); the algorithm is then guided by these traced lines to trace the 3D retinal layers using edge detectors based on robust likelihood estimators. Ehnes et al. developed a graph-based algorithm for retinal segmentation that could segment up to eleven layers in images from different devices (26). Srimathi et al. applied an algorithm for retinal layer segmentation that first reduces speckle noise in OCT images and then extracts layers with a method combining an active contour model and diffusion maps (27). Tian et al. proposed a real-time automated segmentation method implemented using the shortest path between two end nodes (28); this was combined with other techniques, such as masking and region refinement, to exploit the spatial information of adjacent frames.

One study reports that 21 percent of type 2 diabetics have some form of retinopathy at the time of diagnosis, and that during the first two decades of the disease, 61 percent have retinopathy (29). Diagnosing diabetes before further complication is the goal of numerous researchers. Fundus photography provides a wide view of the retina, with the picture taken through a dilated pupil, but the images are 2D and convey no information about depth. OCT, by contrast, provides 3D information and allows the morphology of the different retinal layers to be distinguished. Our approach to automated pre-DR identification is based on the analysis of 2D slices from the macular OCT volume. Mizutani et al. investigated a computerized method for the detection of microaneurysms, considered an early sign of DR, on retinal fundus images (30). Their scheme was developed using training cases; when evaluated, its sensitivity for detecting microaneurysms was 65 percent at 27 false positives per image. Jaafar et al. suggested an automated method for the detection of hard and soft exudates, among the earliest signs of diabetic retinopathy, in fundus images, though its success depends on the presence of such candidates (31). Pachiyappan et al. describe a system for detecting macular abnormalities caused by DR by applying morphological operations, filters, and thresholds to the patient’s fundus images (32).

Durkin et al. patented a Raman spectroscopy technology for detecting molecular changes underlying ocular pathologies, but this invention requires extensive techniques and equipment, a technician to operate the laser, and an expert ophthalmologist to interpret the results (33). You and co-workers patented a fundus camera with infrared-based technology for detecting and monitoring DR (34); although useful for non-invasive detection and monitoring of diabetic retinopathy, the method is primarily based on the fundus camera, whose limitation to 2D imaging without depth detail remains a disadvantage of the technique.

The above discussion demonstrates that there are limitations associated with retinal layer segmentation, such as the low accuracy achieved on images with a low signal-to-noise ratio (SNR), and the fact that the majority of the proposed approaches could segment only up to eight retinal layers, while methods that segmented more layers were successful only with high-contrast images. Most systems for early DR detection introduced in the literature have been based on fundus images. Fundus photography uses the same concept as the indirect ophthalmoscope to obtain a wide view of the retina. One reason fundus pictures are more common is that they can give a good presentation of systemic diseases. However, a crucial drawback is that they are 2D images with no appreciation of depth. To the best of our knowledge, there are no systems in the literature that aim at early detection of DR using OCT scans, and we are the first group to propose such a system.

The automated data analysis revealed subtle but clinically significant quantitative characteristics of the retinal layers and demonstrated significant feature changes across the decades of life and between layers. The framework includes a new approach for the segmentation of the 12 distinct retinal layers. Application of the proposed approach yields promising results that could, in the near future, replace the use of current technologies for early detection of DR. The retina, being a direct derivative of the brain, cannot heal and does not regenerate. To date, retinal diseases are detected after substantial anatomical damage to the retinal architecture has already occurred. Successful treatment nowadays can only slow disease progression, or at best maintain present visual function. Revealing the normal topographic and age-dependent characteristics of retinal reflectivity, and defining the rates of normal age-related changes, will enable us to detect pre-disease conditions. This carries the promise of future preventive retinal medicine that allows early detection and early treatment of retinal conditions prior to the advanced, anatomy-distorting clinical findings recognizable today.

6. Acknowledgement

Ayman El-Baz and Shlomit Schaal share joint senior authorship of this article. This project was supported in part by the Coulter Translational Partnership Grant (Schaal and El-Baz 2015), by an unrestricted institutional grant from Research to Prevent Blindness (RPB), and by the University of Louisville Summer Research Scholar Program (Schaal and Neyer 2015). This project was also supported by Zeiss in the form of a Zeiss Cirrus HD-OCT 5000 machine loan to the University of Louisville. Two provisional patent applications have been filed regarding this technology: US Provisional Patent # 50290-6 and # 62/256,980 (Schaal and Hajrasouliha 2015).

References

    1. G.J. Jaffe, J. Caprioli: Optical coherence tomography to detect and manage retinal disease and glaucoma. Am J Ophthalmol 137, 156-169 (2004)
    DOI: 10.1016/S0002-9394(03)00792-X

    2. A. Baghaie, Z. Yu, R.M. D’Souza: State-of-the-art in retinal optical coherence tomography image analysis. Quant Imaging Med Surg 5, 603 (2015)

    3. L.M. Sakata, J. DeLeon-Ortega, V. Sakata, C.A. Girkin: Optical coherence tomography of the retina and optic nerve–a review. Clin Exp Ophthalmol 37, 90-99 (2009)
    DOI: 10.1111/j.1442-9071.2009.02015.x
    PMid:19338607

    4. D. Huang, E.A. Swanson, C.P. Lin, J.S. Schuman, W.G. Stinson, W. Chang, M.R. Hee, T. Flotte, K. Gregory, C.A. Puliafito: Optical coherence tomography. Science 254, 1178-1181 (1991)
    DOI: 10.1126/science.1957169
    PMid:1957169 PMCid:PMC4638169

    5. C.P. Gracitelli, R.Y. Abe, F.A. Medeiros: Spectral-domain optical coherence tomography for glaucoma diagnosis. Open Ophthalmol J 9, 68-77 (2015)
    DOI: 10.2174/1874364101509010068
    PMid:26069519 PMCid:PMC4460228

    6. A. El-Baz, A. Elnakib, F. Khalifa, M.A. El-Ghar, P. McClure, A. Soliman, G. Gimel’farb: Precise segmentation of 3-D magnetic resonance angiography. IEEE Trans Biomed Eng 59, 2019-2029 (2012)
    DOI: 10.1109/TBME.2012.2196434
    PMid:22547453

    7. J. Lim, M.H. Yang: A direct method for modeling non-rigid motion with thin plate spline. In: Computer Vision and Pattern Recognition, CVPR 2005, Vol 1, 1196-1202 (2005)

    8. E. Lega, H. Scholl, J.M. Alimi, A. Bijaoui, P. Bury: A parallel algorithm for structure detection based on wavelet and segmentation analysis. Parallel Comput 21, 265-285 (1995)
    DOI: 10.1016/0167-8191(94)00076-M

    9. A. Alansary, M. Ismail, A. Soliman, F. Khalifa, M. Nitzken, A. Elnakib, M. Mostapha, A. Black, K. Stinebruner, M.F. Casanova, J.M. Zurada, A. El-Baz: Infant brain extraction in T1-weighted MR images using BET and refinement using LCDG and MGRF models. IEEE J Biomed Health Inform 20, 925-935 (2016)
    DOI: 10.1109/JBHI.2015.2415477
    PMid:25823048

    10. L.R. Dice: Measures of the amount of ecologic association between species. Ecology 26, 297-302 (1945)
    DOI: 10.2307/1932409

    11. K.L. Gwet: Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol 61, 29-48 (2008)
    DOI: 10.1348/000711006X126600
    PMid:18482474

    12. A. ElTanboly, M. Ismail, A. Switala, M. Mahmoud, A. Soliman, T. Neyer, A. Palacio, et al.: A novel automatic segmentation of healthy and diseased retinal layers from OCT scans. In: Image Processing (ICIP), 2016 IEEE International Conference, 116-120 (2016)

    13. R.G. Lomax, D.L. Hahs-Vaughn: Statistical Concepts: A Second Course. Routledge, London (2013)

    14. F. Khalifa, G.M. Beache, G. Gimel’farb, G.A. Giridharan, A. El-Baz: Accurate automatic analysis of cardiac cine images. IEEE Trans Biomed Eng 59, 445-455 (2012)
    DOI: 10.1109/TBME.2011.2174235
    PMid:22057040

    15. S.J. Chiu, X.T. Li, P. Nicholas, C.A. Toth, J.A. Izatt, S. Farsiu: Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Opt Express 18, 19413-19428 (2010)

    16. H. Ishikawa, D.M. Stein, G. Wollstein, S. Beaton, J.G. Fujimoto, J.S. Schuman: Macular segmentation with optical coherence tomography. Invest Ophthalmol Vis Sci 46, 2012-2017 (2005)
    DOI: 10.1167/iovs.04-0335
    PMid:15914617 PMCid:PMC1939723

    17. A.M. Bagci, M. Shahidi, R. Ansari, M. Blair, N.P. Blair, R. Zelkha: Thickness profiles of retinal layers by optical coherence tomography image segmentation. Am J Ophthalmol 146, 679-687 (2008)
    DOI: 10.1016/j.ajo.2008.06.010
    PMid:18707672 PMCid:PMC2590782

    18. A. Mishra, A. Wong, K. Bizheva, D.A. Clausi: Intra-retinal layer segmentation in optical coherence tomography images. Opt Express 17, 23719-23728 (2009)
    DOI: 10.1364/OE.17.023719
    PMid:20052083

    19. F. Rossant, I. Ghorbel, I. Bloch, M. Paques, S. Tick: Automated segmentation of retinal layers in OCT imaging and derived ophthalmic measures. In: Biomedical Imaging: From Nano to Macro, ISBI’09. 1370-1373 (2009)

    20. V. Kajić, B. Považay, B. Hermann, B. Hofer, D. Marshall, P.L. Rosin, W. Drexler: Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis. Opt Express 18, 14730-14744 (2010)
    DOI: 10.1364/OE.18.014730
    PMid:20639959

    21. Q. Yang, C.A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A.S. Raza, D.C. Hood, K. Chan: Automated layer segmentation of macular OCT images using dual-scale gradient information. Opt Express 18, 21293-21307 (2010)

    22. A. Yazdanpanah, G. Hamarneh, B.R. Smith, M.V. Sarunic: Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach. IEEE Trans Med Imag 30, 484-496 (2011)
    DOI: 10.1109/TMI.2010.2087390
    PMid:20952331

    23. I. Ghorbel, F. Rossant, I. Bloch, S. Tick, M. Paques: Automated segmentation of macular layers in OCT images and quantitative evaluation of performances. Pattern Recogn 44, 1590-1603 (2011)

    24. P.A. Dufour, L. Ceklic, H. Abdillahi, S. Schroder, S. De Dzanet, U. Wolf-Schnurrbusch, J. Kowal: Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints. IEEE Trans Med Imag 32, 531-543 (2013)
    DOI: 10.1109/TMI.2012.2225152
    PMid:23086520

    25. X. Yin, J.R. Chao, R.K. Wang: User-guided segmentation for volumetric retinal optical coherence tomography images. J Biomed Opt 19, 086020 (2014)

    26. A. Ehnes, Y. Wenner, C. Friedburg, M.N. Preising, W. Bowl, W. Sekundo, E.M. zu Bexten, K. Stieger, B. Lorenz: Optical coherence tomography (OCT) device independent intraretinal layer segmentation. Transl Vis Sci Technol 3, 1 (2014)

    27. M. Srimathi, A. Usha: Retinal layer segmentation of optical coherence tomography images with active contour model and diffusion map. Aust J Basic Appl Sci 9, 166-171 (2015)

    28. J. Tian, B. Varga, G.M. Somfai, W.H. Lee, W.E. Smiddy, D.C. DeBuc: Real-time automatic segmentation of optical coherence tomography volume data of the macular region. PloS One 10, e0133908 (2015)

    29. D.K. Karumanchi, E.G. Gaillard, J. Dillon: Early diagnosis of diabetes through the eye. Photochem Photobiol 91, 1497-1504 (2015)

    30. A. Mizutani, C. Muramatsu, Y. Hatanaka, S. Suemori, T. Hara, H. Fujita: Automated microaneurysm detection method based on double ring filter in retinal fundus images. In: SPIE Medical Imaging. 72601N (2009)

    31. H.F. Jaafar, A.K. Nandi, W. Al-Nuaimy: Automated detection of exudates in retinal images using a split-and-merge algorithm. In: EUSIPCO. 1622-1626 (2010)

    32. A. Pachiyappan, U.N. Das, T.V. Murthy, R. Tatavarti: Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images. Lipids Health Dis 11, 1 (2012)

    33. A.J. Durkin, M.N. Ediger, V.M. Chenault: Method for non-invasive identification of individuals at risk for diabetes. United States patent US 6,721,583 (2004)

    34. J. You, Q. Li, Z. Guo, Q. Sun, C. Li: Apparatus and method for non-invasive diabetic retinopathy detection and monitoring. United States patent US 9,089,288 (2015)
