
Deep Learning for Automatic Identification and Characterization of the Bleeding Potential of Enteric Protruding Lesions in Capsule Endoscopy

  • João Afonso∗
    Affiliations: Department of Gastroenterology, São João University Hospital, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
  • Miguel Mascarenhas∗
    Affiliations: Department of Gastroenterology, São João University Hospital, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal; Faculty of Medicine of the University of Porto, Porto, Portugal
  • Tiago Ribeiro
    Affiliations: Department of Gastroenterology, São João University Hospital, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
  • Hélder Cardoso
    Affiliations: Department of Gastroenterology, São João University Hospital, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal; Faculty of Medicine of the University of Porto, Porto, Portugal
  • Patrícia Andrade
    Affiliations: Department of Gastroenterology, São João University Hospital, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal; Faculty of Medicine of the University of Porto, Porto, Portugal
  • João P.S. Ferreira
    Affiliations: Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
  • Miguel Mascarenhas Saraiva
    Correspondence: Address correspondence to: Miguel Mascarenhas Saraiva, MD, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, Rua Oliveira Martins 104, Porto 4200-427, Portugal.
    Affiliations: ManopH Gastroenterology Clinic, Porto, Portugal
  • Guilherme Macedo
    Affiliations: Department of Gastroenterology, São João University Hospital, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal; Faculty of Medicine of the University of Porto, Porto, Portugal

  ∗ Authors share co-first authorship.
Open Access. Published: April 16, 2022. DOI: https://doi.org/10.1016/j.gastha.2022.04.008

      Background and Aims

Capsule endoscopy (CE) revolutionized the study of the small intestine, overcoming the limitations of conventional endoscopy. Nevertheless, reviewing CE images is time-consuming. Convolutional neural networks (CNNs) are an artificial intelligence architecture with high performance in image analysis. Protruding lesions of the small intestine exhibit enormous morphologic diversity in CE images. We aimed to develop a CNN-based algorithm for automatic detection of varied small-bowel protruding lesions.

      Methods

A CNN was developed using a pool of CE images containing protruding lesions or normal mucosa/other findings. A total of 2565 CE exams were included. These images were inserted into a CNN model with transfer learning. We evaluated the performance of the network by calculating its sensitivity, specificity, accuracy, positive predictive value, and negative predictive value.

      Results

A CNN was developed based on a total of 21,320 CE images. Training and validation data sets, comprising 80% and 20% of the total pool of images, respectively, were constructed for development and testing of the network. The algorithm automatically detected small-bowel protruding lesions with an accuracy of 97.1%. Our CNN had a sensitivity, specificity, positive predictive value, and negative predictive value of 95.9%, 97.1%, 83.0%, and 95.7%, respectively. The CNN operated at a rate of approximately 355 frames per second.

      Conclusion

      We developed an accurate CNN for automatic detection of enteric protruding lesions with a wide range of morphologies. The development of these tools may enhance the diagnostic efficiency of CE.

      Keywords

      Abbreviations used in this paper:

AI (artificial intelligence), AUROC (area under the receiver operating characteristic curve), CE (capsule endoscopy), CNN (convolutional neural network), OGIB (obscure gastrointestinal bleeding), ROC (receiver operating characteristic)

      Introduction

      Small-bowel tumors represent 5% of all tumors of the gastrointestinal tract. The exploration of the small bowel has always been difficult due to its inaccessibility to conventional endoscopic methods. Capsule endoscopy (CE) has revolutionized the approach to patients with suspected small-bowel disease, allowing minimally invasive visual inspection of its full length. CE has shown great clinical value in several distinct clinical settings, including obscure gastrointestinal bleeding (OGIB) and the diagnosis and monitoring of patients with Crohn’s disease.
      • Triester S.L.
      • Leighton J.A.
      • Leontiadis G.I.
      • et al.
      A meta-analysis of the yield of capsule endoscopy compared to other diagnostic modalities in patients with obscure gastrointestinal bleeding.
      ,
      • Le Berre C.
      • Trang-Poisson C.
      • Bourreille A.
      Small bowel capsule endoscopy and treat-to-target in Crohn's disease: a systematic review.
      Additionally, this technique is also indicated for the workup of patients with clinical or imaging suspicion of small-bowel tumors and patients with inherited polyposis syndromes.
      • Pennazio M.
      • Spada C.
      • Eliakim R.
      • et al.
      Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) clinical guideline.
      ,
      • Schwartz G.D.
      • Barkin J.S.
      Small-bowel tumors detected by wireless capsule endoscopy.
      The identification of enteric protruding lesions by CE is often difficult as these lesions have significant pleomorphism.
      • Korman L.Y.
      • Delvaux M.
      • Gay G.
      • et al.
      Capsule Endoscopy Structured Terminology (CEST): proposal of a standardized and structured terminology for reporting capsule endoscopy procedures.
      Moreover, these lesions frequently have a covert clinical progression, with patients frequently demonstrating nonspecific symptoms. OGIB, although not specific, is a cardinal feature in patients presenting with small-bowel protruding lesions of diverse etiology.
      • Schwartz G.D.
      • Barkin J.S.
      Small-bowel tumors detected by wireless capsule endoscopy.
      In fact, small-bowel tumors are responsible for up to 5% of all cases of OGIB.
      • Koulaouzidis A.
      • Rondonotti E.
      • Giannakou A.
      • et al.
      Diagnostic yield of small-bowel capsule endoscopy in patients with iron-deficiency anemia: a systematic review.
      The bleeding potential varies significantly between different lesions, and its assessment is a cornerstone for an accurate interpretation of CE findings. Saurin et al
      • Saurin J.C.
      • Delvaux M.
      • Gaudin J.L.
      • et al.
      Diagnostic value of endoscopic capsule in patients with obscure digestive bleeding: blinded comparison with video push-enteroscopy.
      have proposed a classification for ascertainment of the bleeding potential of several CE findings. This classification divides CE findings into 3 distinct categories according to their bleeding potential: no bleeding potential (P0), uncertain bleeding potential (P1), or high bleeding potential (P2). Lesions with high bleeding potential include vascular lesions (eg, angiectasia), large ulcers, and small-bowel tumors. Their prompt identification is essential for adequate acute and long-term patient management.
      • Saurin J.C.
      • Delvaux M.
      • Gaudin J.L.
      • et al.
      Diagnostic value of endoscopic capsule in patients with obscure digestive bleeding: blinded comparison with video push-enteroscopy.
Nevertheless, the evaluation of CE exams can be a burdensome task for a gastroenterologist. Indeed, each CE exam produces approximately 50,000 images, requiring approximately 30–120 minutes of reading time.
      • Leenhardt R.
      • Vasseur P.
      • Li C.
      • et al.
      A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy.
Moreover, CE exams are prone to overlooking significant lesions, owing to the marked pleomorphism of their findings and the fact that lesions may be restricted to a very small number of frames.
Large databases of CE images have spurred the development of computational tools to assist in reading CE exams. Most notably, the simultaneous increase in computational power has enabled the development of artificial intelligence (AI) tools for automated analysis of medical imaging. Recent evidence has delivered promising results regarding the application of these tools across distinct medical specialties.
      • Yasaka K.
      • Akai H.
      • Abe O.
      • et al.
      Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: a preliminary study.
      • Esteva A.
      • Kuprel B.
      • Novoa R.A.
      • et al.
      Dermatologist-level classification of skin cancer with deep neural networks.
      • Gargeya R.
      • Leng T.
      Automated identification of diabetic retinopathy using deep learning.
      The implementation of these tools in gastrointestinal endoscopy has suggested that these technological advances may lead to an increase in their diagnostic yield.
      • Repici A.
      • Badalamenti M.
      • Maselli R.
      • et al.
      Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial.
CE is expected to benefit significantly from the adoption of these tools, as they may help overcome significant CE drawbacks, particularly the time required for reading CE exams. To date, no AI algorithm has been developed for the detection and characterization of the bleeding potential of small-bowel protruding lesions. Thus, we aimed to develop a convolutional neural network (CNN)-based model for automatic detection and assessment of the bleeding potential of small-bowel protruding lesions in CE images.

      Methods

      Study Design

      A multicentric proof-of-concept study was designed for development and validation of a CNN model for automatic detection of protruding lesions and characterization of their bleeding potential. CE exams from 2 different institutions, São João University Hospital (Porto, Portugal) and ManopH Gastroenterology Clinic (Porto, Portugal), were retrospectively reviewed. A total of 2565 CE exams (1483 from São João University Hospital and 1082 from ManopH Gastroenterology Clinic) were performed in 2311 patients. The full-length CE video of all participants was reviewed. Inclusion and labelling of frames were performed by 3 gastroenterologists with experience in CE (M.M., H.C., and M.M.S., each with over 1500 CE exams prior to this study). The inclusion and final labelling of the frames was dependent on agreement of at least 2 of the 3 researchers.
The protocol of this study was approved by the ethics committee of São João University Hospital/Faculty of Medicine of the University of Porto (No. CE 407/2020). This retrospective study was conducted in accordance with the original and subsequent versions of the Declaration of Helsinki; thus, there was no interference with the conventional clinical management of each included patient. Any information deemed to potentially identify the subjects was omitted, and each patient was assigned a random number to guarantee effective data anonymization for the researchers involved in CNN development. A team with Data Protection Officer certification (Maastricht University) confirmed the nontraceability of data and conformity with the General Data Protection Regulation.

      CE Protocol

      All CE procedures used the PillCam SB3 system (Medtronic, Minneapolis, MN). This system incorporates 3 major components: the endoscopic capsule, a sensor belt connected to a data recorder, and a software program for image revision. The capsule measures 26.2 mm in length and 11.4 mm in width. It has a high-resolution camera with a 156° field of view. The automatically adjustable frame rate oscillates between 2 and 6 frames per second, depending on the speed of progression of the endoscopic capsule. The battery of the endoscopic capsule has an estimated life of ≥8 hours. The images were reviewed using the PillCam Software version 9.0 (Medtronic, Minneapolis, MN). Images were processed to remove possible patient-identifying information (name, operating number, date of procedure). Each extracted frame was stored and assigned a consecutive number. Fasting and bowel preparation before the CE exam were performed according to previously issued recommendations.
      • Rondonotti E.
      • Spada C.
      • Adler S.
      • et al.
      Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) technical review.

      Classification of Lesions

      Each frame was evaluated for the presence of enteric protruding lesions (polyps, epithelial tumors, subepithelial lesions and nodules).
      • Korman L.Y.
      • Delvaux M.
      • Gay G.
      • et al.
      Capsule Endoscopy Structured Terminology (CEST): proposal of a standardized and structured terminology for reporting capsule endoscopy procedures.
      The hemorrhagic potential of these lesions was estimated according to Saurin’s classification
      • Saurin J.C.
      • Delvaux M.
      • Gaudin J.L.
      • et al.
      Diagnostic value of endoscopic capsule in patients with obscure digestive bleeding: blinded comparison with video push-enteroscopy.
      : P0, no hemorrhagic potential; P1, uncertain/intermediate hemorrhagic potential; P2, high hemorrhagic potential. Protruding lesions were considered as P2 when large (≥10 mm), ulcerated, or when hemorrhagic stigmata were present. These lesions were classified as P1 when small (<10 mm) and with intact overlying mucosa (eg, subepithelial lesions). CE images were included regardless of bowel preparation quality (ie, images with both good and poor bowel preparation quality were selected).
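The labelling rule above can be expressed as a simple decision function. This is an illustrative sketch only; the function name and boolean inputs are our own, not part of the study's protocol:

```python
def classify_protruding_lesion(size_mm, ulcerated, hemorrhagic_stigmata):
    """Assign a Saurin hemorrhagic-potential class to a protruding lesion.

    P2 (high potential): large (>=10 mm), ulcerated, or showing
    hemorrhagic stigmata.  P1 (uncertain/intermediate potential):
    small (<10 mm) with intact overlying mucosa.
    """
    if size_mm >= 10 or ulcerated or hemorrhagic_stigmata:
        return "P2"
    return "P1"
```

In practice the classification is assigned by the reviewing gastroenterologists; the function only encodes the stated size/ulceration criteria.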

      Development of the CNN

      From the collected pool of images (n = 21,320), 2945 showed protruding lesions (P1, 1860 frames; P2, 1085 frames). The remaining images displayed normal small-bowel mucosa. The full image data set was split into 2 distinct sets, for constitution of training and validation data sets. The training data set comprised 80% of the consecutively extracted images (n = 17,056). The remaining 20% were used as the validation data set (n = 4264). The validation data set was used for assessing the performance of the CNN. A flowchart summarizing the study design and image selection for the development (training and validation) of the CNN is presented in Figure 1.
      Figure 1Study flow chart for the training and validation phases. AUROC, area under the receiver operating characteristic curve; CE, capsule endoscopy; CNN, convolutional neural network; N, normal small-bowel mucosa; P1P, protruding lesions with uncertain hemorrhagic potential (P1); P2P, protruding lesions with high hemorrhagic potential (P2).
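The 80/20 partition of the image pool described above can be sketched as follows (illustrative only; the placeholder frame list and split helper are ours, not the authors' code):

```python
def split_dataset(frames, train_fraction=0.8):
    """Split an ordered collection of frames into training and validation sets."""
    cutoff = int(len(frames) * train_fraction)
    return frames[:cutoff], frames[cutoff:]

# With the study's pool of 21,320 images, an 80/20 split yields
# 17,056 training frames and 4264 validation frames.
frames = list(range(21320))  # placeholder stand-ins for CE images
train, valid = split_dataset(frames)
```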
      To create the CNN, we used the Xception model
      • Chollet F.
      Xception: deep learning with depthwise separable convolutions.
      with its weights trained on ImageNet (a large-scale image data set aimed for use in development of object recognition software).
      • Deng J.
      • Dong W.
      • Socher R.
      • et al.
      ImageNet: a large-scale hierarchical image database.
To transfer this learning to our data, we kept the convolutional layers of the model, removed the last fully connected layers, and attached fully connected layers matched to the number of classes used to classify our endoscopic images. We used 2 blocks, each having a fully connected layer followed by a Dropout layer with a drop rate of 0.3. Following these 2 blocks, we added a Dense layer with a size equal to the number of categories (4). A learning rate of 0.0001, a batch size of 22, and 100 training epochs were set by trial and error. We used the TensorFlow 2.3 and Keras libraries to prepare the data and run the model.
      • Abadi M.
      • Barham P.
      • Chen J.
      • et al.
      TensorFlow: a system for large-scale machine learning.
To minimize bias toward the most prevalent class (normal mucosa), we applied data augmentation techniques to the images of the P1 and P2 categories and used class-weighted gradients. The analyses were performed on a computer equipped with a 2.1 GHz Intel Xeon Gold 6130 processor (Intel, Santa Clara, CA) and two NVIDIA Quadro RTX 4000 graphics processing units (NVIDIA Corporation, Santa Clara, CA).
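Class-weighted gradients of the kind mentioned above are commonly derived from inverse class frequencies. The sketch below uses the study's overall image counts, but the balanced-weight formula is a standard heuristic, not necessarily the authors' exact choice:

```python
def balanced_class_weights(counts):
    """Inverse-frequency class weights: n_samples / (n_classes * class_count).

    Rare classes receive proportionally larger weights, so their
    gradients contribute more during training.
    """
    total = sum(counts.values())
    n_classes = len(counts)
    return {label: total / (n_classes * n) for label, n in counts.items()}

# Overall image pool: 18,375 normal frames, 1860 P1 frames, 1085 P2 frames.
weights = balanced_class_weights({"Normal": 18375, "P1": 1860, "P2": 1085})
```

A dictionary of this shape is what Keras accepts as the `class_weight` argument of `Model.fit`.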

      Model Performance and Statistical Analysis

The primary outcome measures were sensitivity, specificity, precision, and accuracy in differentiating between images containing normal mucosa and enteric protruding lesions with distinct bleeding potential. Moreover, we used receiver operating characteristic (ROC) curve analysis and the area under the ROC curve to measure the performance of our model in distinguishing between the categories. The network's classification was compared with the diagnosis provided by the specialists' analysis, the latter being considered the gold standard.
      In addition to its diagnostic performance, the algorithm’s computational speed was determined using the validation image data set by calculating the time required for the CNN to provide output for all images.
For each image, the CNN calculated the probability for each of the 3 categories (normal mucosa, P1 protruding lesions, and P2 protruding lesions). The software generated heatmaps localizing the features that originated a given class probability, helping to identify the morphological features that led to lesion detection and differentiation by the CNN (Figure 2). A higher probability value translated into greater confidence in the CNN prediction. The category with the highest probability score was given as the CNN's predicted classification (Figure 3). Sensitivities, specificities, and precisions are presented as means ± standard deviation. ROC curves were graphically represented, and areas under the receiver operating characteristic curves (AUROCs) were calculated. Statistical analysis was performed using Scikit-learn v0.22.2.
      • Pedregosa F.
      • Varoquaux G.
      • Gramfort A.
      • et al.
      Scikit-learn: machine learning in Python.
      Figure 2Heatmaps obtained from the application of the convolutional neural network. (A) Example of heatmap showing a P1P lesion as identified by the CNN. (B) Example of heatmap showing a P2P lesion. CNN, convolutional neural network; P1P, protruding lesions with uncertain hemorrhagic potential (P1); P2P, protruding lesions with high hemorrhagic potential (P2).
      Figure 3Output obtained from the application of the convolutional neural network. The bars represent the probability estimated by the network. The finding with the highest probability was outputted as the predicted classification. A blue bar represents a correct prediction. Red bars represent an incorrect prediction. N, normal small-bowel mucosa; P1P, protruding lesions with uncertain hemorrhagic potential (P1); P2P, protruding lesions with high hemorrhagic potential (P2).
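The selection rule illustrated in Figure 3 — the category with the highest probability becomes the predicted classification — can be sketched as follows (the variable names and probability values are illustrative, not taken from the study):

```python
def predict_category(probabilities):
    """Return the category label with the highest predicted probability."""
    return max(probabilities, key=probabilities.get)

# Hypothetical softmax-style output for a single frame.
frame_probs = {"Normal": 0.03, "P1": 0.12, "P2": 0.85}
prediction = predict_category(frame_probs)  # "P2"
```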

      Results

      Construction and Development of the Network

We developed a CNN based on a total of 2565 CE exams performed between 2015 and 2020. From these exams, 21,320 images were extracted: 2945 showing protruding lesions (P1, 1860 frames; P2, 1085 frames) and the remaining 18,375 showing normal mucosa. The training data set comprised 80% of the total image pool (n = 17,056). The remaining 20% (n = 4264) were used for testing the model. The latter subset comprised 372 and 217 frames showing P1 and P2 protruding lesions, respectively, and 3675 images with normal mucosa. Each image was evaluated by the CNN, which predicted a classification that was subsequently compared with the classification provided by the specialists. With repeated data inputs, the overall accuracy of the multilayer CNN increased in both training and validation environments (Figure 4).
      Figure 4Evolution of the accuracy of the convolutional neural network. Progress of the overall performance of the CNN during training and validation phases, as images were repeatedly fed into the neural network. CNN, convolutional neural network.

      Overall Performance of the Convolutional Neural Network

      The distribution of results is displayed in Table 1. Overall, the CNN had a mean sensitivity of 95.9% ± 3.3% and specificity of 97.1% ± 1.1%. The accuracy of the CNN was 97.1% ± 1.2%. The positive predictive value and negative predictive value were 83.0% ± 11.0% and 95.7% ± 9.2%, respectively.
Table 1. Confusion Matrix of the Automatic Detection vs Expert Classification
      Expert classification
      NormalP1P2
      CNN classificationNormal25493
      P1142099
      P21510190
Normal, normal mucosa; P1, protruding lesions with uncertain/intermediate hemorrhagic potential; P2, protruding lesions with high hemorrhagic potential.

      CNN Performance for the Detection and Distinction of Normal Mucosa or Enteric Protruding Lesions

      We aimed to evaluate the performance of the algorithm for the detection and distinction of enteric protruding lesions with different hemorrhagic potential. The summary of results is shown in Table 2. The trained network had a sensitivity of 90.6%, a specificity of 96.9%, and an overall accuracy of 96.4% for the detection of protruding lesions with uncertain/intermediate bleeding potential (P1 lesions). The AUROC was 0.98. The network identified P2 protruding lesions (high hemorrhagic potential) with a sensitivity, specificity, and accuracy of 97.7%, 98.4%, and 98.3%, respectively. The AUROC was 1.00. The CNN differentiated P2 protruding lesions from P1 protruding lesions with a sensitivity of 98.6%, a specificity of 96.3%, and an accuracy of 97.2%. Normal mucosa was detected with a sensitivity of 95.4%, specificity of 95.9%, and accuracy of 95.5%. The ROC curves and respective AUROCs for detection of normal mucosa and P1 and P2 protruding lesions are represented in Figure 5.
Table 2. Convolutional Neural Network Performance for Detection and Differentiation of P1 and P2 Protruding Lesions

| CNN's diagnostic performance marks | Sensitivity | Specificity | PPV | NPV | Accuracy |
| Overall, mean % ± SD | 95.9 ± 3.3 | 97.1 ± 1.1 | 83.0 ± 11.0 | 95.7 ± 9.2 | 97.1 ± 1.2 |
| P1P vs all, % | 90.6 | 96.9 | 73.9 | 99.1 | 96.4 |
| P2P vs all, % | 97.7 | 98.4 | 76.3 | 99.9 | 98.3 |
| Normal vs all, % | 95.4 | 95.9 | 99.3 | 77.0 | 95.5 |
| P1 vs Normal, % | 93.9 | 96.8 | 74.4 | 99.4 | 96.5 |
| P2 vs Normal, % | 99.1 | 98.5 | 80.0 | 99.9 | 98.5 |
| P2 vs P1, % | 98.6 | 96.3 | 94.2 | 99.1 | 97.2 |

Normal, normal mucosa; NPV, negative predictive value; P1P, protruding lesions with uncertain/intermediate hemorrhagic risk; P2P, protruding lesions with high bleeding risk; PPV, positive predictive value; SD, standard deviation.
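The one-vs-all figures reported in Table 2 follow from standard confusion-matrix definitions; the sketch below shows the computation with a hypothetical 3 × 3 matrix (the counts are invented for illustration, not the study's actual data):

```python
def one_vs_all_metrics(matrix, labels, target):
    """Sensitivity, specificity, PPV, and NPV for `target` vs all other classes.

    `matrix[i][j]` counts frames predicted as labels[i] whose true
    (expert) class is labels[j].
    """
    t = labels.index(target)
    n = len(labels)
    tp = matrix[t][t]
    fp = sum(matrix[t][j] for j in range(n) if j != t)
    fn = sum(matrix[i][t] for i in range(n) if i != t)
    tn = sum(matrix[i][j] for i in range(n) for j in range(n)
             if i != t and j != t)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts (rows: CNN prediction; columns: expert label).
labels = ["Normal", "P1", "P2"]
matrix = [[3500, 20, 5],
          [100, 340, 2],
          [75, 12, 210]]
m = one_vs_all_metrics(matrix, labels, "P2")
```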
      Figure 5ROC analyses of the network’s performance in the detection of normal mucosa, P1 protruding lesions, and P2 protruding lesions. AUROC, area under the receiver operating characteristic curve; N, normal small-bowel mucosa; P1P, protruding lesions with uncertain hemorrhagic potential (P1); P2P, protruding lesions with high hemorrhagic potential (P2).

      Computational Performance of the Convolutional Neural Network

The CNN required 12 seconds to complete the reading of the entire validation data set, which translates into an approximate reading rate of 355 frames per second (3 ms/image).

      Discussion

      This proof-of-concept study documents the development of an accurate deep learning algorithm for automatic detection and classification of the hemorrhagic potential of enteric protruding lesions in CE. The CNN detected protruding lesions of the small bowel with a sensitivity of approximately 96% and a specificity and accuracy of 97%. To the best of our knowledge, this is the first study to evaluate the performance of a CNN, simultaneously aiding in automatic detection of small-bowel protruding lesions and characterization of their bleeding potential. Our network reached high levels of image processing performance, with each image being read by the CNN in 0.003 seconds.
      Most patients in whom a small-bowel tumor is ultimately diagnosed present with OGIB or iron-deficient anemia.
      • Bailey A.A.
      • Debinski H.S.
      • Appleyard M.N.
      • et al.
      Diagnosis and outcome of small bowel tumors found by capsule endoscopy: a three-center Australian experience.
CE is therefore performed during the workup of these conditions, and consequently most small-bowel tumors are initially diagnosed by CE. The diagnosis of small-bowel tumors by CE is difficult, as these lesions show significant pleomorphism. Morphologic scores have been developed for the diagnosis of small-bowel protruding lesions. Although reaching good interobserver agreement, such tools have suboptimal sensitivity and specificity.
      • Girelli C.M.
      • Porta P.
      • Colombo E.
      • et al.
      Development of a novel index to discriminate bulge from mass on small-bowel capsule endoscopy.
      Moreover, the assessment of the bleeding potential of these lesions may have significant impact in both the etiologic diagnosis and therapeutic approach to these patients. In our study, we included several types of enteric protruding lesion with distinct morphology. The inclusion of lesions across a large morphologic spectrum increases the difficulty in automatic detection.
      The evaluation of CE exams is time-consuming, and lesions may be restricted to a small number of frames, thus increasing the risk of overlooking significant lesions.
      • Koulaouzidis A.
      • Iakovidis D.K.
      • Karargyris A.
      • et al.
      Optimizing lesion detection in small-bowel capsule endoscopy: from present problems to future solutions.
      The development of AI algorithms has shown promising results in overcoming these drawbacks.
      • Ding Z.
      • Shi H.
      • Zhang H.
      • et al.
      Gastroenterologist-level identification of small-bowel diseases and normal variants by capsule endoscopy using a deep-learning model.
      • Aoki T.
      • Yamada A.
      • Kato Y.
      • et al.
      Automatic detection of various abnormalities in capsule endoscopy videos by a deep learning-based system: a multicenter study.
      • Aoki T.
      • Yamada A.
      • Aoyama K.
      • et al.
      Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading.
The multilayer architecture of CNNs resembles the organization of the animal visual cortex, making them well suited to the automatic interpretation of images. Recent studies have demonstrated high diagnostic performance of CNN-based models for small-bowel CE, including for the detection of ulcers and erosions, celiac disease, luminal blood content, angiectasia, and small-bowel protruding lesions.
      • Leenhardt R.
      • Vasseur P.
      • Li C.
      • et al.
      A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy.
      ,
      • Aoki T.
      • Yamada A.
      • Aoyama K.
      • et al.
      Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network.
      • Aoki T.
      • Yamada A.
      • Kato Y.
      • et al.
      Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network.
      • Klang E.
      • Barash Y.
      • Margalit R.Y.
      • et al.
      Deep learning algorithms for automated detection of Crohn's disease ulcers by video capsule endoscopy.
      • Wang X.
      • Qian H.
      • Ciaccio E.J.
      • et al.
      Celiac disease diagnosis from videocapsule endoscopy images with residual learning and deep feature extraction.
      • Saito H.
      • Aoki T.
      • Aoyama K.
      • et al.
      Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network.
A common conclusion across most studies focusing on the development of AI models for automatic detection of lesions in CE is the potential of these technologies to reduce the time required for reading full-length CE exams and to improve the lesion detection rate.
      • Ding Z.
      • Shi H.
      • Zhang H.
      • et al.
      Gastroenterologist-level identification of small-bowel diseases and normal variants by capsule endoscopy using a deep-learning model.
      ,
      • Aoki T.
      • Yamada A.
      • Aoyama K.
      • et al.
      Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading.
      In line with this, in our study, a CNN provided accurate detection and differentiation of the bleeding potential of enteric protruding lesions. Moreover, the algorithm had a high image-processing capacity (speed of processing: 0.003 seconds/image), which is expected to translate into shorter reading times.
      Saito et al
      • Saito H.
      • Aoki T.
      • Aoyama K.
      • et al.
      Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network.
      recently published a multicenter retrospective study reporting the development of a deep learning algorithm based on a CNN for automatic detection of enteric protruding lesions. Their model achieved a sensitivity of 91% and a specificity of 80%. The overall accuracy of their CNN was 85%. In this study, OGIB was the most frequent indication in patients for whom a protruding lesion was ultimately found. Nevertheless, the authors do not provide a classification of the hemorrhagic potential of these lesions.
This work has several highlights. First, to our knowledge, it is the first to detect protruding lesions of the small bowel and to assess their bleeding potential according to a validated classification system (Saurin's classification). Although the use of this classification system is not widespread, the authors believe that tools for automatic characterization of the bleeding potential of enteric lesions will have a significant clinical impact. Second, we included a large number of images from 2 gastroenterology centers. Third, our algorithm demonstrated high levels of performance in the detection and differentiation of such lesions. Finally, the architecture of our network demonstrated high image-processing performance, with an approximate reading rate of 355 frames per second. This reading rate is higher than that reported for other CNNs for assisted CE reading.
      (Saito H, Aoki T, Aoyama K, et al. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network; Tsuboi A, Oka S, Aoyama K, et al. Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images.)
      With consistent implementation of these tools in routine clinical practice, these performance levels may translate into shorter CE reading times.
      This study has several limitations. First, it has a retrospective design. Second, our model was developed using still frames; evaluating the performance of this technology on full-length videos in large prospective multicenter studies is therefore required before its introduction into clinical practice. Additionally, our model was developed on the Pillcam SB3 system, so our results may not be generalizable to other CE systems or applicable to other clinical scenarios. Future studies should evaluate this model on other CE systems, different computer specifications, and, particularly, different data sets.
      In conclusion, deep learning-based tools are expected to have a significant impact on the interpretation of CE exams, and their application in routine clinical practice is expected to grow in the near future. The authors believe that our results may help to consolidate the application of AI technology in this field.

      Authors' Contributions

      Miguel Mascarenhas: Study design; Revision of CE videos, image extraction, and labeling; Construction and development of the CNN; Data analysis; Revision of the manuscript; and Approval of final version of the manuscript. Tiago Ribeiro: Study design, Construction and development of the CNN, Data analysis, Bibliographic review, Drafting of the manuscript, Revision of the manuscript, and Approval of final version of the manuscript. João Afonso: Study design, Construction and development of the CNN, Data analysis, Bibliographic review, Revision of the manuscript, and Approval of final version of the manuscript. Hélder Cardoso: Study design; Revision of CE videos, image extraction, and labeling; Revision of the manuscript; and Approval of final version of the manuscript. Patrícia Andrade: Study design, Revision of the manuscript, and Approval of final version of the manuscript. João Ferreira: Study design, Construction and development of the CNN, Data analysis, Statistical analysis, Revision of the manuscript, and Approval of final version of the manuscript. Miguel Mascarenhas Saraiva: Study design; Revision of CE videos, image extraction, and labeling; Revision of the manuscript; and Approval of final version of the manuscript. Guilherme Macedo: Study design, Revision of the manuscript, and Approval of final version of the manuscript.

      References

        • Triester S.L.
        • Leighton J.A.
        • Leontiadis G.I.
        • et al.
        A meta-analysis of the yield of capsule endoscopy compared to other diagnostic modalities in patients with obscure gastrointestinal bleeding.
        Am J Gastroenterol. 2005; 100: 2407-2418
        • Le Berre C.
        • Trang-Poisson C.
        • Bourreille A.
        Small bowel capsule endoscopy and treat-to-target in Crohn's disease: a systematic review.
        World J Gastroenterol. 2019; 25: 4534-4554
        • Pennazio M.
        • Spada C.
        • Eliakim R.
        • et al.
        Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) clinical guideline.
        Endoscopy. 2015; 47: 352-376
        • Schwartz G.D.
        • Barkin J.S.
        Small-bowel tumors detected by wireless capsule endoscopy.
        Dig Dis Sci. 2007; 52: 1026-1030
        • Korman L.Y.
        • Delvaux M.
        • Gay G.
        • et al.
        Capsule Endoscopy Structured Terminology (CEST): proposal of a standardized and structured terminology for reporting capsule endoscopy procedures.
        Endoscopy. 2005; 37: 951-959
        • Koulaouzidis A.
        • Rondonotti E.
        • Giannakou A.
        • et al.
        Diagnostic yield of small-bowel capsule endoscopy in patients with iron-deficiency anemia: a systematic review.
        Gastrointest Endosc. 2012; 76: 983-992
        • Saurin J.C.
        • Delvaux M.
        • Gaudin J.L.
        • et al.
        Diagnostic value of endoscopic capsule in patients with obscure digestive bleeding: blinded comparison with video push-enteroscopy.
        Endoscopy. 2003; 35: 576-584
        • Leenhardt R.
        • Vasseur P.
        • Li C.
        • et al.
        A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy.
        Gastrointest Endosc. 2019; 89: 189-194
        • Yasaka K.
        • Akai H.
        • Abe O.
        • et al.
        Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: a preliminary study.
        Radiology. 2018; 286: 887-896
        • Esteva A.
        • Kuprel B.
        • Novoa R.A.
        • et al.
        Dermatologist-level classification of skin cancer with deep neural networks.
        Nature. 2017; 542: 115-118
        • Gargeya R.
        • Leng T.
        Automated identification of diabetic retinopathy using deep learning.
        Ophthalmology. 2017; 124: 962-969
        • Repici A.
        • Badalamenti M.
        • Maselli R.
        • et al.
        Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial.
        Gastroenterology. 2020; 159: 512-520.e7
        • Rondonotti E.
        • Spada C.
        • Adler S.
        • et al.
        Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) technical review.
        Endoscopy. 2018; 50: 423-446
        • Chollet F.
        Xception: deep learning with depthwise separable convolutions.
        in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21-26 July 2017. IEEE Conference Publication, IEEE Xplore, 2017
        • Deng J.
        • Dong W.
        • Socher R.
        • et al.
        ImageNet: a large-scale hierarchical image database.
        in: 2009 IEEE conference on computer vision and pattern recognition, 20-25 June 2009. IEEE Conference Publication, IEEE Xplore, 2009
        • Abadi M.
        • Barham P.
        • Chen J.
        • et al.
        TensorFlow: a system for large-scale machine learning.
        in: Proceedings of the 12th USENIX conference on operating systems design and implementation. USENIX Association, Savannah, GA, 2016: 265-283
        • Pedregosa F.
        • Varoquaux G.
        • Gramfort A.
        • et al.
        Scikit-learn: machine learning in Python.
        J Mach Learn Res. 2011; 12: 2825-2830
        • Bailey A.A.
        • Debinski H.S.
        • Appleyard M.N.
        • et al.
        Diagnosis and outcome of small bowel tumors found by capsule endoscopy: a three-center Australian experience.
        Am J Gastroenterol. 2006; 101: 2237-2243
        • Girelli C.M.
        • Porta P.
        • Colombo E.
        • et al.
        Development of a novel index to discriminate bulge from mass on small-bowel capsule endoscopy.
        Gastrointest Endosc. 2011; 74 (quiz 1115.e1-5): 1067-1074
        • Koulaouzidis A.
        • Iakovidis D.K.
        • Karargyris A.
        • et al.
        Optimizing lesion detection in small-bowel capsule endoscopy: from present problems to future solutions.
        Expert Rev Gastroenterol Hepatol. 2015; 9: 217-235
        • Ding Z.
        • Shi H.
        • Zhang H.
        • et al.
        Gastroenterologist-level identification of small-bowel diseases and normal variants by capsule endoscopy using a deep-learning model.
        Gastroenterology. 2019; 157: 1044-1054.e5
        • Aoki T.
        • Yamada A.
        • Kato Y.
        • et al.
        Automatic detection of various abnormalities in capsule endoscopy videos by a deep learning-based system: a multicenter study.
        Gastrointest Endosc. 2021; 93: 165-173.e1
        • Aoki T.
        • Yamada A.
        • Aoyama K.
        • et al.
        Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading.
        Dig Endosc. 2020; 32: 585-591
        • Aoki T.
        • Yamada A.
        • Aoyama K.
        • et al.
        Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network.
        Gastrointest Endosc. 2019; 89: 357-363.e2
        • Aoki T.
        • Yamada A.
        • Kato Y.
        • et al.
        Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network.
        J Gastroenterol Hepatol. 2020; 35: 1196-1200
        • Klang E.
        • Barash Y.
        • Margalit R.Y.
        • et al.
        Deep learning algorithms for automated detection of Crohn's disease ulcers by video capsule endoscopy.
        Gastrointest Endosc. 2020; 91: 606-613.e2
        • Wang X.
        • Qian H.
        • Ciaccio E.J.
        • et al.
        Celiac disease diagnosis from videocapsule endoscopy images with residual learning and deep feature extraction.
        Comput Methods Programs Biomed. 2020; 187: 105236
        • Saito H.
        • Aoki T.
        • Aoyama K.
        • et al.
        Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network.
        Gastrointest Endosc. 2020; 92: 144-151.e1
        • Tsuboi A.
        • Oka S.
        • Aoyama K.
        • et al.
        Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images.
        Dig Endosc. 2020; 32: 382-390