Digestive Disease Week 2019


We participated in Digestive Disease Week 2019, held in San Diego, USA, from May 18 to May 21, 2019. DDW is run by the largest organization of physicians, researchers, and industry members in the fields of gastroenterology, hepatology, endoscopy, and gastrointestinal surgery, and is held so that participants can learn the approaches behind leading research, communicate with leaders in the field, and obtain feedback on their own work. Three members of our laboratory participated: Prof. Hiroyasu, Shunsuke Okumura (M2), and M. Seino (M1). Okumura (M2) and Seino (M1) each gave a poster presentation, with the following titles.

  • “UNSUPERVISED MACHINE LEARNING BASED AUTOMATIC DEMARCATION LINE DRAWING SYSTEM ON NBI IMAGES OF EARLY GASTRIC CANCER”
     S.OKUMURA, T.YASUDA, H.ICHIKAWA, S.HIWA, N.YAGI, T.HIROYASU.
  • “MACHINE-LEARNING-BASED AUTOMATIC DIAGNOSTIC SYSTEM USING LINKED COLOR IMAGING FOR HELICOBACTER PYLORI INFECTION: EXAMINATION OF IMAGE AFTER ERADICATION”
     M.SEINO, T.YASUDA, H.ICHIKAWA, S.HIWA, N.YAGI, T.HIROYASU.



Our collaborators, Dr. Yagi of the Department of Gastroenterology, Asahi University Hospital, and Dr. Ichikawa of the Faculty of Life and Medical Sciences, Doshisha University, also attended. Although DDW is a medical conference, reflecting the recent attention to AI in clinical practice, a session on computer-aided diagnosis of medical images had been set up, and both of us presented in it. Many of the surrounding presenters were physicians building diagnostic support systems with image processing and machine learning. Over the two-hour presentation slot, not only Japanese physicians but also many overseas physicians and engineers took an interest in our research, and we were able to hold discussions without interruption. These discussions clarified the current issues in our work, and listening to other presentations gave us hints for our future research direction.
We will apply the knowledge and motivation gained here to our future research, make the remaining nine months until graduation worthwhile, and put even more effort into mentoring our juniors.


 

[Written by: Shunsuke Okumura, M2]

 

Conference Participation Report

 
Reporter: Shunsuke Okumura

Presentation title (Japanese): Unsupervised-learning-based automatic demarcation line diagnosis system for NBI endoscopic images of early gastric cancer
Presentation title (English): UNSUPERVISED MACHINE LEARNING BASED AUTOMATIC DEMARCATION LINE DRAWING SYSTEM ON NBI IMAGES OF EARLY GASTRIC CANCER
Authors: S. Okumura, T. Yasuda, H. Ichikawa, S. Hiwa, N. Yagi, T. Hiroyasu
Organizer: Digestive Disease Week
Conference: DDW2019
Venue: San Diego Convention Center
Dates: 2019/05/18-2019/05/21

 
 

  1. Conference details

I participated in Digestive Disease Week 2019 (DDW2019), held at the San Diego Convention Center from May 18 to May 21, 2019. DDW is run by the largest organization of physicians, researchers, and industry members in the fields of gastroenterology, hepatology, endoscopy, and gastrointestinal surgery, and is held so that participants can learn the approaches behind leading research, communicate with leaders in the field, and obtain feedback on their own work.
 
I attended every day of the conference. From our laboratory, Prof. Hiroyasu and Seino (an M1 student) also participated. Many companies also took part.
 

  2. Research presentation
    • Presentation overview

I presented on the 20th in the AGA-Computers in Endoscopy session. The presentation was in poster format, and I discussed with participants for a total of two hours.
The title was UNSUPERVISED MACHINE LEARNING BASED AUTOMATIC DEMARCATION LINE DRAWING SYSTEM ON NBI IMAGES OF EARLY GASTRIC CANCER. I reported that we automated the diagnosis of the demarcation line (DL) of early gastric cancer on magnified NBI images using unsupervised learning and color features, achieving a sensitivity of 80.5% on 30 images. The abstract follows.

1. Introduction
In this study, we aim to assist physicians in diagnosing early gastric cancer through an automatic demarcation line (DL) drawing system on NBI images. Recently, research on image diagnosis using supervised machine learning such as deep learning has attracted interest. However, for these methods, it is necessary to prepare a large number of supervised signals for the system to learn and ensure reliability. Moreover, in an image diagnosis by supervised learning methods, the feature to be used is not precise, making the result interpretation complicated. Therefore, in this study, we propose a new method based on unsupervised machine learning.
2. Aims and Methods
The aim of this study is to automate the DL diagnosis of early gastric cancer for lesion resection. In this study, a diagnostic system using “unsupervised machine learning,” which does not require any supervised signals by a physician, is proposed. The proposed method is explained as follows. First, data is classified based on the value of the features, and the lesion site is specified. Specifically, gastric mucosal structures are quantified using 13 types of color features. Second, an NBI image is divided into 400 superpixels (a set of pixels with similar features) and calculated features are stored in each superpixel. Finally, each superpixel is classified based on the features by k-means clustering, and the lesion site is specified. In this system, three patterns of the detected DLs are displayed for each image. Moreover, a physician has the functionality of drawing the DL corresponding to each case. The algorithm of this system is shown in Fig. 1. In this study, we applied our system to early gastric cancer lesions (20 NBI magnified observation images) of 10 cases in which an endoscopic examination was performed at Asahi University Hospital from April 2014 to December 2016. A corresponding DL was detected for each case. The effectiveness of the proposed system was verified by comparing the detected result with the DL determined by the physician.
3. Results
The average detection rate of the lesion area by the proposed system was 84.5 % (F-measure). According to the results, the obtained DL can replicate the DL determined by an experienced physician without supervised signals. Experimental images and their detection results are shown in Fig. 2. Thus, this system enabled the automatic detection of early gastric cancer by DL drawing and helped physicians in determining the DL.
4. Conclusion
In this study, we developed a system that automatically detected the DL using unsupervised learning. The detection of the lesion area was accurate and it was possible to identify the DL without depending on the physician's experience. Moreover, the features used in this system were apparent, and this system helped the physicians to understand the result. The accuracy achieved can be further improved in the future.
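As a rough illustration of the pipeline described in the abstract (per-superpixel color features followed by k-means clustering), the sketch below implements the same idea in a simplified form. It is not the actual system: the superpixel segmentation is replaced by a plain grid partition, the 13 color features are reduced to per-channel mean/max/min, and the deterministic farthest-point initialization of k-means is my own choice for reproducibility.

```python
import numpy as np

def grid_superpixels(h, w, n=6):
    # Stand-in for superpixel segmentation: an n x n grid partition of the image.
    ys = np.minimum(np.arange(h) * n // h, n - 1)
    xs = np.minimum(np.arange(w) * n // w, n - 1)
    return ys[:, None] * n + xs[None, :]

def superpixel_features(img, labels):
    # Per-superpixel mean/max/min of each color channel, stacked into one vector.
    feats = []
    for i in range(labels.max() + 1):
        px = img[labels == i]
        feats.append(np.concatenate([px.mean(0), px.max(0), px.min(0)]))
    return np.array(feats)

def kmeans(X, k=2, iters=50):
    # Tiny k-means with deterministic farthest-point initialization.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(0)
    return assign

# Toy usage: an image whose left half differs in color from the right half;
# each half should end up in its own cluster.
img = np.zeros((60, 60, 3))
img[:, :30, 0] = 1.0   # "lesion-like" region
img[:, 30:, 2] = 1.0   # background mucosa
labels = grid_superpixels(60, 60)
assign = kmeans(superpixel_features(img, labels))
```

In the real system, the boundary of the cluster containing the lesion defines a candidate DL, and three candidate DLs are presented per image for the physician to choose from.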
  • Questions and answers

I received the following questions at this presentation. I did not ask the questioners for their names.
 
・Question 1
I was asked whether we also handle images of undifferentiated-type cancer.
I answered that we currently target differentiated-type cancer only.
 
・Question 2
I was asked how the method differs from deep learning.
I answered that it requires no training labels whatsoever for building a classifier, and that we design the features ourselves.
 
・Question 3
I was asked how the sensitivity and specificity values (%) are determined.
I answered that we assign ground-truth labels at the superpixel level based on the DL drawn by an experienced physician, and evaluate by comparing the detected superpixels against those labels.
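The superpixel-level evaluation described here can be sketched as follows; the toy label arrays are illustrative only.

```python
import numpy as np

def superpixel_scores(pred, truth):
    # pred/truth: boolean arrays with one entry per superpixel (True = lesion).
    tp = np.sum(pred & truth)    # detected and labeled as lesion
    fp = np.sum(pred & ~truth)   # detected but not lesion
    fn = np.sum(~pred & truth)   # lesion but missed
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f_measure

# Toy example: 6 superpixels, 3 truly lesion, 2 of them detected plus 1 false alarm.
pred  = np.array([1, 1, 0, 1, 0, 0], dtype=bool)
truth = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
sens, prec, f = superpixel_scores(pred, truth)  # each 2/3 here
```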
 
・Question 4
I was asked how many images were used to decide the features and how many for validation.
I answered that feature design used 50 images and validation used 30 images.
 
・Question 5
I was asked where the magnified NBI images on the poster, many of which were captured very sharply, were provided from.
I answered that they are provided by Asahi University Hospital and Kyoto Prefectural University of Medicine Hospital in Japan.
 
・Question 6
I was asked what programming language we use and what the per-image processing speed is.
I answered that the language is Python, that processing currently takes about 20 seconds per image, and that the program has not yet been optimized for speed.
 
・Question 7
I was asked whether we also handle images with intestinal metaplasia.
I answered that we currently do not; however, since images showing a light blue crest are included in our dataset, this answer was probably wrong, and I should have answered that images with intestinal metaplasia are also handled.
 
・Question 8
I was asked which color features we specifically use.
I answered that we currently quantify the NBI images using the Lab and HSV color spaces, and store the mean, maximum, and minimum of each channel per superpixel as features.
 

  • Impressions

Although this was a medical conference, I presented in the AI and computing session. Many of the surrounding presenters were working on CAD research like ours. I presented with two selling points: the system requires none of the highly reliable training labels needed by supervised methods such as deep learning, and it detects cancer using explainable features. The two-hour slot was filled with continuous discussion with many physicians, which made it a very worthwhile time. Through these discussions, I was able to reconfirm the future position of AI in clinical practice and the importance of systems whose features can be understood. I will apply what I learned at this conference and devote myself to research all the more during the remaining nine months until graduation.
 

  3. Attended presentations

At this conference, I attended the following four presentations.
 

Title: ARTIFICIAL INTELLIGENCE-ASSISTED ENDOSCOPY IN CHARACTERIZATION OF GASTRIC LESIONS USING MAGNIFYING NARROW BAND IMAGING ENDOSCOPY
Authors: Sergey V. Kashin, Roman Kuvaev, Ekaterina Albertovna Kraynova, Olga Dunaeva, Alexander Rusakov, Evgeny Nikonov
Session: Poster session
Abstract            : Aims of this study were to develop and evaluate an artificial intelligence based system for histology prediction of gastric lesions using magnifying narrow band imaging (M-NBI) endoscopy. We selected and analyzed 265 endoscopy M-NBI images of gastric lesions from 128 patients who underwent upper M-NBI endoscopy (Olympus Exera GIF Q160Z, Lucera GIF Q260Z). All images were divided into four classes: (1) type A (n=46): non-neoplastic and non-metaplastic lesions with regular circular microsurface (MS) and regular microvascular (MV) patterns; (2) Type B (n=90): intestinal metaplasia with tubulo-villous MS and regular MV patterns; (3) Type C (n=74) neoplastic lesions with irregular MS or MV pattern; (4) artifacts (n=55). During automated classification quadrant areas were calculated on the image, geometrical and topological features were computed for every fragment. Using the greedy forward selection algorithm, the set of five most significant features were selected: three geometric features (the compactness of the MS pattern, the perimeter of the MS pattern, the average of area of the component of the MV pattern) two topological features (the kurtosis of the histogram of the 0-th persistence diagram of the image, the first norm of the 0-th persistence diagram of the signed distance function). Support vector machine (SVM) classifier was used for 4-class automated diagnosis.Training and testing were performed for every image by a k-fold method (k=10).
The average percentage of correctly recognized areas was 91.4%. Classification precision (positive predictive value), recall (sensitivity), and F-score were 96.5, 90.4, and 93.3 for class A; 93.7, 92.0, and 92.9 for class B; 83.3, 91.3, and 87.1 for class C; and 99.2, 91.7, and 95.3 for artifacts, respectively.
The designed system based on the extraction of the geometrical and topological features from M-NBI image and analysis by SVM could provide effective recognition of three types of gastric mucosal changes.

This study predicted the histological classification of gastric lesions from magnified NBI images and displayed areas at high risk of gastric cancer as a color map. Using five structural features, each image was classified with an SVM into four histological classes based on the microvascular architecture and mucosal microsurface structure, and high-risk areas were detected from the result.
Of the presentations at this conference, I think this was the study most similar to my own magnified-NBI research. Quantification of the microvascular architecture and mucosal microsurface structure in particular is something I would like to try, so I want to refer to the classification criteria and features used in this work.
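The k-fold protocol mentioned in the abstract (k = 10) can be sketched as below. Since the exact SVM configuration is not given, a nearest-centroid classifier stands in for the SVM, and the well-separated two-class synthetic data is purely illustrative.

```python
import numpy as np

def kfold_accuracy(X, y, k=10, seed=0):
    # Shuffle indices, split into k folds; train on k-1 folds, test on the held-out fold.
    folds = np.array_split(np.random.default_rng(seed).permutation(len(X)), k)
    correct = 0
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        classes = np.unique(y[train])
        # Nearest-centroid stand-in for the SVM: one mean feature vector per class.
        cent = np.array([X[train][y[train] == c].mean(0) for c in classes])
        pred = classes[np.argmin(((X[test][:, None] - cent) ** 2).sum(-1), axis=1)]
        correct += np.sum(pred == y[test])
    return correct / len(X)

# Two well-separated synthetic classes in a 5-dimensional feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 5)), rng.normal(3, 0.1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
acc = kfold_accuracy(X, y)
```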

Title: ARTIFICIAL INTELLIGENCE-ASSISTED POLYP DETECTION SYSTEM FOR COLONOSCOPY, BASED ON THE LARGEST AVAILABLE COLLECTION OF CLINICAL VIDEO DATA FOR MACHINE LEARNING
Authors: Masashi Misawa, Shinei Kudo, Yuichi Mori, Tomonari Cho, Shinichi Kataoka, Yasuharu Maeda, Yushi Ogawa, Kenichi Takeda, Hiroki Nakamura, Katsuro Ichimasa, Naoya Toyoshima, Noriyuki Ogata, Toyoki Kudo, Tomokazu Hisayuki, Takemasa Hayashi, Kunihiko Wakamura, Toshiyuki Baba, Fumio Ishida, Hayato Itoh, Masahiro Oda, Kensaku Mori
Session: Poster session
Abstract : Background and aims
Eradication of neoplastic polyps during colonoscopy is the most effective means of preventing colorectal cancer. However, one meta-analysis showed that about 26% of neoplasms are missed per colonoscopy (van Rijn et al., Am J Gastroenterol, 2006.). To tackle this issue, we have reported a pilot study of a real-time computer-aided detection (CADe) system that uses artificial intelligence (AI) to assess colonoscopy images (Misawa et al. Gastroenterology, 2018; Figure 1 shows an overview of the system). In the current study, we aimed to create the largest collection of annotated colonoscopy video data (Mori et al. Endoscopy, 2017), re-build the CADe system, and evaluate its performance.
Methods
To train the CADe, we retrospectively collected colonoscopy video data from December 2017 to August 2018. Expert endoscopists annotated every frame of each video as to the presence or absence of polyps. In all, 3,017,088 video frames (about 28 hours), including 930 colorectal polyps, were used to train the CADe. The CADe system uses a three-dimensional convolutional neural network, which is a deep learning method that is designed for analyzing spatiotemporal data such as video data. To evaluate the performance of the CADe, we analyzed completely separate video data derived from colonoscopy videos from 64 patients obtained from August 2018 to October 2018. Inclusion criteria were i) aged over 20 years; and ii) consent to participate. Exclusions were i) inflammatory bowel disease; and ii) polyposis. After these videos had been annotated by research assistants and audited by expert endoscopists regarding the presence or absence of polyps, size, and macroscopic type, they were treated as a gold standard. Sensitivity and false positive detection rate (FP) were calculated. We defined true positive detection as more than half the frames that included polyps being detected by the CADe. The FP was calculated using videos from patients with no polyps and was defined as the number of non-polyp frames detected by the system divided by the total number of non-polyp frames.
Results
In all, 87 colorectal lesions from 47 patients and 17 patients without polyps were analyzed. Thirty-one of the 87 lesions were protruded polyps, 55 were flat polyps (including five laterally spreading tumors) and one was an advanced colon cancer. The median size was 5 (IQR: 3–6) mm. The sensitivity of the AI system was 86% (75/87) and the FP for the non-polyp frames 26% (47,384/185,444). The sensitivities for diminutive polyps (≤5 mm), protruded polyps, and flat polyps were 84%, 84%, and 87%, respectively.
Conclusion
The developed AI system has high sensitivity regardless of polyp size and morphology and has potential for automated polyp detection.

This study provided real-time polyp detection support using a CNN in order to prevent missed neoplastic polyps during colonoscopy. The system achieved a sensitivity of 86% and was reported to detect polyps regardless of size and morphology. We work on CADx research, but this work made me feel strongly that clinical application of CADe research like this is also near. This presentation was given by Dr. Misawa of Showa University Northern Yokohama Hospital, whose lecture we attended at the Gifu Prefecture gastrointestinal endoscopy meeting in October.
 

Title: FEASIBILITY STUDY OF ENDOSCOPIC SUBMUCOSAL DISSECTION USING FLEXIBLE THREE DIMENSION-ENDOSCOPE FOR EARLY GASTROINTESTINAL CANCER
Authors: Kensuke Shinmura, Tomonori Yano, Yoichi Yamamoto, Yusuke Yoda, Keisuke Hori, Hiroaki Ikematsu
Session: Poster session
Abstract :
Background and aims
The three-dimension (3D) endoscopic system is widely used as rigid endoscope in laparoscopic surgery. The advantage of 3D imaging is to obtain depth perception and spatial orientation. 3D system is reported that it can reduce operation time and technical error, especially in the procedure operated by trainees, comparing with conventional two-dimension system in laparoscopic surgery. However, little is known about the utility of 3D imaging system in the luminal endoscopic procedure. Recently, the prototype of 3D flexible endoscope (GIF-Y0080; Olympus Corporation, Tokyo, Japan) have been developed, and the aim of this study is to investigate the safety of endoscopic submucosal dissection using 3D endoscope (3D-ESD) for early gastrointestinal cancer (EGIC).
Methods
This is a single center, prospective study to evaluate the safety of 3D-ESD for EGIC. From May 2018 to November 2018, patients who had planned 3D-ESD were prospectively enrolled in National Cancer Center Hospital East. The primary endpoint is the incidence rate of adverse events such as the delayed bleeding and the perforation. This study was reviewed and approved by the Institutional Review Board of our hospital.
Results
3 cases of esophageal cancer and 26 cases of gastric cancer were enrolled and analyzed. The median of procedure time in the whole 3D-ESD was 49 min (22-210). The margin-free en bloc resection rate using only 3D scope and curative resection rate was 93.1% (27 cases) and 86.2% (25cases). The incidence rate of the delayed bleeding and the delayed perforation in 3D-ESD for gastric cancer was 3.4% (1 case) and 3.4% (1 case), respectively. In one case, 3D endoscope was changed to conventional two-dimension endoscope because of the location of gastric cancer and the difficulty of manipulation, which is caused by the flexibility of the scope, during ESD. There were no cases of bleeding required transfusion in 3D-ESD for EGIC.
Conclusion
The prototype 3D endoscopic system was validated the safety in ESD for EGIC. Further evaluation is necessary to clarify the utility or efficacy of 3D system in the luminal endoscopic procedure.

This study examined the feasibility of ESD for early gastrointestinal cancer using a three-dimensional endoscope system. I was shown actual 3D footage of ESD performed with this system. In terms of depth perception, the positions of blood vessels and the like were much easier to recognize than in 2D; even as a layperson I was surprised. I felt the potential of 3D-based diagnostic support. The presenters said that quantitative evaluation of the system is the next task.
 

Title: ARTIFICIAL INTELLIGENCE USING CONVOLUTIONAL NEURAL NETWORK SHOWS HIGH DIAGNOSTIC PERFORMANCE OF MICROVESSELS ON SUPERFICIAL ESOPHAGEAL SQUAMOUS CELL CARCINOMA SIMILAR TO EXPERTS
Authors: Ryotaro Uema, Yoshito Hayashi, Minoru Kato, Keiichi Kimura, Takanori Inoue, Akihiko Sakatani, Shunsuke Yoshii, Yoshiki Tsujii, Shinichiro Shinzaki, Hideki Iijima, Tetsuo Takehara
Session: Poster session
Abstract :
Introduction: The morphological diagnosis of microvessels on the surface of superficial esophageal squamous cell carcinoma (SESCC) with magnifying endoscopy using narrowband imaging (NBI-ME) is widely used in clinical practice. In particular, the presence of abnormal microvessels without a loop-like formation is very important, because it represents that the tumor invades muscularis mucosae (MM) or deeper, which is associated with lymph node metastasis. We constructed a convolutional neural network (CNN) system which diagnosed microvessels of SESCCs, and we evaluated the diagnostic performance of the system.
Method: Endoscopic images of 261 SESCC lesions which underwent magnified endoscopy in our hospital from Jan 2013 to Dec 2017 were retrospectively collected. Then, 1803 images were cropped from the NBI-ME images into the size of 500×500 pixels or 430×430 pixels, and they were classified into 2 classes based on the Japan esophagus society (JES) classification, ‘TypeB1’ (which is abnormal microvessels with a loop-like formation) or ‘TypeB2/B3’ (which are abnormal microvessel without a loop-like formation), by four expert endoscopists. Images whose diagnoses are matched in three out of four experts or more, were used for the CNN training. The diagnostic system was constructed by fine-tuning of pre-trained VGG19 model using optimizer Adam. The independent test set of 215 images collected from 34 SESCC lesions from Jan 2018 to June 2018 was used to compare the diagnosis of the trained CNN model, four experts, and six trainees. Of the 215 test images, 173 images from the 27 lesions which underwent endoscopic resection were classified into two groups, ‘EP-LPM’ and ‘MM-‘, by verifying with the mapping images of the specimen. These 173 images were used to evaluate the diagnostic accuracy of the depth of invasion.
Result: In the test set, the average kappa statistic of the microvessel diagnosis between the 4 experts was 0.78 and the average kappa statistic between the CNN model and the 4 experts was 0.76. The average kappa statistics between each trainee (Trainee 1, 2, 3, 4, 5, and 6) and the 4 experts were 0.79, 0.73, 0.68, 0.64, 0.52, and 0.51, respectively. The diagnostic time was 5.9s in the CNN model, 15m50s on average in the experts, and 23m11s on average in the trainees. In the 173 images whose depth of invasion were already known, the diagnostic accuracy of the depth of invasion of the CNN model was 85.0%, and the average diagnostic accuracy of experts and trainees were 84.5% and 72.2%, respectively. The area under the receiver operating curve for the diagnosis of invasion depth of the CNN model was 0.953.
Conclusion: The constructed CNN system had a diagnostic performance equivalent to experts with a shorter time. This system would improve diagnoses of inexperienced endoscopists.

This study used a CNN to automate the morphological diagnosis of microvessels on magnified NBI images of esophageal squamous cell carcinoma. The model achieved a diagnostic accuracy of 85%, reported to be equivalent to expert physicians. For us, who also work with magnified NBI, being able to discriminate microvessel morphology would help in detecting cancerous areas, so this was a useful reference. It would have been even better if I could have understood the quantitative criteria used for the classification.
 
References
https://ddw.org/attendee-planning/online-planner
 
Conference Participation Report

 
Reporter: M. Seino

Presentation title (Japanese): Machine-learning-based automatic Helicobacter pylori infection diagnosis system using LCI images: examination of post-eradication images
Presentation title (English): MACHINE-LEARNING-BASED AUTOMATIC DIAGNOSTIC SYSTEM USING LINKED COLOR IMAGING FOR HELICOBACTER PYLORI INFECTION: EXAMINATION OF IMAGE AFTER ERADICATION
Authors: M. Seino, T. Yasuda, H. Ichikawa, S. Hiwa, N. Yagi, T. Hiroyasu
Organizer: Digestive Disease Week
Conference: DDW2019
Venue: San Diego Convention Center
Dates: 2019/05/18-2019/05/21

 
 

  1. Conference details

I participated in Digestive Disease Week 2019 (DDW2019), held at the San Diego Convention Center from 2019/05/18 to 2019/05/21. DDW is run by the largest organization of physicians, researchers, and industry members in the fields of gastroenterology, hepatology, endoscopy, and gastrointestinal surgery, and is held so that participants can learn the approaches behind leading research, communicate with leaders in the field, and obtain feedback on their own work. From our laboratory, Prof. Hiroyasu and Okumura (S.) (an M2 student) also participated.
 

  2. Research presentation
    • Presentation overview

I presented in the poster session on the 20th, in poster format, discussing with participants for a total of two hours.
I presented MACHINE-LEARNING-BASED AUTOMATIC DIAGNOSTIC SYSTEM USING LINKED COLOR IMAGING FOR HELICOBACTER PYLORI INFECTION: EXAMINATION OF IMAGE AFTER ERADICATION. The content was that quantifying the map-like redness seen in cases after H. pylori eradication improved the accuracy of our existing H. pylori infection diagnosis system. The abstract follows.

Introduction
As a part of our research, we have developed a system for automatically diagnosing the presence or absence of H. pylori (Hp) infection, from the gastric mucosa image obtained by linked color imaging (LCI), using machine learning. This system aids a doctor's diagnosis. In this study, an experiment involving Hp eradication cases was formulated, and the results emerging from it have been documented. Irrespective of whether eradication has been carried out or not, it is difficult for medical doctors to diagnose whether a patient has been eradicated of Hp. However, if it is possible to diagnose eradication success only by endoscopic diagnosis without performing additional examination, the burden on the patient can be reduced. In addition to formulating the experiment, we have developed a system to detect the success or failure of Hp eradication.
Aims and methods
The characteristic of a gastric mucosa image, representing Hp eradication, is to have a map-like redness. In the proposed system, we quantify this map-like redness for images of gastric mucosa obtained from LCI, and improve the accuracy of diagnosis of Hp positive or negative (post eradication). By using LCI, the map-like redness is observed as lavender color, while background gastric mucosa is observed as apricot color. Figure 1 shows an image with map-like redness.
First, a region on the image having a high hue value indicating a lavender color is extracted as a region of interest (ROI). Second, the center of gravity of the ROI is identified on the image. Thereafter, a circle, of radius equivalent to the Euclidean distance of the outermost pixel of the ROI from the center of gravity, is depicted as a circle of interest (COI). Finally, if an image having a high ROI ratio for all pixels and a large hue variance value in the COI is observed, it is identified as an image having map-like redness. Cases where map-like redness is detected in the gastric mucosa image, are considered as sterile and Hp negative. Figure 2 shows a schematic diagram of the conventional system and the proposed method. In this study, 200 images (40 cases; 32 cases are Hp positive and 8 cases are after eradication) of endoscopic examination (LCI observation) at Asahi University Hospital were used to evaluate the system.
Result
In the conventional system, 29 of the 40 cases were correctly diagnosed. In comparison, by using the proposed system, 37 of the 40 cases were correctly diagnosed. These results show that quantification of the map-like redness, which is characteristic of Hp eradication, leads to an improvement in the accuracy of the system.
Conclusion
By using the proposed system, the presence or absence of Hp infection can be automatically diagnosed with the same precision as expert doctors. And, this system can support the diagnosis of inexperienced doctors.
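A minimal sketch of the ROI/COI procedure described in the abstract, assuming a hue channel scaled to [0, 1]; the threshold value and the synthetic image are illustrative, not those of the actual system.

```python
import numpy as np

def map_like_redness_features(hue, hue_thresh=0.7):
    # ROI: pixels whose hue is high enough to indicate the lavender color.
    roi = hue > hue_thresh
    ys, xs = np.nonzero(roi)
    if len(ys) == 0:
        return 0.0, 0.0
    cy, cx = ys.mean(), xs.mean()                       # center of gravity of the ROI
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()  # radius to the outermost ROI pixel
    yy, xx = np.indices(hue.shape)
    coi = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2     # circle of interest
    roi_ratio = roi.mean()     # ROI share of all pixels on the image
    hue_var = hue[coi].var()   # hue variance inside the COI
    return roi_ratio, hue_var

# Toy image: apricot-like background (low hue) with one lavender-like patch (high hue).
hue = np.full((40, 40), 0.1)
hue[10:20, 15:25] = 0.9
ratio, var = map_like_redness_features(hue)
# An image is flagged as having map-like redness (i.e., successful eradication)
# when both the ROI ratio and the hue variance in the COI are sufficiently large.
```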

 

  • Questions and answers

I received the following questions at this presentation.
 
・Question 1
The question was where in the system machine learning is used.
I answered that after the images are first divided into two groups by hue, machine learning is used to classify the low-hue images.
 
・Question 2
The question was what the difference between deep learning and (conventional) machine learning is.
I answered that deep learning requires tens of thousands of data points and automatically defines the features used for classification, whereas conventional machine learning can build a model from less data and the features used for classification are designed by hand.
 
・Question 3
The question was what needs to be done next to step the system up.
I answered that the SVM's low classification accuracy on positive images is the current issue, and the training dataset therefore needs to be reconsidered.
 
・Question 4
The question was whether the system can now handle cases in which H. pylori has been eradicated.
I answered that it can handle cases that show map-like redness.
 
・Question 5
The question was how good the current system's diagnostic results are.
I answered that we obtain results comparable to those of the physician at our collaborating institution who diagnoses H. pylori infection using LCI.
 
・Question 6
The question was whether there are images that cannot be diagnosed from color information alone.
I answered that there are: linear redness, for example, also appears red, and can therefore produce false positives.
 
・Question 7
The question was whether it would be worth building additional H. pylori infection diagnosis systems and combining their outputs by majority vote for the final diagnosis.
I answered that it may be worth doing, but that I have reservations about whether a medical judgment should be made by majority vote.
 
・Question 8
The question was how cost would be handled if the system were actually deployed.
I answered that the financial side is not for me to decide, but that if I were to consider it, deploying the system in developing regions with high demand, such as Southeast Asia, would require examining the cost aspect.
 

  • Impressions

・Since this was my first conference and my first time abroad, I experienced firsthand the difficulty of presenting and explaining in English.
・The AI session in which I presented had more listeners than the other sections, which made me realize how much attention AI is attracting in the medical world.
・Among the other AI studies, many used deep learning, and my impression was that many questioners also brought up deep learning.
・Many studies performed real-time detection of cancers and polyps, and I would like my own research to reach that level.
・Looking at other posters, the way they described their datasets was a useful reference for one of my current issues.
・I was asked the same question about the system's processing and methods several times, so I feel I need to reconsider how I draw the system overview diagram.
・There were many occasions when I could not handle questions in English and had to rely on Dr. Ichikawa and Prof. Hiroyasu, so I need to improve my English before the next international conference.
 

  3. Attended presentations

At this conference, I attended the following four presentations.
 

Title: AUTOMATIC DETECTION OF GASTRIC CANCER USING SEMANTIC SEGMENTATION BASED ON ARTIFICIAL INTELLIGENCE
Authors: Tomoyuki Shibata, Kazuma Enomoto, Atsushi Teramoto, Hyuga Yamada, Naoki Ohmiya, Hiroshi Fujita
Session: Poster session
Abstract: Objectives: Gastric cancer carries still a high incidence rate worldwide, and early detection and early treatment are important. However, screening endoscopic examination needs accurate diagnostic skills for individual endoscopist. Therefore, we used deep learning to develop a diagnosis support system for automated detection of gastric cancer from endoscopic images using semantic segmentation technology. The accuracy of detection and extraction of the gastric cancer region were evaluated.
Methods: White light endoscopic images were used for this study. These images were taken during endoscopic examination for gastric cancer evaluation at Fujita Health University Hospital. A fully convolutional network (FCN) with 7 convolution layers, 3 pooling layers, and a deconvolution layer was used for the deep learning model used in this study. The FCN can output a segmented image from an input image of arbitrary size. We resized the endoscopic images to 256 × 256 pixel and trimmed within the inscribed circle. The infiltrating area of gastric cancer was surrounded by expert endoscopists with lines using paint tool. Subsequently, in order to prevent overfitting owing to the limited number of images, training data were augmented by inversion and rotation processing. We used 42 normal cases (1149 images) and 98 cases (553 images) of gastric cancer. The FCN was trained using 80 cases and the extraction accuracy of gastric cancer region and detection rate were evaluated using 18 cases. If the gastric cancer area by endoscopists and the gastric cancer detection area of FCN were overlapped, the answer was regarded as correct.
Results: The accuracy rate was 96% and positive predictive value (PPV) was 80.6%. Additionally, the number of false positives per image was 0.03.
Conclusions: This method showed favorable results and confirmed that the proposed method may be useful to detect gastric cancer in endoscopic images. By using this method, it is possible to mitigate overlook during examination.

This study developed a system for automatic detection of gastric cancer using deep learning. The 127,800 training images were obtained by flipping and rotating 25 normal images and 79 images containing cancerous lesions. The reported results were a sensitivity of 97.6% and a specificity of 94.8%.
Listening to this presentation, although it is unclear how well the original images cover the patterns of cancerous lesions, I was surprised that this accuracy was achieved with a dataset obtained by flipping and rotating images. On the other hand, I felt that with deep learning only the result is interesting, and the discussion does not develop beyond it.
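The flip-and-rotate augmentation mentioned here is a standard trick for multiplying a small dataset; a minimal sketch producing the eight dihedral variants of each image might look like this.

```python
import numpy as np

def augment(img):
    # All 8 dihedral variants: 4 rotations, each with and without a horizontal flip.
    out = []
    for k in range(4):
        r = np.rot90(img, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

# A 2x2 toy "image" with no symmetry yields 8 distinct variants.
variants = augment(np.array([[1, 2], [3, 4]]))
```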
 

Title: BLI AND LCI IMPROVE POLYP DETECTION AND DELINEATION ACCURACY FOR DEEP LEARNING NETWORKS
Authors: Tom Eelbode, Cesare Hassan, Ingrid Demedts, Philip Roelandt, Emmanuel Coron, Pradeep Bhandari, Helmut Neumann, Oliver Pech, Alessandro Repici, Frederik Maes, Raf Bisschops
Session: Poster session
Abstract: Introduction and aim Current state-of-the-art automated polyp detection and delineation techniques use white light imaging as their base modality. Studies have however suggested that polyp detection rates can be improved by using other modalities such as linked color imaging (LCI) from Fujifilm. This might be true for human observers, but it has not yet been investigated how an artificial intelligence (AI) system is influenced by the choice of modality. The aim of this research is to investigate the influence of the modality (WLI, blue light imaging or BLI and LCI) on the performance of an AI system for polyp detection and delineation.
Methods Complete pull-through colonoscopy videos from 120 patients are included with a total of 280 polyps for training, validation and testing of the system (n = 176, 27, 77 respectively with no overlapping patients). Shorter video clips containing the first apparition of each polyp are extracted and for each clip, only a few frames are annotated by three individual experts. These 758 single-frame manual annotations are automatically propagated over the entire clip. The resulting, much larger annotated dataset of 40887 images is then used to train a recurrent convolutional neural network (CNN) for polyp detection and delineation.
Frame-level sensitivity and specificity are reported for evaluation of the detection power of the network. For delineation accuracy, the Dice score is used which is a measure for the amount of overlap between a delineation map and its ground truth.
Results Table 1 shows that BLI significantly improves sensitivity, specificity and Dice score. Similarly, LCI increases detection performance to a lesser extent, however the LCI Dice score for delineation accuracy decreases significantly in comparison to WLI. Pairwise t-tests show that all differences are significant with a p value <0,00001 (significance level of 0,05).
Conclusion The choice of modality has a significant impact on the detection and delineation performance of an AI system. We show that our network performs best for both tasks on BLI and that LCI has a superior detection, but inferior delineation power compared to WLI.
Sample size, sensitivity, specificity and Dice score (mean and stdev) for the three different modalities in the test set.

This study compared deep learning models separately trained and tested on WLI, BLI, and LCI images for colorectal polyp detection and delineation. As a result, the sensitivities for WLI, BLI, and LCI were 81%, 92%, and 85%, and the specificities were 76%, 85%, and 82%, respectively.
Comparisons of WLI and LCI appear often in the literature, but this was the first comparison of BLI and LCI I had seen, which made it interesting.
Because the system uses deep learning, I found it unfortunate that the reason for BLI's superior detection accuracy remains unknown.
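The Dice score used above for delineation accuracy is the standard overlap measure between a predicted delineation mask and its ground truth; a minimal version for boolean masks, with illustrative toy masks:

```python
import numpy as np

def dice_score(pred, truth):
    # 2|A∩B| / (|A| + |B|); 1.0 by convention when both masks are empty.
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

a = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)
b = np.array([[1, 0, 1], [0, 0, 0]], dtype=bool)
score = dice_score(a, b)  # overlap of 1 pixel out of 2 + 2 -> 0.5
```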
 

Title: ARTIFICIAL INTELLIGENCE-ASSISTED ENDOSCOPY IN CHARACTERIZATION OF GASTRIC LESIONS USING MAGNIFYING NARROW BAND IMAGING ENDOSCOPY
Authors: Sergey V. Kashin, Roman Kuvaev, Ekaterina Albertovna Kraynova, Olga Dunaeva, Alexander Rusakov, Evgeny Nikonov
Session: Poster session
Abstract: Aims of this study were to develop and evaluate an artificial intelligence based system for histology prediction of gastric lesions using magnifying narrow band imaging (M-NBI) endoscopy.
We selected and analyzed 265 endoscopy M-NBI images of gastric lesions from 128 patients who underwent upper M-NBI endoscopy (Olympus Exera GIF Q160Z, Lucera GIF Q260Z). All images were divided into four classes: (1) type A (n=46): non-neoplastic and non-metaplastic lesions with regular circular microsurface (MS) and regular microvascular (MV) patterns; (2) Type B (n=90): intestinal metaplasia with tubulo-villous MS and regular MV patterns; (3) Type C (n=74) neoplastic lesions with irregular MS or MV pattern; (4) artifacts (n=55). During automated classification quadrant areas were calculated on the image, geometrical and topological features were computed for every fragment. Using the greedy forward selection algorithm, the set of five most significant features were selected: three geometric features (the compactness of the MS pattern, the perimeter of the MS pattern, the average of area of the component of the MV pattern) two topological features (the kurtosis of the histogram of the 0-th persistence diagram of the image, the first norm of the 0-th persistence diagram of the signed distance function). Support vector machine (SVM) classifier was used for 4-class automated diagnosis.Training and testing were performed for every image by a k-fold method (k=10).
The average percentage of correctly recognized areas was 91.4%. Classification precision (positive predictive value), recall (sensitivity), and F-score were 96.5, 90.4, and 93.3 for class A; 93.7, 92.0, and 92.9 for class B; 83.3, 91.3, and 87.1 for class C; and 99.2, 91.7, and 95.3 for artifacts, respectively.
The designed system based on the extraction of the geometrical and topological features from M-NBI image and analysis by SVM could provide effective recognition of three types of gastric mucosal changes.

This study divided gastric lesion images captured with magnified NBI into three classes of medically recognized lesion changes plus one class for images that fit none of them, classified them with an SVM trained on five structural features, and showed which features are effective for which class.
This work is very similar to our Doshisha NBI research, and I was able to learn about structural features effective for cancerous lesions, a current issue in our NBI work, and how to handle them.
My own research currently builds the system with color features only, but structural features may be used in the future, for example to discriminate regions of the stomach when running the system in real time, so I hope to make use of this knowledge.
 

Title: FEASIBILITY STUDY OF ENDOSCOPIC SUBMUCOSAL DISSECTION USING FLEXIBLE THREE DIMENSION-ENDOSCOPE FOR EARLY GASTROINTESTINAL CANCER
Authors: Kensuke Shinmura, Tomonori Yano, Yoichi Yamamoto, Yusuke Yoda, Keisuke Hori, Hiroaki Ikematsu
Session: Poster session
Abstract:
Background and aims
The three-dimension (3D) endoscopic system is widely used as rigid endoscope in laparoscopic surgery. The advantage of 3D imaging is to obtain depth perception and spatial orientation. 3D system is reported that it can reduce operation time and technical error, especially in the procedure operated by trainees, comparing with conventional two-dimension system in laparoscopic surgery. However, little is known about the utility of 3D imaging system in the luminal endoscopic procedure. Recently, the prototype of 3D flexible endoscope (GIF-Y0080; Olympus Corporation, Tokyo, Japan) have been developed, and the aim of this study is to investigate the safety of endoscopic submucosal dissection using 3D endoscope (3D-ESD) for early gastrointestinal cancer (EGIC).
Methods
This is a single center, prospective study to evaluate the safety of 3D-ESD for EGIC. From May 2018 to November 2018, patients who had planned 3D-ESD were prospectively enrolled in National Cancer Center Hospital East. The primary endpoint is the incidence rate of adverse events such as the delayed bleeding and the perforation. This study was reviewed and approved by the Institutional Review Board of our hospital.
Results
3 cases of esophageal cancer and 26 cases of gastric cancer were enrolled and analyzed. The median of procedure time in the whole 3D-ESD was 49 min (22-210). The margin-free en bloc resection rate using only 3D scope and curative resection rate was 93.1% (27 cases) and 86.2% (25cases). The incidence rate of the delayed bleeding and the delayed perforation in 3D-ESD for gastric cancer was 3.4% (1 case) and 3.4% (1 case), respectively. In one case, 3D endoscope was changed to conventional two-dimension endoscope because of the location of gastric cancer and the difficulty of manipulation, which is caused by the flexibility of the scope, during ESD. There were no cases of bleeding required transfusion in 3D-ESD for EGIC.
Conclusion
The prototype 3D endoscopic system was validated the safety in ESD for EGIC. Further evaluation is necessary to clarify the utility or efficacy of 3D system in the luminal endoscopic procedure.

This study examined the usefulness of a three-dimensional endoscope system in luminal endoscopic procedures. The presenters said that the system's usefulness is difficult to evaluate, but I believe the depth and spatial perception unique to 3D will certainly support diagnosis during surgery. I could not ask how the 2D video is converted into 3D video, but processing of 3D data is common in the image-processing world, so I would like to acquire that knowledge given the chance.
 
 