{"id":2564,"date":"2014-12-11T22:52:36","date_gmt":"2014-12-11T13:52:36","guid":{"rendered":"http:\/\/www.is.doshisha.ac.jp\/news\/?p=2564"},"modified":"2014-12-11T22:52:36","modified_gmt":"2014-12-11T13:52:36","slug":"%e3%80%90%e9%80%9f%e5%a0%b1%e3%80%91%ef%bd%89%ef%bd%85%ef%bd%85%ef%bd%85%e3%80%80%ef%bd%93%ef%bd%93%ef%bd%83%ef%bd%89%e3%80%80%ef%bc%92%ef%bc%90%ef%bc%91%ef%bc%94","status":"publish","type":"post","link":"https:\/\/is.doshisha.ac.jp\/news\/?p=2564","title":{"rendered":"[Flash Report] IEEE SSCI 2014"},"content":{"rendered":"<p>We have come to Florida for the <a href=\"http:\/\/ieee-ssci.org\/\">IEEE SSCI<\/a> conference.<br \/>\nWe will present the following four papers.<br \/>\n1) Endoscope Image Analysis Method for Evaluating the Extent of Early Gastric Cancer (Tomoyuki Hiroyasu, Katsutoshi Hayashinuma, Hiroshi Ichikawa, Nobuyuki Yagi and Utako Yamamoto)<br \/>\n2) Gender classification of subjects from cerebral blood flow changes using Deep Learning (Tomoyuki Hiroyasu, Kenya Hanawa and Utako Yamamoto)<br \/>\n3) A feature transformation method using genetic programming for two-class classification (Tomoyuki Hiroyasu, Toshihide Shiraishi, Tomoya Yoshida and Utako Yamamoto)<br \/>\n4) Electroencephalographic Method Using Fast Fourier Transform Overlap Processing for Recognition of Right- or Left-handed Elbow Flexion Motor Imagery (Tomoyuki Hiroyasu, Yuuki Ohkubo and Utako Yamamoto)<br \/>\n<!--more--><br \/>\n<strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">Katsutoshi Hayashinuma<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation Title<\/strong><\/td>\n<td width=\"373\">Study of an analysis method for evaluating the extent of early gastric cancer in endoscope images<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>English Title<\/strong><\/td>\n<td width=\"373\">Endoscope Image Analysis Method for Evaluating the Extent of Early Gastric Cancer<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">Tomoyuki Hiroyasu, Katsutoshi Hayashinuma, Hiroshi Ichikawa, Nobuyuki Yagi, Utako Yamamoto<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">IEEE Computational Intelligence Society<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">IEEE SSCI 2014<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td width=\"373\">Caribe Royale All-Suite Hotel and Convention Center<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2014\/12\/09-2014\/12\/12<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>Conference Details<\/li>\n<\/ol>\n<p>From December 9 to 12, 2014, I attended the 2014 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2014), held at the Caribe Royale All-Suite Hotel and Convention Center in Florida, USA. SSCI 2014 was organized by the IEEE Computational Intelligence Society, and 29 symposia covering a wide range of topics in computational intelligence were held at a single venue.<br \/>\nI attended from the 10th to the 12th. From our laboratory, Prof. Hiroyasu, Ohkubo, Shiraishi, and Hanawa also participated.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research Presentation\n<ul>\n<li>Presentation Overview<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I presented in the morning session \"CIMSIVP'14 Session 2: Application\" on the 10th. The format was an oral presentation, with a 15-minute talk and 5 minutes for questions.<br \/>\nIn this presentation, I proposed an analysis method for NBI endoscope images based on texture analysis. The abstract is given below.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">In this study, a system is proposed to help physicians perform processing on images taken with a magnifying endoscope with narrow band imaging. In our proposed system, the transition from lesion to normal zone is quantitatively analyzed and presented by texture analysis. Eleven feature values are calculated, i.e., six from a co-occurrence matrix and five from a run length matrix with a scanning window. 
Integrating these feature values formulates an effective and representative feature value, which is used to draw a color map, so the transition from lesion to normal zone can be clearly visualized. In this paper, the proposed method is applied to real images and its efficacy is examined. The method is also applied to rotated images to examine whether it works effectively on such images.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Q&amp;A<\/li>\n<\/ul>\n<p>During the presentation, I received the following question.<br \/>\n&nbsp;<br \/>\n<strong>Question<\/strong><br \/>\nThis question came from Myeongsu Kang of the University of Ulsan. He asked whether the feature values remain unchanged when the image is rotated. I wanted to answer that experiments on rotated images had shown no large differences, but I was unable to express this well.<br \/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>This was my first time attending an international conference. I prepared a great deal for the presentation, but I could not present very well, nor could I answer the questions, so it was a presentation with many points to reflect on; my preparation was still insufficient.<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>Attended Talks<\/li>\n<\/ol>\n<p>At this conference, I attended the following four presentations.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Counting, Detecting and Tracking of People in Crowded Scenes<br \/>\nAuthors: Mubarak Shah<br \/>\nSession: Keynote Talk: CIMSIVP'14<br \/>\nAbstract: In this talk, first I will present a new approach for counting people in extremely dense crowds. Our approach relies on multiple sources of information such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with the confidence associated with observing individuals, in an image region. In addition, we employ a global consistency constraint on counts using a Markov Random Field. 
This caters for disparity in counts in local neighborhoods and across scales.<br \/>\nNext, I will discuss how we explore context for human detection in dense crowds in the form of a locally-consistent scale prior, which captures the similarity of scale in local neighborhoods and its smooth variation over the image. Using the scale and confidence of detections obtained from an underlying human detector, we infer scale and confidence priors using a Markov Random Field. In an iterative mechanism, the confidences of detections are modified to reflect consistency with the inferred priors, and the priors are updated based on the new detections. The final set of detections obtained is then reasoned for occlusion using Binary Integer Programming, where overlaps and relations between parts of individuals are encoded as linear constraints.<br \/>\nFinally, I will present a method for tracking in dense crowds using prominence and neighborhood motion concurrence. Our method begins with the automatic identification of prominent individuals in the crowd who are easy to track. Then, we use Neighborhood Motion Concurrence to model the behavior of individuals in a dense crowd, which predicts the position of an individual based on the motion of its neighbors.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk covered methods for counting, detecting, and tracking people in crowds. Research in this area is attracting attention for applications such as camera-based mining and crime prevention, and the content was very interesting.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Single Frame Super Resolution: Gaussian Mixture Regression and Fuzzy Rule-Based Approaches<br \/>\nAuthors: Nikhil R. Pal<br \/>\nSession: Plenary Talk<br \/>\nAbstract: High-quality image zooming is an important problem. There are many methods that use multiple low resolution (LR) frames of the same scene with different sub-pixel shifts as input to generate the high resolution (HR) images. Nowadays, single frame super resolution (SR) methods that use just one LR image to obtain the HR image have become popular. In this talk we shall discuss a novel fuzzy rule-based single frame super resolution method. This is a patch-based method, where each LR patch is replaced by an HR patch generated by a Takagi-Sugeno type fuzzy rule-based system. 
We shall discuss in detail the generation of the training data, the initial generation of the fuzzy rules, their refinement, and how to use the rules to generate SR images. In this context we shall also develop a Gaussian Mixture Regression (GMR) model for the same problem. Both the fuzzy rule-based system and GMR are found to be quite effective. Comparison of the performance of the fuzzy rule-based system with five existing methods, as well as with the GMR method, in terms of several quality criteria demonstrates the superior performance of the fuzzy rule-based system.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk covered single-image super-resolution techniques using fuzzy inference and Gaussian mixture regression. In recent years, super-resolution has been applied in many areas such as televisions, microscopes, and medical imaging, and it is one of the most closely watched techniques in image processing, so the talk was very instructive.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Finding Optimal Transformation Function for Image Thresholding Using Genetic Programming<br \/>\nAuthors: Shaho Shahbazpanahi, Shahryar Rahnamayan<br \/>\nSession: CIMSIVP'14 Session 4: Algorithms I<br \/>\nAbstract: In this paper, Genetic Programming (GP) is employed to obtain an optimum transformation function for bi-level image thresholding. The GP utilizes a user-prepared gold sample to learn from. A notable feature of this method is that it requires neither prior knowledge about the modality of the image nor a large training set to learn from. The performance of the proposed approach has been examined on 147 X-ray lung images. The transformed images are thresholded using Otsu's method, and the results are highly promising; it performs successfully on 99% of the tested images. The proposed method can be utilized for other image processing tasks, such as image enhancement or segmentation.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk concerned a method that uses genetic programming to determine thresholds for extracting the lung fields from chest X-ray images. Our laboratory also conducts research on cell region segmentation using GP, but I was not very familiar with other GP-based image processing, so it was a good learning experience.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Disguised face detection and recognition under the complex background<br \/>\nAuthors: Jing Li, Bin Li, Yong Xu, Kaixuan Lu, Ke Yan, Lunke Fei<br \/>\nSession: CIBIM'14 Session 3: Face Detection and Recognition<br \/>\nAbstract: In this paper, we propose an effective method for disguised face detection and recognition under a complex background. This method consists of two stages. The first stage determines whether the object is a person. In this stage, we propose a first-dynamic-then-static foreground object detection strategy. This strategy exploits an updated learning-based codebook model for moving object detection and uses Local Binary Patterns (LBP) + Histogram of Oriented Gradients (HOG) feature-based head-shoulder detection for static target detection. The second stage determines whether the face is disguised and the classes of disguises. Experiments show that our method can detect disguised faces in real time under a complex background and achieve an acceptable disguised face recognition rate.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk presented research on detecting and classifying faces disguised with sunglasses, hats, masks, and the like. Face detection research is common, but I had never seen work on detecting disguised faces, so it felt novel.<br \/>\n&nbsp;<br \/>\nReferences<\/p>\n<ul>\n<li>2014 IEEE Symposium Series on Computational Intelligence Proceedings<\/li>\n<\/ul>\n<p><strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">Yuuki Ohkubo<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation Title<\/strong><\/td>\n<td width=\"373\">Left\/right discrimination of EEG during elbow flexion motor imagery using FFT overlap processing<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>English Title<\/strong><\/td>\n<td width=\"373\">Electroencephalographic Method Using Fast Fourier Transform Overlap Processing for Recognition of Right- or Left-handed Elbow Flexion Motor Imagery<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">Tomoyuki Hiroyasu, Yuuki Ohkubo, Utako Yamamoto<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">IEEE<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">2014 IEEE Symposium Series on Computational Intelligence<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td width=\"373\">The Caribe\u2122 Hotels<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2014\/12\/09 &#8211; 2014\/12\/12<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>Conference Details<\/li>\n<\/ol>\n<p>From December 9 to 12, 2014, I attended the 2014 IEEE Symposium Series on Computational Intelligence (SSCI 2014)<sup>1)<\/sup>, held in Orlando, Florida, USA. SSCI 2014 was organized by the IEEE, and many students, faculty members, and companies participated. Research on intelligent computing, including big data, image processing, machine learning, modeling of cognitive mechanisms, and brain-computer interfaces, was presented in oral and poster sessions.<br \/>\nI attended from December 9 to 12. From our laboratory, Shiraishi, Hayashinuma, and Hanawa also participated, and Prof. Hiroyasu came to listen to our oral and poster presentations.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research Presentation\n<ul>\n<li>Presentation Overview<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I presented in \"CIBCI'14 Session 2\" on December 12. The format was a 15-minute oral presentation with 4 minutes for questions.<br \/>\nIn this presentation, I compared feature extraction methods and analysis window widths for discriminating right- versus left-handed elbow flexion motor imagery from the EEG recorded during motor imagery. The abstract is given below.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Recently, systems using motor imagery (MI) have been developed as practical examples of brain-computer interfaces (BCI). Electroencephalography (EEG) was used to record the electroencephalogram during elbow flexion. In addition, a method was proposed to extract feature values that enable the recognition of right- or left-handed elbow flexion MI. In the proposed method, fast Fourier transform overlap processing was used to determine the time period required to extract the feature values. In this study, the following two experiments were performed: 1) recognition of right- or left-handed elbow flexion by analyzing only the MI time period, and 2) recognition of right- or left-handedness when the MI time period was presumed. In the first experiment, right- or left-handed elbow flexion MI was classified for 20 subjects using a support vector machine, with the proposed method used to extract the feature values. In the second experiment, the presumed MI time was determined using the channels with the highest accuracy in the first experiment, and right- or left-handed recognition was then performed for the presumed time period. In the first experiment, the recognition accuracy of the proposed method was superior to that of the previous method in 15 of the 20 subjects. In the second experiment, the mean accuracy was 7.2%. 
Therefore, the recognition accuracy can be improved by improving the MI detection method.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Q&amp;A<\/li>\n<\/ul>\n<p>During the presentation, I received the following questions and comments.<br \/>\n&nbsp;<br \/>\n<strong>Question 1<\/strong><br \/>\nFor BCI systems that use motor imagery, it is also important to discriminate intervals in which motor imagery is being performed from intervals in which it is not. I received the advice that this should be examined in future work.<br \/>\n&nbsp;<br \/>\n<strong>Question 2<\/strong><br \/>\nIn this work, I examined the recognition rate when the analysis window width was shortened; I was asked how this would be handled when applied to an actual system.<br \/>\n&nbsp;<br \/>\n<strong>Question 3<\/strong><br \/>\nI was advised that, when examining the issue raised in Question 1, it would be interesting to consider methods that take the EEG wearer's mental state (for example, concentration and fatigue) into account.<br \/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>My presentation took place in a session dedicated to BCI. During the Q&amp;A, BCI professionals pointed out directions and improvements for this research, which increased my motivation for future work. This was also my first presentation in English, and since English is a weak point of mine, everything from writing the paper to presenting was a real struggle. There were many opportunities to talk with other researchers during the Q&amp;A, break times, and the banquet, but I regret that I became timid and could not talk with them very much.<br \/>\nIn presentations outside the BCI field, the slides contained mathematical formulas that I could not understand at all. Mathematical knowledge is necessary to advance my research, and I was made keenly aware of how much I still lack.<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>Attended Talks<\/li>\n<\/ol>\n<p>At this conference, I attended the following six presentations.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Development of an Autonomous BCI Wheelchair<br \/>\nAuthors: Danny Wee-Kiat Ng, Ying-Wei Soh and Sing-Yau Goh, UTAR, Malaysia<br \/>\nSession: CIBCI'14 Session 1<br \/>\nAbstract: Restoration of mobility for the movement-impaired is one of the important goals of numerous Brain Computer Interface (BCI) systems. In this study, subjects used a steady state visual evoked potential (SSVEP) based BCI to select a desired destination. The selected destination was communicated to the wheelchair navigation system, which controlled the wheelchair autonomously, avoiding obstacles on the way to the destination. 
By transferring the responsibility of controlling the wheelchair from the subject to the navigation software, the number of BCI decisions the subject must complete to move to the desired destination is greatly reduced.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This was a research presentation on a BCI system using SSVEP. In our laboratory, Morishita (a fourth-year student) is currently working on an SSVEP-based BCI system, but this research took a different direction from his. In the system Morishita envisions, the action corresponding to a visual stimulus is transmitted directly to the device. In the system presented here, a destination is assigned to each visual stimulus in advance, and by looking at a stimulus the wheelchair moves automatically to the corresponding destination. The advantage of this system is that the burden on the user is lighter; a possible drawback is that the range of action becomes limited. The authors also built their own hardware, such as the EEG device and sensors.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Across-subject estimation of 3-back task performance using EEG signals<br \/>\nAuthors: Jinsoo Kim, Min-Ki Kim, Christian Wallraven and Sung-Phil Kim, Department of Brain and Cognitive Engineering, Korea University, Korea (South); Department of Human and Systems Engineering, Ulsan National Institute of Science and Technology, Korea (South)<br \/>\nSession: CIBCI'14 Session 1<br \/>\nAbstract: This study was aimed at estimating subjects' 3-back working memory task error rate using electroencephalogram (EEG) signals. Firstly, spatio-temporal band power features were selected based on the statistical significance of their across-subject correlation with the task error rate. Method-wise, an ensemble network model was adopted in which multiple artificial neural networks were trained independently and produced separate estimates that were later aggregated into a single estimated value. The task error rates of all subjects were estimated in a leave-one-out cross-validation scheme. 
While a simple linear method underperformed, the proposed model successfully obtained highly accurate estimates despite being restrained by a very small sample size.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation measured EEG during a 3-back task to estimate the subjects' state. The analysis covered not only the task intervals but also the rest intervals, using an approach designed with online processing in mind. Since the analysis window width in this experiment was 5 seconds, they concluded that the window width should also be examined in future work.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Abnormal Event Detection in EEG Imaging &#8211; Comparing Predictive and Model-based Approaches<br \/>\nAuthors: Jayanta Dutta, Bonny Banerjee, Roman Ilin and Robert Kozma, U of Memphis, United States; Air Force Research Lab, United States<br \/>\nSession: CIBCI'14 Session 1<br \/>\nAbstract: The detection of abnormal\/unusual events based on dynamically varying spatial data has been of great interest in many real-world applications. 
It is a challenging task to detect abnormal events, as they occur rarely and are very difficult to predict or reconstruct. Here we address the detection of propagating phase gradients in sequences of brain images obtained by EEG arrays. We compare two alternative methods of abnormal event detection: one based on prediction using a linear dynamical system, the other a model-based algorithm using an expectation-maximization approach. The comparison identifies the pros and cons of the different methods; moreover, it helps to develop an integrated and robust algorithm for monitoring cognitive behaviors, with potential applications including brain-computer interfaces (BCI).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>In this study, data were acquired from electrodes implanted in a rabbit&#8217;s brain. The processing pipeline used FIR filtering and the Hilbert transform, and the authors reported that when a sudden event occurred, EEG components with a tendency different from the usual were mixed into the signal.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Presentation title: Sensitivity Analysis of Hilbert Transform with Band-Pass FIR Filters for Robust Brain Computer Interface<br \/>\nAuthors: Jeffery Davis and Robert Kozma, CLION, U of Memphis, United States; U of Memphis, United States<br \/>\nSession: CIBCI&#8217;14 
Session 2<br \/>\nAbstract: Transient cortical oscillations in the form of rapid synchronization-desynchronization transitions are key candidates for neural correlates of higher cognitive activity monitored by scalp EEG and intracranial ECoG arrays. The transition period is on the order of 20-30 ms, and standard signal processing methodologies such as Fourier analysis are inadequate for proper characterization of the phenomenon. Hilbert transform-based (HT) analysis has shown great promise in detecting rapid changes in the synchronization properties of the cortex measured by high-density EEG arrays. Therefore, HT is a primary candidate for the operational principles of brain computer interfaces (BCI). The Hilbert transform over narrow frequency bands has been applied successfully to develop robust BCI methods, but optimal filtering is a primary concern. Here we systematically evaluate the performance of FIR filters over various narrow frequency bands before applying Hilbert transforms. The conclusions are illustrated using rabbit ECoG data. 
The results are applicable to the analysis of scalp EEG data for advanced BCI devices.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This was the same line of research as &#8220;Abnormal Event Detection in EEG Imaging &#8211; Comparing Predictive and Model-based Approaches&#8221; above.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Presentation title: Development of SSVEP-based BCI using Common Frequency Pattern to Enhance System Performance<br \/>\nAuthors: Li-Wei Ko, Shih-Chuan Lin, Wei-Gang Liang, Oleksii Komarov and Meng-Shue Song, Institute of Bioinformatics and Systems Biology, NCTU, Taiwan; Department of Physics, NTHU, Taiwan; Institute of Molecular Medicine and Bioengineering, NCTU, Taiwan; Brain Research Center, NCTU, Taiwan<br \/>\nSession: CIBCI&#8217;14 Session 2<br \/>\nAbstract: Brain Computer Interface (BCI) systems provide an additional way for people to interact with the external environment without using peripheral nerves or muscles. Among the variety of BCI systems, those based on steady-state visual evoked potentials (SSVEP) are among the most common, because of their ease of use and good performance with little user training. In this study, we employed the common frequency pattern (CFP) method to improve the accuracy of our EEG-based SSVEP BCI system. We used four basic classifiers (SVM, KNNC, PARZENDC, LDC) to estimate the accuracy of our SSVEP system. Without using CFP, the highest accuracy of the EEG-based SSVEP system was 80%. 
By using CFP, the accuracy could be raised to 95%.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This research also concerned SSVEP classification. The authors reported that their proposed classification method, CFP, raised the classification rate to 95%.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Presentation title: Identification of Three Mental States Using a Motor Imagery Based Brain Machine Interface<br \/>\nAuthors: Trongmun Jiralerspong, Chao Liu and Jun Ishikawa, Tokyo Denki University, Japan<br \/>\nSession: CIBCI&#8217;14 Session 3<br \/>\nAbstract: The realization of robotic systems that understand human intentions and produce correspondingly complex behaviors is needed particularly for disabled persons, and would consequently benefit the aged. For this purpose, a control technique that recognizes human intentions from neural responses, called a brain machine interface (BMI), has been suggested. The unique ability to communicate with machines by brain signals opens a wide area of applications for BMI. Recently, the combination of BMI capabilities with assistive technology has provided solutions that can benefit patients with disabilities and many others. This paper proposes a BMI system that uses a consumer-grade electroencephalograph (EEG) acquisition device. The aim is to develop a low-cost BMI system suitable for households and daily applications. 
As a preliminary study, an experimental system has been prototyped to classify user intentions of moving an object up or down, which are basic instructions needed for controlling most electronic devices, by using only EEG signals. In this study, an EEG headset equipped with 14 electrodes is used to acquire EEG signals, but only 8 electrodes are used to identify user intentions. The features of EEG signals are extracted based on the power spectrum, and artificial neural networks are used as classifiers. To evaluate the system performance, online identification experiments with three subjects are conducted. Experimental results show that the proposed system worked well and could achieve an overall correct identification rate of up to 72% with 15 minutes of training time by a user with no prior experience in BMI.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Like my own work, this research aimed to build a BMI system using motor imagery. I was puzzled that the motor cortex was not among the measurement sites in this experiment.<br \/>\n&nbsp;<br \/>\n&nbsp;<br \/>\nReferences<\/p>\n<ul>\n<li>2014 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE, http:\/\/ieee-ssci.org\/<\/li>\n<\/ul>\n<p><strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">Toshihide Shiraishi<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (Japanese)<\/strong><\/td>\n<td 
width=\"373\">2\u30af\u30e9\u30b9\u5206\u985e\u306e\u70ba\u306e\u907a\u4f1d\u7684\u30d7\u30ed\u30b0\u30e9\u30df\u30f3\u30b0\u3092\u7528\u3044\u305f\u7279\u5fb4\u91cf\u5909\u63db\u624b\u6cd5\u306e\u63d0\u6848<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u767a\u8868\u8ad6\u6587\u82f1\u30bf\u30a4\u30c8\u30eb<\/strong><\/td>\n<td width=\"373\">A Feature Transformation Method using GeneticProgramming for Two-Class Classification<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u8457\u8005<\/strong><\/td>\n<td width=\"373\">\u5ee3\u5b89\u77e5\u4e4b\uff0c\u767d\u77f3\u99ff\u82f1, \u5409\u7530\u502b\u4e5f\uff0c\u5c71\u672c\u8a69\u5b50<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u4e3b\u50ac<\/strong><\/td>\n<td width=\"373\">IEEE<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u8b1b\u6f14\u4f1a\u540d<\/strong><\/td>\n<td width=\"373\">2014 IEEE Symposium onComputational Intelligence and Data Mining<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u4f1a\u5834<\/strong><\/td>\n<td width=\"373\">Florida\uff0cOrlando\uff0cU.S.A<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u958b\u50ac\u65e5\u7a0b<\/strong><\/td>\n<td width=\"373\">2014\/12\/09-2014\/12\/12<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>\u8b1b\u6f14\u4f1a\u306e\u8a73\u7d30<\/li>\n<\/ol>\n<p>2012\/12\/09\u304b\u30892012\/12\/12\u306b\u304b\u3051\u3066, \u30d5\u30ed\u30ea\u30c0\u5dde\u30aa\u30fc\u30e9\u30f3\u30c9\u306b\u3066\u958b\u50ac\u3055\u308c\u307e\u3057\u305fSSCI 2014<sup>1)<\/sup>\u306b\u53c2\u52a0\u3044\u305f\u3057\u307e\u3057\u305f\uff0e\u3053\u306eSSCI\u306f\uff0cIEEE\u306b\u3088\u3063\u3066\u4e3b\u50ac\u3055\u308c\u3066\u304a\u308a\uff0cComputational 
Intelligence. I attended for the three days from 12\/10 to 12\/12. From our laboratory, Prof. Hiroyasu, Mr. Ohkubo, Mr. Hanawa and Mr. Hayashinuma also participated.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Presentation summary<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I took part in the poster session on the afternoon of the 11th. For an hour and a half, I discussed my work with people of various nationalities. It was my first opportunity to present in English and I was nervous, but I believe I managed to catch and discuss the questions about my research.<br 
\/>\nMy presentation concerned an improvement to GPMFC, a feature transformation method that uses genetic programming to raise the classification and generalization ability of two-class classifiers. I pointed out a problem with the evaluation function used in GPMFC, proposed an evaluation function that resolves it, and compared the proposed method with the existing one. The abstract follows.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"530\">In this paper, we propose a feature transformation method for two-class classification using genetic programming (GP). The GP used in our method derives a transformation expression that improves the classification accuracy of an SVM. We propose a weighting function for evaluating the transformed feature space and implement it as the GP evaluation function. The proposed evaluation function assumes an ideal two-class distribution of the samples and computes the distance between it and the actually obtained distribution; the weights are imposed according to these distances. Numerical experiments were conducted to confirm the effectiveness of the proposed method. As a result, the classification accuracy of the proposed method was better than that of the conventional method.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ul>\n<li>Questions and answers<\/li>\n<\/ul>\n<p>During this presentation I received the following questions.<br \/>\n&nbsp;<br \/>\n<strong>&#8226; Questions<\/strong><br \/>\nResearchers from many different fields attended this conference. Some of them had little knowledge of evolutionary computation, so I explained the basic concepts. Answering everything in English was difficult, and I had to restart my explanations several times, but I worked hard to catch the phrases in each question, repeating them back to confirm my understanding, and tried to phrase appropriate answers. I was also asked to explain the proposed method, and I think I handled that well.<br \/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>This was my first presentation at an international conference, and I was quite nervous. I was able to answer the questions during the presentation to some extent, but I was so focused on explaining that I could not ask questions back. I would like to gain more experience so that I can hold fuller discussions, and to build on this experience for my master&#8217;s thesis.<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>Attended presentations<\/li>\n<\/ol>\n<p>At this conference I attended the following three presentations in particular.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"530\">Presentation title: Two key properties of dimensionality reduction methods<br \/>\nAuthors: John A. Lee, Michel Verleysen<br \/>\nSession: CIDM&#8217;14 Session 5: High Dimensional Data Analysis<br \/>\nAbstract: Dimensionality reduction aims at providing faithful low-dimensional representations of high-dimensional data. 
Its general principle is to attempt to reproduce in a low-dimensional space the salient characteristics of the data, such as proximities. A large variety of methods exist in the literature, ranging from principal component analysis to deep neural networks with a bottleneck layer. In this cornucopia, it is rather difficult to find out why a few methods clearly outperform others. This paper identifies two important properties that enable some recent methods, like stochastic neighborhood embedding and its variants, to produce improved visualizations of high-dimensional data. The first property is a low sensitivity to the phenomenon of distance concentration. The second one is plasticity, that is, the capability to forget some data characteristics in order to better reproduce the others. From a manifold learning perspective, breaking some proximities typically allows for a better unfolding of the data. Theoretical developments as well as experiments support our claim that both properties have a strong impact. 
In particular, we show that equipping classical methods with the missing properties significantly improves their results.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation dealt with learning on high-dimensional data by unsupervised principal component analysis and building low-dimensional clusters with a heuristic solver. Various algorithms were introduced, with stochastic local search and the handling of variations arising during the search highlighted as important points. I could not follow everything just from listening, but I believe I gained important insights into reducing high-dimensional data to low dimensions. The presenter received an award during the banquet on the third day, and I felt this was research at a very high level.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"530\">Presentation title: Valid Interpretation of Feature Relevance for Linear Data Mappings<br \/>\nAuthors: Beno&#238;t Fr&#233;nay, Daniela Hofmann, Alexander Schulz, Michael Biehl, and Barbara Hammer<br \/>\nSession 
: CIDM&#8217;14 Session 5: High Dimensional Data Analysis<br \/>\nAbstract: Linear data transformations constitute essential operations in various machine learning algorithms, ranging from linear regression up to adaptive metric transformation. Often, linear scalings are not only used to improve model accuracy; rather, the feature coefficients provided by the mapping are interpreted as an indicator of the relevance of the feature for the task at hand. This principle, however, can be misleading, in particular for high-dimensional or correlated features, since it easily marks irrelevant features as relevant or vice versa. In this contribution, we propose a mathematical formalisation of the minimum and maximum feature relevance for a given linear transformation, which can be solved efficiently by means of linear programming. We evaluate the method on several benchmarks, where it becomes apparent that the minimum and maximum relevance closely resemble what is often referred to as weak and strong relevance of the features; hence, unlike the mere scaling provided by the linear mapping, it ensures valid 
interpretability.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation examined deriving the relationships between data features through linear transformations. The proposed linear transformation improves the relationships between features, and the authors evaluated it as giving better interpretability than conventional approaches. I could not fully follow the flow of the transformation used in the presentation and lost track partway through, so I felt that understanding it is important. In my own work I evaluate only the classification accuracy after transformation, but I also came to appreciate the importance of examining the correlations between features.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"530\">Presentation title: Review of Coevolutionary Developments of Evolutionary Multi-Objective and Many-Objective Algorithms and Test Problems<br \/>\nAuthors: Hisao Ishibuchi, Hiroyuki Masuda, Yuki Tanigaki and Yusuke Nojima<br \/>\nSession: MCDM&#8217;14 Session 6: Evolutionary Multi-Objective Optimization<br \/>\nAbstract: In the evolutionary multi-objective optimization (EMO) 
community, some well-known test problems have been frequently and repeatedly used to evaluate the performance of EMO algorithms. When a new EMO algorithm is proposed, its performance is evaluated on those test problems; thus, algorithm development can be viewed as being guided by test problems. A number of test problems have already been designed in the literature. Since the difficulty of designed test problems is usually evaluated by existing EMO algorithms through computational experiments, test problem design can be viewed as being guided by EMO algorithms. That is, EMO algorithms and test problems have been developed in a coevolutionary manner. The goal of this paper is to clearly illustrate such a coevolutionary development. We categorize EMO algorithms into four classes: non-elitist, elitist, many-objective, and combinatorial algorithms. In each category of EMO algorithms, we examine the relation between the developed EMO algorithms and the test problems used. Our examinations of test problems suggest the necessity of strong diversification mechanisms in many-objective EMO algorithms such as SMS-EMOA, MOEA\/D and 
NSGA-II.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This was a presentation by Prof. Ishibuchi of Osaka Prefecture University. It surveyed multi-objective genetic algorithms on problems with many objective functions. Test problems suited to the purpose are essential for evaluating multi-objective genetic algorithms; the talk addressed which test problems are suitable when a larger number of objective functions exist, and stressed the importance of creating test problems matched to that purpose. As I work on multi-objective genetic programming, I felt that solution diversity is extremely important, and since a wide variety of algorithms exist, I would like to try them out.<br \/>\n<strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">Kenya Hanawa<\/td>\n<\/tr>\n<tr>\n<td 
width=\"147\"><strong>\u767a\u8868\u8ad6\u6587\u30bf\u30a4\u30c8\u30eb<\/strong><\/td>\n<td width=\"373\">\u8133\u8840\u6d41\u5909\u5316\u91cf\u304b\u3089Deep Learning\u3092\u7528\u3044\u305f<br \/>\n\u88ab\u9a13\u8005\u306e\u6027\u5225\u5206\u985e<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u767a\u8868\u8ad6\u6587\u82f1\u30bf\u30a4\u30c8\u30eb<\/strong><\/td>\n<td width=\"373\">Gender classification of subjects from<br \/>\ncerebral blood flow changes using Deep Learning<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u8457\u8005<\/strong><\/td>\n<td width=\"373\">\u5ee3\u5b89\u77e5\u4e4b\uff0c\u5859\u8ce2\u54c9\uff0c\u5c71\u672c\u8a69\u5b50<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u4e3b\u50ac<\/strong><\/td>\n<td width=\"373\">IEEE Symposium Series on Computational Intelligence<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u8b1b\u6f14\u4f1a\u540d<\/strong><\/td>\n<td width=\"373\">SSCI 2014<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u4f1a\u5834<\/strong><\/td>\n<td width=\"373\">Caribe Royale Hotel<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u958b\u50ac\u65e5\u7a0b<\/strong><\/td>\n<td width=\"373\">2014\/12\/10-2014\/12\/12<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>\u8b1b\u6f14\u4f1a\u306e\u8a73\u7d30<\/li>\n<\/ol>\n<p>2014\/12\/10\u304b\u30892014\/12\/12\u306b\u304b\u3051\u3066\uff0c\u30d5\u30ed\u30ea\u30c0\u306e\u30aa\u30fc\u30e9\u30f3\u30c9\u306b\u3066\u958b\u50ac\u3055\u308c\u307e\u3057\u305fSSCI 2014\u306b\u53c2\u52a0\u3044\u305f\u3057\u307e\u3057\u305f\uff0e\u3053\u306e\u5b66\u4f1a\u306f\uff0cIEEE Symposium Series on Computational Intelligence\u306b\u3088\u3063\u3066\u4e3b\u50ac\u3055\u308c\u305f\u5b66\u4f1a\u3067\uff0c\u8a08\u7b97\u77e5\u80fd(CI)\u306b\u95a2\u3059\u308b\u3059\u3079\u3066\u306e\u5074\u9762\u3092\u4fc3\u9032\u3055\u305b\u308b\u3053\u3068\u3092\u76ee\u7684\u306b\u958b\u50ac\u3055\u308c\u3066\u3044\u307e\u3059\uff0e<br 
\/>\nI attended on the 10th, 11th and 12th. From our laboratory, Prof. Hiroyasu, Mr. Ohkubo, Mr. Shiraishi and Mr. Hayashinuma also participated.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Presentation summary<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I took part in the &#8220;SSCI&#8217;14 Poster Session&#8221; on the afternoon of the 11th. The format was a poster presentation with a session time of 1 hour and 35 minutes.<br \/>\nMy presentation was &#8230;. The abstract follows.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">In this paper, we consider classifying subjects&#8217; gender from cerebral blood flow changes measured by fNIRS using Deep 
Learning. Cerebral blood flow changes have been reported to be related to brain activity; therefore, if this classification achieves high accuracy, gender classification should be related to brain activity. In the experiment, fNIRS data were measured from subjects performing a memory task in a white-noise environment. The results confirmed that the trained classifier achieved a high classification rate, which suggests that a relationship exists between brain activity and cerebral blood flow changes.<br \/>\n&nbsp;<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&#8226; Questions and answers<br \/>\nDuring this presentation I received the following questions.<br \/>\n&nbsp;<br \/>\n<strong>&#8226; Question 1<\/strong><br 
\/>\nI failed to note the questioner&#8217;s name. The question was whether we had done the same thing with fMRI as well as fNIRS. I answered that we are working with fNIRS only.<br \/>\n&nbsp;<br \/>\n<strong>&#8226; Question 2<\/strong><br \/>\nI failed to note the questioner&#8217;s name. The question was whether we had compared the accuracy with other deep learning methods. I answered that at this preliminary stage we are using this method only.<br \/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>This was my first presentation in English, but thanks to the preparation I did beforehand, I was very glad to deliver a presentation that got through to the audience. Building on this experience, I want to keep improving my English and working hard on my research.<br \/>\n&nbsp;<\/p>\n<ol 
start=\"3\">\n<li>Attended presentations<\/li>\n<\/ol>\n<p>At this conference, I attended the following four presentations.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Adaptive Particle Swarm Optimization Learning in a Time Delayed Recurrent Neural Network for Multi-Step Prediction<br \/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a <em>Kostas Hatalis, Basel Alnajjab, Shalinee Kishore and Alberto Lamadrid<\/em><br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Neural Networks<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a In this study we propose the development of an adaptive particle swarm optimization (APSO) learning algorithm to train a non-linear autoregressive (NAR) neural network, which we call PSONAR, for short term time series prediction of ocean wave elevations. We also introduce a new stochastic inertial weight to the APSO learning algorithm. Our work is motivated by the expected need for such predictions by wave energy farms. In particular, it has been shown that the phase resolved predictions provided in this paper could be used as inputs to novel control methods that hold promise to at least double the current efficiency of wave energy converter (WEC) devices. As such, we simulated noisy ocean wave heights for testing. We utilized our PSONAR to get results for 5, 10, 30, and 60 second multistep predictions. Results are compared to a standard backpropagation model. 
Results show APSO can outperform backpropagation in training a NAR neural network.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation proposed a learning algorithm for neural networks and evaluated it on short-term time-series prediction of ocean wave elevations. I could follow some of the equations, but most of it was too difficult for me to understand.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aAttractor Flow Analysis for Recurrent Neural Network with Back-to-Back Memristors<br \/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a <em>Gang Bao and Zhigang Zeng<\/em><br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Neural Networks<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Memristor is a nonlinear resistor with the character of memory and is proved to be suitable for simulating synapse of neuron. This paper introduces two memristors in series with the same polarity (back-to-back) as simulator for neuron&#8217;s synapse and presents the model of recurrent neural networks with such back-to-back memristors. By analysis techniques and fixed point theory, some sufficient conditions are obtained for recurrent neural network having single attractor flow and multiple attractors flow. 
At last, simulation with numeric examples verify our results.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation proposed a model based on recurrent neural networks. The slides showed the structure of the network, but I could not follow the spoken English well enough to understand it. It made me want to try using recurrent neural networks in my own research as well.<br \/>\n&nbsp;<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Fingerprint multilateration for automatically classifying evolved Prisoner&#8217;s Dilemma agents<br \/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a <em>Jeffrey Tsang<\/em><br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Neural Networks<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a We present a novel tool for automatically analyzing evolved Prisoner&#8217;s Dilemma agents, based on combining two existing techniques: fingerprinting, which turns a strategy into a representation-independent functional summary of its behaviour, and multilateration, which finds the location of a point in space using measured distances to a known set of anchor points. 
We take as our anchor points the space of 2-state deterministic transducers; using this, we can emplace an arbitrary strategy into 7-dimensional real space by computing numerical integrals and solving a set of linear equations, which is sufficiently fast to be doable online. Several new aspects of evolutionary behaviour, such as the velocity of evolution and population diversity, can now be directly quantified.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation was very difficult. When the speaker discussed probabilistic versus deterministic approaches, I could not keep up because my knowledge of statistics was insufficient. I felt that I need to study statistics little by little so that I can understand probabilistic neural network algorithms.<br \/>\n&nbsp;<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Visual Analytics for Neuroscience-Inspired Dynamic Architectures<br \/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a <em>Margaret Drouhard, Catherine Schuman, J. Douglas Birdwell and Mark Dean<\/em><br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Neural Networks<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a We introduce a visual analytics tool for neuroscience-inspired dynamic architectures (NIDA), a network type that has been previously shown to perform well on control, anomaly detection, and classification tasks. 
NIDA networks are a type of spiking neural network, a non-traditional network type that captures dynamics throughout the network. We demonstrate the utility of our visualization tool in exploring and understanding the structure and activity of NIDA networks. Finally, we describe several extensions to the visual analytics tool that will further aid in the development and improvement of NIDA networks and their associated design method.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation was about an application of neural networks. Here too I could barely follow the spoken English, so I could only guess at the content by looking at the slides and reading the simple English written on them, and I did not understand it very well. I felt that studying English is important, since being able to follow spoken English would deepen my understanding.<br \/>\n&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I am in Florida for a conference called IEEE SSCI. I will give four presentations. \uff11\uff09\u3000Endoscope Image Analysis Method for Evaluating the Extent of Early Gast &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/is.doshisha.ac.jp\/news\/?p=2564\" class=\"more-link\"><span class=\"screen-reader-text\">&#8220;\u3010\u901f\u5831\u3011IEEE  SSCI 2014&#8221; 
\u306e<\/span>\u7d9a\u304d\u3092\u8aad\u3080<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-2564","post","type-post","status-publish","format-standard","hentry","category-6"],"_links":{"self":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/posts\/2564","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2564"}],"version-history":[{"count":0,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/posts\/2564\/revisions"}],"wp:attachment":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2564"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2564"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2564"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}