【速報】The 24th Annual Meeting of the Organization for Human Brain Mapping
Posted 2018-06-17, https://is.doshisha.ac.jp/news/?p=5213

From June 17 to June 21, 2018, we gave the following presentations at The 24th Annual Meeting of the Organization for Human Brain Mapping (https://www.humanbrainmapping.org/i4a/pages/index.cfm?pageID=3821), held in Singapore.

- Generating individual brain atlases reflecting structural and functional characteristics (中村圭佑, M2)
- Mindful Driving: Detecting driver's attention and distraction using fNIRS (藤原佑亮, M2)
- Extracting functional network differences from multiple brain states based on graph theory metrics (石田翔也, M2)
- Intra-individual variation in graph theoretical properties of functional networks during meditation (大塚友樹, M1)
- Mapping the brain state behavior during meditation in low dimensional feature space (三好巧真, M2)
- fNIRS study of attentional states based on dynamic functional connectivity analysis (西澤美結, M2)
- Functional connectivity network of Kanizsa illusory contour perception (杉野梨緒, M1)

Conference participation report

Reporter: 中村圭佑
Presentation title: Generating individual brain atlases reflecting structural and functional characteristics
Authors: 中村圭佑, 日和悟, 廣安知之
Organizer: Organization for Human Brain Mapping
Meeting: 2018 OHBM Annual Meeting
Venue: Suntec City, Singapore
Dates: June 17-21, 2018

1. Details of the conference

I attended the 2018 OHBM Annual Meeting, held in Singapore from June 17 to June 21, 2018. The meeting is an international conference organized by the Organization for Human Brain Mapping. It brings together researchers from diverse backgrounds who work on mapping human brain structure and function, with the aim of promoting communication and education among these scientists(1). Also attending from our laboratory were Prof. 廣安 and Prof. 日和, the M2 students 石田, 三好, 西澤, and 藤原, and the M1 students 大塚 and 杉野.

2. Research presentation

Presentation overview

I took part in the poster sessions on June 19 and 20 and the poster reception on June 21. The format was a poster presentation, and I discussed the work freely with attendees for roughly three hours. My presentation was titled "Generating individual brain atlases reflecting structural and functional characteristics." The abstract is reproduced below.

Introduction: Group atlases are commonly used to define the nodes in brain functional network analysis. Most conventional parcellation methods have two issues: (1) they do not consider individual differences in functional/structural features, and (2) the number of parcels cannot be changed. This study proposes a framework for generating individual brain atlases that addresses these issues, and compares individual differences in the generated results. Additionally, brain regions that are geometrically similar between individuals were compared with conventional group atlases.

Methods: The proposed framework generates an atlas with a user-defined number of regions from an individual's structural and functional brain images (Fig. 1). First, the structural image is parcellated into a predetermined number of regions using SLIC (Simple Linear Iterative Clustering) [1]. Next, a Region of Interest (ROI)-to-ROI functional connectivity matrix is computed over the parcellated regions. The graph defined by this matrix is divided into a user-defined number of subgraphs using spectral clustering [2], and the set of regions composing each subgraph is given the same label. To evaluate the framework, individual brain atlases of nine healthy adults (30.1 ± 8.2 years; six females, three males), randomly selected from the WU-Minn HCP dataset (downloaded from ConnectomeDB), were generated from their T1w and resting-state functional MRI (rs-fMRI) images. The number of parcels was set to 100 in this experiment.

Results: As a result of the parcellation, a few spatially separated regions were merged into one on the basis of functional features, and regions were generated whose shapes differed among subjects (Fig. 2). The spatial overlap of each region among subjects was evaluated using the Dice coefficient; the average similarity across all regions was 51%.
Individual differences may exist in regions with approximately 50% similarity, which suggests that inter-individual differences affect the parcellation results. Additionally, regions with more than 70% similarity were extracted as regions common across subjects and compared with the Power 264 functional atlas [3]. The common regions that spatially overlapped the Power 264 atlas were classified according to the functional network assigned to each atlas region, and were identified as parts of the default mode network (DMN), the auditory network, and the cingulo-opercular network. Because the DMN is expected to appear in the resting state [4], it is reasonable that regions comprising the DMN were extracted. This suggests that parcellation with the proposed framework reflects brain activity common to the subjects.

Conclusions: This study proposed a framework for generating an atlas from an individual's functional and structural brain data. The framework parcellated each subject's brain into 100 regions. Some regions differed in shape among subjects, while the regions common across subjects belonged to networks associated with the resting state. These results suggest that atlases generated with the proposed framework reflect both individual differences in brain function and functional networks shared across individuals.

Questions and answers

During the poster presentation I received the following questions.

Question 1
Several attendees asked how the initial number of SLIC divisions, 2,048, was chosen. I answered that in this study it was fixed as a user-defined value, and that the value can be increased if one wants an atlas that puts more weight on functional connectivity.
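As a rough illustration of the two-stage procedure described in the abstract (structural supervoxels, then spectral clustering of the ROI-to-ROI connectivity graph), the sketch below uses k-means on voxel coordinates and intensity as a crude stand-in for SLIC. The function and parameter names are hypothetical; this is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def individual_atlas(struct_img, func_img, n_supervoxels=50, n_regions=5, seed=0):
    """Sketch of the two-stage parcellation.

    struct_img : (X, Y, Z) structural intensities
    func_img   : (X, Y, Z, T) functional time series
    """
    # Stage 1: spatial clustering on (x, y, z, intensity) features,
    # a crude stand-in for SLIC supervoxels.
    xyz = np.argwhere(np.ones(struct_img.shape, dtype=bool)).astype(float)
    feats = np.column_stack([xyz, struct_img.ravel()])
    sv = KMeans(n_clusters=n_supervoxels, n_init=4,
                random_state=seed).fit_predict(feats)
    # Mean time series per supervoxel -> ROI-to-ROI connectivity matrix.
    ts = func_img.reshape(-1, func_img.shape[-1])
    roi_ts = np.stack([ts[sv == k].mean(axis=0) for k in range(n_supervoxels)])
    conn = np.corrcoef(roi_ts)
    # Stage 2: divide the connectivity graph into a user-defined number of
    # subgraphs; spectral clustering needs a non-negative affinity.
    merged = SpectralClustering(n_clusters=n_regions, affinity="precomputed",
                                random_state=seed).fit_predict(np.abs(conn))
    return merged[sv].reshape(struct_img.shape)

def dice(region_a, region_b):
    """Dice coefficient, the overlap measure used in the evaluation above."""
    a, b = np.asarray(region_a, bool), np.asarray(region_b, bool)
    return 2.0 * np.sum(a & b) / (a.sum() + b.sum())
```

For example, `dice(atlas_subj1 == r, atlas_subj2 == r)` gives the spatial overlap of region `r` between two subjects (after matching region labels across subjects); averaging over regions corresponds to the 51% figure reported above.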
Question 2
I was asked whether the influence of functional connectivity is taken into account within the regions produced by SLIC. I answered that it is not considered inside those regions. As with question 1, however, adjusting the initial number of SLIC divisions changes the size of the generated supervoxels, which increases the degree to which functional connectivity influences the result.
Question 3
I was asked whether I had compared the performance of the individual atlases generated by the proposed method against a group atlas fitted to each individual brain. I answered that this comparison has not yet been done; I consider it one of the points to address when writing my master's thesis.

Question 4
I received the advice to optimize parameters such as the numbers of divisions in SLIC and spectral clustering, using within-individual reproducibility and between-individual variability as objective functions. I believe this should be done, but given the heavy computational load of generating the atlases and of computing inter-atlas similarity, it will require a faster computing environment and a more efficient processing pipeline.

Impressions

This was my second international conference. Reflecting on last year's OHBM, where my poster and topic did not make a strong case for the research, I prepared the poster design and the presentation of the results so as to make a strong first impression. This time, several people showed interest in my poster even before the poster session started, and many stopped to listen to my explanation and ask questions, so I spent a more meaningful time than last year. I believe this was due not only to thorough preparation but also to the growing attention to parcellation and individual-based analysis in current brain functional mapping. Through this conference I not only felt, through the poster presentation, that the research has become my own, but also reconfirmed where my research stands in the field of brain functional mapping. I hope to turn what I learned here into research progress and a good master's thesis.

3. Attended presentations

At this meeting I attended the following five presentations.

Title: Progress in multivariate analysis in brain imaging with Nilearn
Authors: Kamalaker Dadi, Jerome Dockes, Andres Hoyos Idrobo, et al.
Session: ORAL SESSION: Modeling and Analysis Methods I
Abstract: Introduction: The application of statistical learning techniques to human neuroimaging data is increasingly popular in the study of brain diseases and cognition.
Emerging data-processing needs are addressed by Nilearn (http://nilearn.github.io), a statistical learning package for neuroimaging written in Python. Nilearn was developed to make it easy to apply statistical learning techniques to large amounts of human functional brain data [1], facilitating communication between statistical learning scientists and imaging neuroscientists. Nilearn leverages the main Python machine learning package, scikit-learn [2], making it possible to apply almost any machine learning technique to neuroimaging data, and it is a complete package with visualization utilities. Nilearn can handle both task fMRI and resting-state fMRI for predictive modelling, classification, decoding, and connectivity analysis. This submission focuses on current developments in Nilearn, including decoding, estimation of functional biomarkers from resting-state fMRI, automatic NeuroVault data download for meta-analyses, and surface visualizations.
Methods: Nilearn offers a variety of state-of-the-art methods in a ready-to-go pipeline for challenging imaging datasets. The processing pipeline greatly facilitates building a feature matrix from 4D NIfTI image data, analyzing it with any scikit-learn-based machine learning method, and visualizing the results. The data input steps can perform various signal preprocessing operations, such as standardization, frequency-domain filtering, and resampling. Methods for decoding are being added to Nilearn to make it easy for non-expert users to apply high-dimensional regression and classification models [3]; they combine masking, feature selection, and model estimation with parameter selection. Nilearn also provides methods for parcellating the brain into ROIs for learning functional connectomes: decomposition methods such as independent component analysis and dictionary learning, and clustering methods such as Ward and k-means. Code for easy download of collections of statistical maps from the NeuroVault repository [4] has been added recently; a typical example where the downloader is helpful is image-based meta-analysis, i.e., decoding cognitive labels from collections of brain maps. Another major addition is the projection of volume data onto the cortical surface, which allows efficient visualization of atlases and statistical results on the cortical surface [6].
Results: All the results shown in Figures 1 and 2 are live examples in Nilearn that can easily be adapted to other models for decoding (Figure 1, top left) and parcellation (Figure 1, top right). Alternative methods for parcellation include agglomerative clustering with several linkage criteria.
Conclusions: Nilearn gives the neuroimaging community tools to analyze brain imaging data and test hypotheses on brain organization, and to extract biomarkers to predict and better understand brain diseases. Nilearn has 68 contributors worldwide so far and is a community effort. Ongoing contributions make it ever easier to run complex pipelines and improve analysis reproducibility.

This presentation introduced and explained Nilearn, a Python package that enables statistical learning on neuroimaging data. It covered Nilearn's many features, including decoding, functional connectivity analysis, parcellation, data download utilities for meta-analysis, and visualization tools. Since I mainly write my research programs in Python, I expect to be able to make use of several of these features in my own work.
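Nilearn's maskers bundle the signal preprocessing steps mentioned in the abstract (masking, standardization, frequency filtering). As a rough, dependency-light sketch of what that stage does, assuming a 4D array and a boolean mask (this is illustrative, not Nilearn's actual implementation):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def clean_timeseries(img4d, mask, t_r, low=0.01, high=0.1):
    """Extract masked voxel time series, band-pass filter, then z-score:
    roughly the masking/standardization/filtering stage described above."""
    X = img4d[mask].T                      # -> (time, n_voxels)
    b, a = butter(3, [low, high], btype="band", fs=1.0 / t_r)
    X = filtfilt(b, a, X, axis=0)          # zero-phase band-pass along time
    return (X - X.mean(axis=0)) / X.std(axis=0)
```

The resulting (time, voxels) matrix is the feature matrix one would hand to any scikit-learn estimator, as the abstract describes.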
Title: A probabilistic method for modelling cortical layer composition in sub-voxel resolution
Authors: Omri Tomer, Zvi Baratz, Ittai Shamir, et al.
Session: ORAL SESSION: Modeling and Analysis Methods II
Abstract: Introduction: The ability to investigate the cortical layers is greatly hindered by limited resolution. We recently developed a method for probabilistic classification of the cortex into sub-populations of grey matter based on multi-component sub-voxel T1 analysis [1] and mixture modelling. We used multiple inversion recovery (IR) scans with varying inversion times (TIs), from which we calculated the brain T1 distribution, and were then able to identify several sub-populations of cortical grey matter. Some of the statistical considerations we faced concern the use of mixture models to classify whole-brain T1 distributions. We focus here on the advantages of using a mixture of t-distributions [2] instead of a Gaussian mixture model: the IR optimization analysis is noisy, and t-distributions are more robust to noise in the data. We also address the number of components in the mixture model, which is difficult to determine when the distribution modes are not well separated. While our method has spatial limitations, being probabilistic rather than volumetric, it lets us break the resolution barrier, enabling research into the cortical layers and their clinical and behavioral implications.
Methods: Data acquisition: 66 healthy subjects underwent a protocol of 60 IR-EPI scans with a range of TIs. Data analysis: The IR data were fitted voxel-by-voxel to a multi-component IR function using non-linear least-squares optimization, to calculate the T1 components and their partial volumes and to generate a whole-brain T1 distribution. To find the mixture model that best fits the data, Gaussian mixture models (GMM) and mixtures of t-distributions (MoT) with varying numbers of mixtures were fitted to the whole-brain T1 distribution, and we examined the models' Bayesian information criterion (BIC), peak dispersion, and robustness across subjects. Probability maps: Probability maps were calculated per voxel for the mixtures identified in the grey-matter range; these maps were registered to the MNI brain, and the average layer composition across subjects was calculated and projected onto surface maps.
Results: The whole-brain T1 histogram shows a multi-peak pattern that befits a mixture model; these peaks were previously shown to correspond well with the cytoarchitectonic layers. We then examined several criteria to select the best mixture model, including goodness of fit (BIC), distribution separation, and inter-subject variability. We found the optimal model to be a MoT with 11 mixtures: it combines a good fit (low BIC, Figure 1A) with an appropriate number of mixtures in the grey-matter range corresponding to cytoarchitectonic layers (Figure 1C, top), and it was robust across subjects (Figures 2B-D). In general, the MoT peaks are more robust and distinct across subjects, whereas the GMM peaks are more variable, so their subject-averaged peaks overlap (Figures 1B-D). This suggests that a MoT is a more stable way than a GMM to model the different tissues in the noisy whole-brain T1 distribution. The average results are projected onto a surface map in Figure 2A.
Conclusions: Our results show that using a mixture of t-distributions, which is more robust to outliers, instead of a Gaussian mixture model significantly improves our ability to model the different tissue classes in the whole-brain T1 distribution. We can identify various sub-populations of cortical grey matter, and this method may help investigate their different roles in the human brain. While the method has spatial limitations that must be resolved, since the segmentation of the layers is probabilistic rather than geometric, it paves the way for many clinical and behavioral research directions.

This presentation described research that models the partial volume effect arising when cortical layers are estimated from T1-weighted images, thereby enabling layer estimation at sub-voxel resolution. The approach adopts a mixture-of-t-distributions model to estimate the cortical layers probabilistically. In relation to my own research, I thought that cortical layers estimated from T1 images might be usable as structural features in SLIC.
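The model selection described above, fitting mixtures with varying numbers of components and comparing BIC, can be sketched as follows. scikit-learn provides no mixture of t-distributions, so a Gaussian mixture stands in here; the function name and defaults are illustrative only, not the authors' pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_mixture_by_bic(t1_values, max_components=12, seed=0):
    """Fit 1..max_components Gaussian mixtures to a 1-D T1 distribution
    and return the model with the lowest BIC, plus all BIC values."""
    X = np.asarray(t1_values, dtype=float).reshape(-1, 1)
    models = [GaussianMixture(n_components=k, random_state=seed).fit(X)
              for k in range(1, max_components + 1)]
    bics = np.array([m.bic(X) for m in models])
    return models[int(np.argmin(bics))], bics
```

The other criteria the authors mention, peak dispersion and cross-subject robustness, would then be checked on the fitted component means and covariances.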
Title: Spatial Topography of Individual-Specific Cortical Networks as a Fingerprint of Human Behavior
Authors: Ru Kong, Jingwei Li, Nanbo Sun, et al.
Session: LOC Symposium: Mapping Brain Functional Connectivity to Behavior in Young and Old: Methods, Illustrations and Challenges
Abstract: Introduction: Individual-specific resting-state fMRI (rs-fMRI) networks exhibit unique features not present in group-average networks [1,2,3], but their behavioral relevance is currently unknown. Here we generate high-quality individual-specific parcellations and show that individual differences in the spatial configuration of cortical networks can be used to predict cognition, personality, and emotion.
Methods: We propose a multi-session hierarchical Bayesian model (MS-HBM) to estimate individual-specific rs-fMRI networks. The multiple layers of the model explicitly separate inter-subject (between-subject) and intra-subject (within-session) variability. By ignoring intra-subject variability, previous individual-specific network mappings may have confused intra-subject variability with inter-subject differences, resulting in sub-optimal parcellations; see [4] for model details. First, we compare MS-HBM with a group-level approach (Yeo2011 [5]) and two individual-specific approaches (YeoBackProject [4]; Gordon2017 [3]). Rs-fMRI data from 30 subjects (10 sessions each) in the CoRR-HNU dataset [6] were processed with a standard pipeline [7]. We parcellate each subject into 17 networks using one or more rs-fMRI sessions, and evaluate homogeneity (defined as the pairwise correlations among vertices of the same network) on the remaining sessions; higher homogeneity indicates a better parcellation. Second, we test whether the spatial configuration of individual-specific parcellations is behaviorally meaningful. Using ICA-FIX rs-fMRI from the HCP S900 release [8], 58 behavioral phenotypes were selected, and individual-specific MS-HBM parcellations were estimated for subjects with four runs and no missing behavioral data (N = 579).
The parcellations were used to predict individuals&#8217; behaviors using kernel regression [9] within a 20-fold cross-validation procedure. For example, let y be the behavioral measure (e.g., fluid intelligence) and p be the individual-specific parcellation of a test subject. Let yi be the behavioral measure and pi be the individual-specific parcellation of the i-th training subject. Kernel regression predicts the test subject&#8217;s behavior as the linear combination of the training subjects&#8217; behaviors: y\u221d\u2211i\u2208Training set yiS(pi, p). Here S(pi, p) is the similarity (Dice) between parcellations pi and p. Thus, kernel regression makes the assumption that subjects with more similar parcellations have similar behavior.<br \/>\nResults: MS-HBM parcellations achieve the highest homogeneity in new (out-of-sample) rs-fMRI data from the same subjects (Fig. 1A). Using just one fMRI session (10 min), MS-HBM matches the homogeneity achieved by Gordon2017 using five sessions (50 min). Fig. 1B shows the parcellations of four HCP subjects. Black and green arrows indicate individual-specific features replicated on different days. Fig. 2 shows the out-of-sample prediction accuracy of 58 behavioral measures. Average prediction accuracies of cognitive (Fig. 2A), NEO5 personality (Fig. 2B) and emotional (all items in Fig. 2C except emotional recognition) measures are r = 0.15 (p = 1.7e-8), r = 0.10 (p = 0.0018), and r = 0.10 (p = 5.9e-4) respectively. While modest, accuracies are better than HCP MegaTrawl [10] using functional connectivity strength (as opposed to spatial topography of individual-specific networks) for behavioral prediction.<br \/>\nConclusions: We develop a MS-HBM approach to estimate individual-specific parcellations. Compared with other approaches, MS-HBM cortical parcellations generalize better to new rs-fMRI data from the same subjects. 
MS-HBM parcellations are highly reproducible within individuals, while capturing inter-subject network variations. Finally, inter-subject variation in the spatial configuration of cortical networks is strongly related to inter-subject variation in behavior, suggesting its potential utility as a fingerprint of human behavior.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation introduced research that attempts to estimate cognition, personality, and emotion from an individual&#8217;s resting-state functional networks. Parcellation was performed for each individual to determine the ROIs used for network generation, and variability across individuals and across days was also examined. Since this work is conceptually close to my own research, I felt the need to investigate individual prediction in my future work as well.<br \/>\n&nbsp;<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aReproIn: automatic generation of shareable, version-controlled BIDS datasets from MR scanners<br \/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aMatteo Visconti di Oleggio Castello, James Dobson, Terry Sackett, et al.<br
\/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aORAL SESSION: Informatics<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Introduction: Lack of reproducibility is a problem in neuroimaging. Adherence to open standards, easy data sharing, and automation of data conversion and analysis steps are some of the key solutions to address it. The formalization of the Brain Imaging Data Structure (BIDS) [1] made it easier for researchers to collaborate on shared data and to benefit from standardized processing using various BIDS-aware applications. At Dartmouth, following the philosophy that science should be open by design [2], we automated the collection of neuroimaging data as a hierarchy of BIDS datasets right from the MR scanner. Thus, individual research groups do not have to convert their data to BIDS manually, eliminating one of the biggest barriers to data sharing. Adherence to the BIDS standard allows investigators to immediately use BIDS-aware applications for data QA (e.g., bids-validator, MRIQC [3]) to catch obvious problems with data acquisition, and to automate preprocessing and analysis. Because the entire process occurs in a Singularity container [4], and all data are version-controlled with DataLad and git, our approach eliminates virtually any ambiguity in data provenance. Here we present the details of our setup, named ReproIn (Reproducible Input). All system configurations, software, and materials are released under open-source licenses and are provided in a container, so that any institution can easily implement this solution at their imaging center.<br \/>\nMethods: Our approach required three key components: 1. A consistent naming scheme for subjects (anonymized), studies and sequences at the scanner, and a DICOM storage naming scheme (see Figure 2). This naming scheme was created to flexibly accommodate all use-cases at DBIC. 2.
A heuristic definition for heudiconv [6,7] to incrementally convert collected data from DICOM into BIDS without human intervention. The heuristic automates:<br \/>\n&#8211; identification of the location for a particular accession within the hierarchy of studies<br \/>\n&#8211; identification of the session for multi-session studies<br \/>\n&#8211; identification and annotation of canceled runs, so they can later be reviewed and removed from the study dataset repository, while allowing reverts in case of mistakes thanks to git\/DataLad.<br \/>\nThe automation eliminates manual interaction with the acquired data during conversion, and only requires the user to validate the dataset with the BIDS validator at the end to detect possible anomalies. 3. DataLad to provide a system for version control, meta-data annotation, and access to the data. DataLad allowed us to:<br \/>\n&#8211; incrementally update clones of the dataset on local lab workstations from the central conversion box as more data come in<br \/>\n&#8211; prepopulate datasets with templates for BIDS metadata files (e.g., dataset_description.json, *_events.tsv), while allowing their immediate modification in local clones<br \/>\n&#8211; annotate files containing possibly sensitive information as non-distributable to prevent future public sharing<br \/>\n&#8211; flexibly fetch portions of the dataset to the processing machines (e.g., only anatomicals for FreeSurfer parcellation)<br \/>\n&#8211; incorporate acquired datasets into a larger institutional repository as DataLad sub-datasets for version control and provenance<br \/>\nResults: Every study at the Dartmouth Brain Imaging Center, including weekly QA (shared publicly and updated via DataLad [9]), follows the established naming scheme; so far, over 25 studies (including multi-session studies) have been collected, converted to BIDS, and verified to conform to the BIDS 1.x standard, and are thus potentially ready for sharing on OpenfMRI [5].<br \/>\nConclusions: We are actively working on a monitoring
tool to fully automate the process by starting the conversion into BIDS without human interaction. The pipeline can be extended to reduce the user&#8217;s workload further. For example, anatomical scans can be automatically defaced (e.g., using mridefacer [9]) and passed through MRIQC [3] to provide users with quality control measures right off the scanner.<br \/>\n&nbsp;<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation introduced the Brain Imaging Data Structure (BIDS), a format for storing experimental neuroimaging data, together with tools for version-controlling BIDS-formatted data. Because BIDS makes it easy to collaborate on shared data and to apply standardized processing with BIDS-aware applications, it has in recent years become the de facto standard format for neuroimaging data. I felt that our laboratory should also consider how it stores its data.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aSpatiotemporal Neonatal Cortical Surface Atlases Construction from 39 to 44 Weeks Using 764 Subjects<br
\/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aZhengwang Wu, Gang Li, Li Wang, et al.<br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aORAL SESSION: Informatics<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Introduction:<br \/>\nThe human brain undergoes exceptionally dynamic development during the first postnatal weeks. High-quality neonatal cortical surface atlases are therefore much needed for neonatal brain analysis, but they remain scarce (Hill J. 2010; Bozek J. 2016). To address this issue, we construct a set of neonatal cortical surface atlases from 764 term-born neonates, to our knowledge the largest neonatal dataset. To better characterize the dynamic cortical development during this stage, instead of constructing a single atlas, we construct spatiotemporal atlases for each week from 39 to 44 gestational weeks. Rather than averaging co-registered surfaces to construct the atlas, which generally leads to over-smoothed cortical folding patterns, we adopt a spherical patch-based group-wise sparse representation to overcome noise and potential registration errors. Our atlases preserve sharp cortical folding patterns and thus lead to better alignment of new subjects onto the atlases.<br \/>\nMethods:<br \/>\n764 term-born neonates&#8217; T2w MRIs are acquired from 39 to 44 gestational weeks. Subject numbers and gender at each age are reported in Fig. 1, with M\/F indicating male\/female. All images are processed by the UNC Infant Cortical Surface Pipeline (Li G. 2015; Wang L. 2015). Topology-correct and geometry-accurate inner cortical surfaces are constructed (Li G. 2012), and then smoothed and inflated to a sphere (Fischl B. 1999) to facilitate surface registration.<br \/>\nOur method includes 3 steps.
1) We establish unbiased cortical correspondences across subjects using group-wise spherical surface registration (Yeo B.T. 2010) and then resample the registered spherical surfaces using the same mesh tessellation. 2) For each local spherical patch in atlas space, we build a dictionary, which includes not only the corresponding patches from age-matched co-registered cortical surfaces, but also the spatially neighboring comparable patches (Wu Z. 2017), to account for possible registration errors. 3) With the built dictionaries, the problem of estimating cortical folding attributes (e.g., average convexity and mean curvature) on an atlas patch becomes finding the best representation of the population folding attribute using the respective dictionary. Notably, each cortical folding attribute can be regarded as a specific view of the cerebral cortex, and they should be consistent on the atlas. Therefore, instead of representing them independently, we jointly represent them using a multi-task sparse representation with a group-wise sparsity constraint (Jalali A. 2010), where each task corresponds to a specific attribute representation. Using the above method, we not only preserve sharp cortical folding patterns, but also maintain consistency across different folding attributes on the atlas.<br \/>\nResults:<br \/>\nFig. 1 shows the atlases with color-coded average convexity and mean curvature at each week on: (a) spherical surfaces; (b) average inner surfaces. The cerebral cortex clearly undergoes considerable development from 39 to 44 gestational weeks (zoomed views are provided in (c) for closer inspection).<br \/>\nFig. 2 shows a comparison of the average atlas and our atlas. Fig. 2(a) shows the curvature patterns of the two atlases on the spherical surface and the average inner cortical surface. Clearly, our atlas has a sharper pattern. Fig. 2(b) shows the quantitative comparison.
An atlas with a sharper cortical folding pattern is expected to yield better registration accuracy when individual surfaces are aligned onto it, measured here using the correlation coefficient of the average convexity maps after registration. As can be seen, our atlases lead to better registration accuracy than the average atlases.<br \/>\nConclusions: We construct a set of neonatal cortical surface atlases for each week from 39 to 44 gestational weeks using a large dataset. Our atlases preserve sharp cortical folding patterns, which leads to better registration accuracy when aligning new subjects. We will publicly release the neonatal atlases as a complement to our UNC 4D infant surface atlas [1].<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation described the construction of cortical surface atlases of the neonatal brain from 39 to 44 gestational weeks, for use in analyzing brain development during the first postnatal weeks. Rather than building an averaged atlas, the authors adopted a patch-based sparse representation of the cortical surface that accounts for individual variability. Since this work also focuses on inter-individual and time-dependent variability, I felt there are aspects I could apply to my own research.<br
\/>\n&nbsp;<br \/>\nReferences<\/p>\n<ul>\n<li>OHBM 2018 Annual Meeting, https:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageID=3821<\/li>\n<\/ul>\n<p><strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>\u00a0<\/strong><br \/>\n<strong>Reporter<\/strong><\/td>\n<td width=\"373\">&nbsp;<br \/>\n\u85e4\u539f\u4f51\u4eae<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (Japanese)<\/strong><\/td>\n<td width=\"373\">\u30de\u30a4\u30f3\u30c9\u30d5\u30eb\u30c9\u30e9\u30a4\u30d3\u30f3\u30b0\uff1afNIRS\u3092\u7528\u3044\u305f\u904b\u8ee2\u6642\u306e\u6ce8\u610f\u72b6\u614b\u3068\u4e0d\u6ce8\u610f\u72b6\u614b\u306e\u691c\u51fa<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (English)<\/strong><\/td>\n<td width=\"373\">Mindful Driving: Detecting driver\u2019s attention and distraction using fNIRS<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">\u85e4\u539f\u4f51\u4eae\uff0c\u65e5\u548c\u609f\uff0c\u5ee3\u5b89\u77e5\u4e4b<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">The Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">OHBM2018\uff08https:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageid=3821\uff09<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td width=\"373\">Suntec Singapore<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2018\/06\/17-2018\/06\/21<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>Conference details<\/li>\n<\/ol>\n<p>From 2018\/06\/17 to 2018\/06\/21, I attended OHBM2018, held at Suntec Singapore. OHBM2018 is organized by The Organization for Human Brain Mapping, and many researchers attended to exchange information on the latest and most innovative research and to discuss results aimed at elucidating the higher functions of the human brain with various imaging devices. I attended from the 17th to the 21st. From our laboratory, Prof. \u5ee3\u5b89, Prof. \u65e5\u548c, \u4e09\u597d (M2), \u77f3\u7530 (M2), \u4e2d\u6751\uff08\u572d\uff09 (M2), \u897f\u6fa4 (M2), \u5927\u585a (M1), and \u6749\u91ce (M1) also attended.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Presentation overview<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I presented in the afternoon poster sessions (12:45~14:45) on the 19th-21st. The format was a poster presentation, with a one-hour poster slot.<br \/>\nMy presentation was titled Mindful Driving: Detecting driver\u2019s attention and distraction using fNIRS. The abstract is reproduced below.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Abstract<br \/>\n\u3010Introduction\u3011It is said that we are in a state of mind wandering for approximately 50% of the day. Mind wandering during driving, including being distracted by pedestrians, road signs, and other cars, causes accidents. While driving, our attention should be appropriately directed toward these objects, but should not be captured by them. That is, we need to perform mindful driving. One of the crucial components of mindful driving is being aware of mind wandering. To ensure that the driver avoids distraction and pays attention to the correct objects, there is a need for a driving support system that evaluates the attention status of the driver and leads their attention in the appropriate direction. In this study, a driver\u2019s mind wandering was defined from their behavior while using a driving simulator, and their brain activity was measured and investigated using functional near-infrared spectroscopy (fNIRS). \u3010Methods\u3011In this experiment, we used a dual-task paradigm, in which two different tasks are imposed simultaneously on the driving simulator to induce a state of mind wandering. The main task was driving on the simulator, and the subtask was the psychomotor vigilance task (PVT). We measured cerebral blood flow change during the dual task using fNIRS (OEG16, Spectratech). Ten men (22.6 \u00b1 1.4 years) with driving experience participated in the experiment. The measurement region comprised 16 channels on the forehead. All fNIRS measurement channels were associated with brain regions based on automated anatomical labeling. The analysis sections were defined as the sections immediately before the fastest and the slowest RT among all the recorded RTs.
We calculated the fractional amplitude of low-frequency fluctuation (fALFF), an index of local spontaneous brain activity, from the time-series data of cerebral blood flow change obtained from each channel. The data were z-transformed (zfALFF) to allow comparison across subjects. In the sections defined by the PVT, the variation of the steering angle and the zfALFF were compared. \u3010Results\u3011Comparing the fastest and slowest RT sections, the value of zfALFF in the left middle frontal gyrus was significantly higher in the fastest RT section (p &lt; 0.05). The left middle frontal gyrus is associated with selective attention and is activated when judging whether attention is focused on an object. In the section where RT is fast, it is likely that attention is not being directed to the driving task. Furthermore, a certain degree of adjustment of the steering angle is necessary to successfully steer the car through the course, and the smaller variation in the steering angle in the fastest RT section is presumed to be an indication of mind wandering. Conversely, the value of zfALFF in the right superior frontal gyrus was significantly higher in the slowest RT section (p &lt; 0.05). The right superior frontal gyrus is activated during the maintenance and switching of attention. The large variation in the steering angle in the slowest RT section suggests that the driver was focusing on the driving task in this section. These conclusions also correspond with the high zfALFF of the right superior frontal gyrus, which is associated with maintaining attention. \u3010Conclusions\u3011In this paper, we detected mind wandering while driving using a dual task, with driving as the main task and the PVT as the subtask. Cerebral blood flow change was measured using fNIRS, and zfALFF was calculated as an indicator of brain activity.
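The fALFF index used above can be sketched as follows for a single channel. This is a minimal illustration under stated assumptions: the 0.01-0.08 Hz band and the FFT-amplitude ratio are common choices in the fALFF literature, not necessarily the authors&#8217; exact pipeline.

```python
import numpy as np

def falff(signal, fs, band=(0.01, 0.08)):
    """Fractional ALFF of one channel: FFT amplitude summed over the
    low-frequency band, divided by amplitude over the whole spectrum
    (DC component excluded)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    amp = np.abs(np.fft.rfft(signal - np.mean(signal)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(amp[in_band].sum() / amp[1:].sum())

def zfalff(falff_values):
    """z-transform fALFF values (e.g., across channels) for comparison."""
    v = np.asarray(falff_values, dtype=float)
    return (v - v.mean()) / v.std()
```

A slow oscillation inside the band yields fALFF near 1, while activity concentrated at higher frequencies yields a value near 0; the z-transform then puts channels or subjects on a common scale.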
Significant differences were observed in the value of zfALFF in the left middle frontal gyrus in the fastest RT section and in the value of zfALFF in the right superior frontal gyrus in the slowest RT section. A significant difference was also observed in the variation of the steering angle between the two sections. These findings suggest that mind wandering while driving can be detected from variations in the steering angle together with brain activity in the forehead.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Q&amp;A<\/li>\n<\/ul>\n<p>I received the following questions during the presentation.<br \/>\n<strong>\u30fbQuestion <\/strong><strong>1<\/strong><br \/>\nI failed to note the questioner&#8217;s name. The question was whether the driver&#8217;s attention is necessarily directed at either the driving task or the PVT. I answered that other targets are conceivable, but since the attention state is defined here using the steering variation in the fastest or slowest PVT sections, attention is most likely directed at one of the two tasks.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbQuestion <\/strong><strong>2<\/strong><br
\/>\nI failed to note the questioner&#8217;s name. The question was why the PVT was used as the subtask. I answered that if an overly demanding task were used, driving would no longer hold up as the main task, and that we also needed a task that can be performed while driving.<br \/>\n<strong>\u30fbQuestion <\/strong><strong>3<\/strong><br \/>\nI failed to note the questioner&#8217;s name. The question was whether there were any participants or sections with fast reaction times but large steering variation. I answered that several participants fit this description, and that analysis of those sections remains future work.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbQuestion <\/strong><strong>4<\/strong><br \/>\nI failed to note the questioner&#8217;s name. The question was whether the selective attention of the dorsal attention network could be directed at anything other than driving and the PVT. I answered that this is conceivable, and that it could be examined through network analysis or dynamic analysis of the other sections.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbQuestion <\/strong><strong>5<\/strong><br \/>\nI failed to note the questioner&#8217;s name. The question was whether this amounts to an evaluation of attention rather than of mind wandering. I answered that the brain regions showing significant differences here are attention-related, but since we consider that during mind wandering attention is not directed at the objects it should be directed at, a mind-wandering state is also a plausible interpretation.<br
\/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>\u30fbThis was my second poster presentation. I felt that I was able to explain my work more smoothly than last time. However, I was keenly aware that my listening ability is still not sufficient to hold a real discussion. My research does not yet involve network analysis, but many OHBM participants have already moved on to dynamic analyses, so I want to catch up before graduation and reflect this in my master&#8217;s thesis. I will bring this experience back to my own research and to the laboratory, and work hard at my research until graduation.<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>Attended talks<\/li>\n<\/ol>\n<p>At this conference, I attended the following four presentations.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Mapping Dynamics of Emotional Brain States and Memory Consolidation: From Circuitry to Network and Behavior<br
\/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a \u00a0Shaozheng Qin Ph.D., The State Key Laboratory of Cognitive Neuroscience and Learning, IDG\/McGovern Institute for Brain Research at BNU<br \/>\nSession\u00a0 \uff1a \u00a0Cognitive &amp; Affective Neuroscience: From Circuitry to Network and Behavior<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a \u00a0Brain regions engage and disengage constantly with each other to support rapid and flexible changes in emotional states and access to memories. Conventional approaches that analyze regional brain activity and static functional connectivity patterns provide little information about transient dynamics in brain functional organization. Novel approaches are needed to investigate how transient brain dynamics contribute to human emotion and memory with rapid and flexible access to disparate aspects of information. I will present a series of task- and resting-state fMRI studies with concurrent skin conductance recording and advanced analytic approaches (i.e., K-means, HMM and network dynamics) to investigate how dynamic states of emotion-related brain circuitry evolve over time both at encoding and at rest, and how these brain dynamics contribute to emotional memory consolidation.
We found that: (1) emotion-related amygdala circuitry undergoes rapid changes in the integration and segregation of functional connectivity with other brain regions critical for attention, salience detection and emotion regulation; (2) emotion-charged reactivation of the hippocampus-based memory system at encoding enhances subsequent episodic memories; (3) re-occurrence of emotion-charged brain states at post-encoding rest predicts better episodic memories; (4) large-scale brain functional networks among neocortical regions gradually build up to support long-term memory retention after 24 hours. Altogether, our findings point toward the dynamic nature of brain emotional states and memory systems, and rapid changes in emotional memory organization with consolidation.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk presented research examining the relationship between emotion and memory using fMRI. The analyses combined methods such as dynamic analysis and k-means; I knew each method on its own, but not how they are combined in an analysis, so I found this very interesting. I want to consider carefully whether such methods should be incorporated into my own research.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Using NIRS-EEG as a clinical tool in children<br \/>\nAuthors\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a
Anne Gallagher, Université de Montréal<br \/>\nSession: Multi-Modality Symposium<br \/>\nAbstract: Functional Near Infrared Spectroscopy (fNIRS) and electroencephalography (EEG) are non-invasive neuroimaging techniques that are highly suitable for clinical and pediatric populations. Although further research and development are still needed for fNIRS-EEG to be included in standard clinical care, several applications have been developed and are currently used in the clinic. In this presentation, I will show how combined fNIRS-EEG can provide useful information in the presurgical assessment of children with epilepsy, notably for functional brain network mapping and localization of the epileptogenic zone.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation concerned simultaneous fNIRS-EEG measurement. I had been wondering whether my own work needs further motion-artifact removal, so I found it highly relevant. The speaker explained that tolerance to movement is one advantage of measuring with fNIRS and EEG, and ICA appeared to be used in the processing. Since I do not yet understand this deeply, I felt I should survey the literature on applying ICA to fNIRS data.<br \/>\n&nbsp;<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: All Resolution Inference: 
Increasing Spatial Specificity of fMRI with Valid Circular Inference<br \/>\nAuthors: Wouter Weeda, Leiden University<br \/>\nSession: ORAL SESSION: Modeling and Analysis Methods I<br \/>\nAbstract: 【Introduction】: Most neuroimaging studies identify brain activation as clusters of contiguous supra-threshold voxels, corrected for multiple comparisons using random field theory (RFT). This approach suffers from a spatial specificity paradox (Woo et al., 2014): the larger the cluster detected, the less we know about the location of activation within that cluster. This is a consequence of cluster inference, by which a detected cluster means &#8220;there exists at least one truly active voxel in the cluster&#8221; and not &#8220;all voxels in the cluster are active&#8221;. One solution to this problem is to change the cluster-forming threshold and run the RFT analysis again. However, since this is circular inference, false positives are no longer controlled, and there is no information on which threshold is optimal. We propose a solution to the spatial specificity paradox and threshold selection termed &#8216;All Resolution Inference&#8217; (ARI). 【Methods】: ARI (Goeman et al., 2011; Rosenblatt et al., submitted) is based on the Proportion of True Discoveries (PTD), a measure of the expected number of true positives in a set of hypotheses. ARI can estimate the PTD of any user-specified cluster, even after looking at the data, without losing false-positive control. That is, ARI allows a user to specify any cluster and get an estimate of the proportion of truly active voxels within that cluster.
This procedure can even be repeated to &#8220;drill-down&#8221; into a cluster to pinpoint the exact location of activation. ARI thus allows inference on the proportion of activation in all voxel sets, no matter how large or small, and however these have been selected, all from the same data. A sufficient condition for the validity of ARI is the Positive Regression Dependency on Subsets (PRDS), which has been established for fMRI statistical maps by Nichols et al., (2003). To validate ARI, we used a dataset collected by Pernet et al., (2015, openfmri.org accession ds000158). It consists of 218 subjects listening to vocal and non-vocal sounds. We used 33 subjects to show the PTD across two cluster-forming thresholds, and a different set of 66 subjects to inspect the replicability of our PTD estimates. We do so by comparing the PTD of the first analysis with the actual number of supra-threshold voxels in the second dataset. \u3010Results\u3011:Group analysis on the first set of 33 subjects (see Figure 1) of the Vocal &gt; Non-vocal contrast (Z&gt;3.2), showed activation in the right temporal cortex (TC, 6907 voxels, PTD=74.9%) and precentral gyrus (PG, 249 voxels, PTD=6.0%). In the left TC we found three active regions: Superior Temporal Gyrus (STG, 4607 voxels, PTD=73.9%), Inferior Frontal Gyrus (IFG, 385 voxels, PTD=0%), and the amygdala (168 voxels, PTD=0%). For Z&gt;4, the large cluster in the right TC separated into smaller clusters, all with increased PTD values. The smaller clusters in the left TC had increased PTD values, except the amygdala and IFG which had a consistent PTD of 0%. Analysis of the next 66 subjects showed 90.5% supra-threshold voxels (STV) in the right HG\/STG (first dataset PTD=74.9%). For the other regions results were as follows: left HG\/STG=98.2% STV (PTD=73.9%), left IFG = 45.5% STV (PTD=0%), right PG = 98.4% STV (PTD=6.0%), left amygdala 61.3% STV (PTD=0%). 
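The PTD quantity at the heart of ARI can be made concrete with a toy simulation. This is only an illustration of what the PTD measures, not of the ARI estimator itself (which bounds the PTD while controlling false positives); all values below are made up:

```python
# Toy illustration of the Proportion of True Discoveries (PTD):
# the fraction of voxels in a detected cluster that are truly active.
# This is NOT the ARI estimator, just the quantity it concerns.

def clusters_above(zmap, thresh):
    """Return contiguous runs of indices with z > thresh."""
    runs, current = [], []
    for i, z in enumerate(zmap):
        if z > thresh:
            current.append(i)
        elif current:
            runs.append(current)
            current = []
    if current:
        runs.append(current)
    return runs

def ptd(cluster, truth):
    """Proportion of truly active voxels within the cluster."""
    return sum(truth[i] for i in cluster) / len(cluster)

# 1D "statistical map": one broad supra-threshold blob, but only
# its centre (voxels 4-7) is truly active.
zmap  = [0.1, 0.5, 3.5, 3.6, 4.2, 4.5, 4.4, 4.1, 3.4, 3.3, 0.2]
truth = [0,   0,   0,   0,   1,   1,   1,   1,   0,   0,   0]

low  = clusters_above(zmap, 3.2)   # one large cluster, low PTD
high = clusters_above(zmap, 4.0)   # smaller cluster, higher PTD
print(len(low[0]), ptd(low[0], truth))    # 8 voxels, PTD = 0.5
print(len(high[0]), ptd(high[0], truth))  # 4 voxels, PTD = 1.0
```

Raising the cluster-forming threshold shrinks the cluster and raises its PTD, mirroring the Z&gt;3.2 versus Z&gt;4 comparison in the abstract.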
[Figure 1] Activated clusters with associated PTD values for two cluster-forming thresholds (Z&gt;3.2 and Z&gt;4). 【Conclusions】: We propose ARI as a remedy for the spatial specificity paradox. The first innovation is replacing the &#8220;active\/inactive cluster&#8221; inference by inference on the proportion of activation within the cluster. The second innovation is estimating this proportion for any cluster, even one selected after inspecting the data. Results show that ARI produces sensible and reproducible estimates of the PTD across different cluster-forming thresholds. Validation analysis shows that regions with a high PTD tend to have a high percentage of supra-threshold voxels in the validation set, and vice versa. The PTD bounds are informative, provided that the region is large enough. To conclude, ARI is a flexible and more informative alternative to cluster-wise analysis of fMRI data.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation concerned activation inference in MRI, addressing the problem that when a cluster is large one cannot tell where within it the activation lies. The method evaluates activation using sub-cluster voxel patterns, and, because not all voxels in a cluster are truly active, the evaluation also made use of four voxel sets. I am not an MRI specialist, but I learned the importance of asking not only whether a region is active, but also these finer-grained questions.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: A generative model for inferring whole-brain effective connectivity<br \/>\nAuthors: Stefan Frässle, Ekaterina Lomakina, Lars Kasper, Zina Manjaly, Alex Leff, Klaas Pruessmann, Joachim Buhmann, Klaas Stephan<br \/>\nSession: ORAL SESSION: Modeling and Analysis Methods II<br \/>\nAbstract: 【Introduction】: Developing whole-brain models that infer the effective (directed) connectivity among neuronal populations from neuroimaging data represents a central challenge for computational neuroscience. Dynamic causal models (DCMs; Friston et al., 2003) of functional magnetic resonance imaging (fMRI) data have been used frequently for inferring effective connectivity, but are presently restricted to small graphs (up to 10 regions) to keep model inversion feasible. Here, we introduce regression DCM (rDCM; Frässle et al., 2017; Frässle et al., under review) as a novel variant of DCM for fMRI that enables whole-brain effective connectivity analyses.
\u3010Methods\u3011:In brief, rDCM converts the numerically costly estimation of coupling parameters in differential equations of a linear DCM in the time domain into an efficiently solvable Bayesian linear regression in the frequency domain. This necessitates several modifications to the original DCM implementation, including: (i) translation from time to frequency domain by exploiting the differential property of the Fourier transform, (ii) linearization of the hemodynamic forward model, (iii) mean-field approximation (across regions), and (iv) use of a Gamma prior on noise precision. These changes allow us to derive a highly efficient variational Bayesian update scheme. Additionally, by incorporating sparsity constraints, rDCM does not require any a priori assumptions about the network&#8217;s connectivity structure but prunes fully (all-to-all) connected networks as part of model inversion. \u3010Results\u3011:First, we used simulations to test how accurately rDCM could recover the known network architecture (i.e., the connections present in the data-generating model) for large networks with 66 regions. We mapped sensitivity and specificity for various settings of the signal-to-noise ratio of the fMRI data and the a priori assumptions about the sparseness of the network (Fig. 1). These simulations suggest that rDCM is a suitable tool for inferring sparse effective connectivity patterns. In previous work, we already demonstrated face validity of the approach with respect to parameter recovery and model selection (Fr\u00e4ssle et al., 2017). We then applied rDCM to several empirical fMRI datasets. We showed that it is feasible to infer effective connection strengths from fMRI data using a network with more than 100 regions and 10,000 connections. Sparse rDCM was applied to fMRI data from a motor paradigm (visually paced fist closings) (Fig. 2A). Model inversion resulted in a sparse graph with biologically plausible connectivity (Fig. 
2B, left) and driving input patterns (Fig. 2B, right): we observed pronounced driving inputs to and functional integration among motor regions (precentral, SMA, cerebellum), visual regions (cuneus, occipital), regions associated with the somatosensory and proprioceptive aspects of the task which are essential for visuomotor integration (postcentral, parietal), and frontal regions engaging in top-down control. The inferred effective connectivity pattern and its sparseness become most apparent when visualized as a connectogram (Fig. 2C) or projected onto the whole-brain volume. Notably, inversion of this whole-brain model was computationally highly efficient and took only 10 minutes on standard hardware. \u3010Conclusions\u3011:Regression DCM allows one to infer effective connectivity, with connection-specific estimates, in whole-brain networks from fMRI data. Importantly, in contrast to functional connectivity measures, effective connectivity discloses the directionality of functional couplings. 
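The core move of rDCM, as described above, is to replace costly numerical model inversion with a linear regression on the state equation. The sketch below shows that idea in its simplest possible form: a noise-free, time-domain toy on a hypothetical 2-region system (the actual method works in the frequency domain, with a hemodynamic model, variational Bayes, and sparsity priors):

```python
# Toy version of the regression idea behind rDCM: for a linear system
# dx/dt = A x, the coupling matrix A can be recovered by ordinary
# least squares on finite differences, instead of numerical model
# inversion. rDCM performs the analogous regression in the frequency
# domain; this sketch is time-domain and noise-free, 2 regions only.

def simulate(A, x0, dt, steps):
    """Euler-integrate dx/dt = A x."""
    xs = [list(x0)]
    for _ in range(steps):
        x = xs[-1]
        xs.append([x[i] + dt * sum(A[i][j] * x[j] for j in range(len(x)))
                   for i in range(len(x))])
    return xs

def estimate(xs, dt):
    """Least-squares fit of A from (x[t+1]-x[t])/dt ~ A x[t],
    solved row by row via 2x2 normal equations."""
    n = len(xs[0])  # assumed n == 2 for the explicit solve below
    A_hat = []
    for i in range(n):
        G = [[sum(x[p] * x[q] for x in xs[:-1]) for q in range(n)]
             for p in range(n)]
        b = [sum((xs[t + 1][i] - xs[t][i]) / dt * xs[t][p]
                 for t in range(len(xs) - 1)) for p in range(n)]
        det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
        A_hat.append([(b[0] * G[1][1] - b[1] * G[0][1]) / det,
                      (b[1] * G[0][0] - b[0] * G[1][0]) / det])
    return A_hat

A_true = [[-0.5, 0.2], [0.0, -0.3]]   # made-up 2-region coupling matrix
xs = simulate(A_true, [1.0, 0.5], 0.01, 2000)
A_hat = estimate(xs, 0.01)
print(A_hat)  # recovers A_true (the toy data are noise-free)
```

Because the toy data are generated by the same linear model that the regression assumes, recovery is exact up to floating-point error; real fMRI data add noise, the hemodynamic convolution, and driving inputs, which is what the full method handles.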
We anticipate that rDCM will find useful application in connectomics (e.g., enabling graph theoretical approaches to effective connectivity; Bullmore &amp; Sporns, 2009) and could offer tremendous opportunities for clinical application.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation extended regression DCM beyond the usual 8-10 regions to the whole brain. The authors tested how accurately rDCM with a large 66-region network could recover a known network architecture, and reported that the results suggest rDCM is an effective tool. Since network-based modeling is one possible direction for my own future research, I felt that a closer investigation is needed.<br \/>\n&nbsp;<br \/>\nReferences<\/p>\n<ul>\n<li>OHBM2018, https:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageid=3821<\/li>\n<\/ul>\n<p><strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>&nbsp;<\/strong><br \/>\n<strong>Reporter<\/strong><\/td>\n<td width=\"373\">&nbsp;<br \/>\n大塚友樹<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (Japanese)<\/strong><\/td>\n<td width=\"373\">瞑想中脳機能ネットワークのグラフ理論特性における個人内変動<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (English)<\/strong><\/td>\n<td width=\"373\">Intra-individual variation in graph theoretical properties of functional networks during meditation<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">大塚友樹, 日和悟, 廣安知之<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">OHBM2018<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td width=\"373\">Suntec City<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2018\/06\/17-2018\/06\/21<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>Details of the conference<\/li>\n<\/ol>\n<p>From 2018\/06\/17 to 2018\/06\/21, I attended the 2018 OHBM annual meeting held in Singapore. The meeting is an international conference organized by the Organization for Human Brain Mapping that brings together researchers from diverse backgrounds working on mapping the organization and function of the human brain, with the aim of fostering communication and education among these scientists<sup>(1)<\/sup>. From our laboratory, Prof. Hiroyasu, Prof. Hiwa, the M2 students Ishida, Miyoshi, Nishizawa, Fujiwara, and Nakamura, and the M1 student Sugino also attended.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Presentation overview<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I took part in the poster sessions on June 19 and 20 and the poster reception on June 21. The presentation format was a poster, and I discussed it with attendees for a total of three hours.<br \/>\nI presented &#8220;Intra-individual variation in graph theoretical properties of functional networks during meditation&#8221;. The abstract is given below.<\/p>\n<table>\n<tbody>\n<tr>\n<td
width=\"529\">Introduction<br \/>\nStudies on the function of nervous system in mindfulness meditation have not considered intra-individual variations in brain state. These studies assume that intra-individual variation in brain activity is small compared with inter-individual variation. However, as shown in Fig.1, in a brain region where intra-individual variation is significantly larger than inter-individual variation, result reproducibility would be poor because group analyses are strongly influenced by intra-individual variation. Here inter- and intra-individual variation during meditation were investigated for evaluating the reliability of estimates on the brain functional network involved in meditation when the same subject was repeatedly measured on different days.<br \/>\n&nbsp;<br \/>\nMethods<br \/>\nTwenty-one healthy adult beginner meditators (22.9 \u00b1 2.5 years; 6 females, 15 males) performed breath-counting meditation for 5 minutes, which involves attentively counting breaths, following a 5-minute resting state in the fMRI scanner. The same task was performed 10 times on different days in 3 subjects. The functional connectivity matrix between brain regions was calculated for 116 regions defined by automated anatomical labeling. Degree centrality, betweenness centrality, and eigenvector centrality were calculated using graph theoretical analysis. The unbiased variance values in each graph theoretical feature value were calculated to evaluate variations in the brain functional network and were compared between the 21 participants, as well as among the measurements of each of the 3 participants tested on 10 different days.<br \/>\n&nbsp;<br \/>\nResults<br \/>\nA test for equality of variance was performed on the graph theoretical index during meditation for the entire group and the three subjects, A, B, and C. 
The brain regions where intra-individual variation was smaller than inter-individual variation are presented in Table 1 for each of the three indicators. The betweenness centrality of the Pallidum_R had small intra-individual variation in the 3 repeatedly tested subjects. This brain region may function as a hub during breath-counting meditation, because the Pallidum_R is involved in motor function and decision making [1] and betweenness centrality is an indicator of hub regions in a network. In contrast, the betweenness centrality of the Heschl_R and Vermis_6 areas, as well as the eigenvector centrality of the Putamen_L area, had large intra-individual variation; it is necessary to ensure the test group is large enough for a group analysis of network characteristics in these regions. For the remaining brain regions, there was no significant difference between intra- and inter-individual variation, so individual fluctuations in those regions are not critical for group analyses.<br \/>\n&nbsp;<br \/>\nConclusion<br \/>\nHere, we investigated intra- and inter-individual variation in brain functional networks during meditation using graph theoretical indicators. Because its intra-individual variation is significantly smaller than its inter-individual variation, the betweenness centrality of the Pallidum_R makes it a potentially useful region for capturing brain states during meditation; it also functions stably as a hub of the brain functional network across repeated measurements in beginner meditators.
However, it is necessary to have appropriately large groups for group analyses of these brain regions, because the intra-individual variation in the betweenness centrality of the Heschl_R and Vermis_6 areas and in the eigenvector centrality of the Putamen_L is larger than the inter-individual variation.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Questions and answers<\/li>\n<\/ul>\n<p>I received the following questions during the presentation.<br \/>\n&nbsp;<br \/>\n<strong>Question 1<\/strong><br \/>\nHow many minutes of meditation were performed? I answered that it was five minutes.<br \/>\n&nbsp;<br \/>\n<strong>Question 2<\/strong><br \/>\nHow was the analysis carried out? I answered that each subject's FC matrix is embedded into two dimensions using MDS, and the variability is examined from the distances between the FC matrices.<br \/>\n&nbsp;<br \/>\n<strong>Question 3<\/strong><br \/>\nAre negative values taken into account when computing the correlations? I answered that they are not, and that this will need to be addressed in future work.<br \/>\n&nbsp;<br \/>\n<strong>Question 4<\/strong><br \/>\nWere the FC matrices within an individual similar? I answered that within-individual similarity is high.<br \/>\n&nbsp;<br \/>\n<strong>Question 5<\/strong><br \/>\nWhat is the p-value used when drawing the ellipses? I answered that the p-value is 0.05.<br \/>\n&nbsp;<br \/>\n<strong>Question 6<\/strong><br \/>\nWas the MDS analysis based on another study? I answered that it followed the paper &#8220;Functional Brain Networks Are Dominated by Stable Group and Individual Factors, Not Cognitive or Daily Variation&#8221;.<br \/>\n&nbsp;<br \/>\n<strong>Question 7<\/strong><br \/>\nWhy was the brain divided into 116 regions with AAL? I answered that partitioning the whole brain with AAL makes it easier to handle the brain as a whole.<br \/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>This was my first poster presentation in English, and I felt the difficulty of communicating in English. I was nervous on the first day, but by the second and third days I no longer hesitated to explain in English. I often could not catch the questions and had to ask for them to be repeated many times, so I intend to put more effort into studying English. I also had the impression that many studies at this meeting addressed intra-individual variability, my own research theme, which made me feel that my research is at an internationally competitive level.<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>Attended presentations<\/li>\n<\/ol>\n<p>At this conference, I attended the following two presentations.<br
\/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Matrix-normal models for fMRI analysis<br \/>\nAuthors: Michael Shvartsman, Mikio Aoi, Narayanan Sundaram, et al.<br \/>\nSession: ORAL SESSION: Modeling and Analysis Methods I<br \/>\nAbstract: Introduction: Early approaches to fMRI data analysis used the univariate general linear model (GLM), but the fundamentally spatial nature of fMRI limited their success, driving the development of multivariate methods organized under the umbrella of Multi-Voxel Pattern Analysis (MVPA; Norman et al., 2006). Unlike univariate methods, MVPA has not been organized in a unified theoretical framework. Rather, methods are developed independently on a per-problem basis, e.g. dimension reduction (SRM: Chen et al., 2015; HTFA: Manning et al., 2017), correlation estimation (RSA: Kriegeskorte et al., 2008; BRSA: Cai et al., 2016; ISFC: Simony et al., 2016), and multivariate regression (Allefeld and Haynes, 2014). Alongside the success of these methods came new challenges for interpretation and hypothesis testing (Allefeld et al., 2015; Cai et al., 2016; Schreiber and Krekelberg, 2013). Furthermore, the lack of a unified theoretical perspective has led to a lack of consistency even in addressing identical problems.<br \/>\nWe propose matrix-variate normal (MN) models as a unifying framework for fMRI analysis. MN models combine explicit spatiotemporal modeling with the interpretability of probabilistic generative models.
They include as special cases many existing methods: PCA, generalized CCA, the GLM, and MANOVA, as well as the fMRI-specific methods noted above, including SRM and (H)TFA, ISFC, and (B)RSA. The shared structure enables the creation of a modeling toolkit that admits flexible prototyping of spatiotemporal analysis methods, which we use to develop a number of new method variants that yield advantages relative to the original formulations.<br \/>\nMethods: MN models are a tool for multilinear data analysis that model separable spatiotemporal noise. In the MN view, multivariate regression and dimension reduction models are special cases of the same model, namely one that models the brain as a spatial projection into a lower-dimensional time series with an added spatiotemporally structured residual.<br \/>\nWe leverage this perspective to develop an estimation framework using Python. Our implementation is flexible in the residual covariance specification, only requiring efficient implementations of Σ<sup>-1<\/sup>X and log|Σ| per covariance Σ, and we provide implementations of a number of popular covariance structures. Additionally, estimation by gradient descent on automatically derived gradients is provided by TensorFlow, giving users the ability to explore multiple models easily.<br \/>\nResults: We develop and explore the behavior of three new methods: MN-RSA, MN-SRM and MN-ISFC. Since there are no agreed-upon real-data evaluation metrics for latent correlation estimation methods like RSA or ISFC, we show their performance on synthetic data. In this setting, MN-RSA outperforms BRSA (a state-of-the-art RSA method; Cai et al., 2016) by as much as 6x in terms of RMSE.
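The "separable spatiotemporal noise" underlying the MN family rests on the Kronecker identity vec(A Z Bᵀ) = (B ⊗ A) vec(Z): if Z has i.i.d. normal entries, then X = A Z Bᵀ is matrix-normal with row covariance AAᵀ and column covariance BBᵀ. A small numeric check of the identity, with made-up 2×2 matrices:

```python
# Numeric check of vec(A Z B^T) = (B kron A) vec(Z), the identity
# behind the matrix-normal model's separable covariance structure.
# All matrices here are made-up 2x2 examples.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron(A, B):
    """Kronecker product of two matrices."""
    return [[A[i][j] * B[p][q]
             for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

def vec(A):
    """Column-major vectorisation."""
    return [A[i][j] for j in range(len(A[0])) for i in range(len(A))]

A = [[1.0, 2.0], [0.0, 1.0]]   # "spatial" factor
B = [[0.5, 1.0], [1.0, 3.0]]   # "temporal" factor
Z = [[1.0, -1.0], [2.0, 0.5]]  # i.i.d.-noise placeholder

lhs = vec(matmul(matmul(A, Z), transpose(B)))
KBA = kron(B, A)
v = vec(Z)
rhs = [sum(KBA[i][j] * v[j] for j in range(4)) for i in range(4)]
print(lhs == rhs)  # True: both sides agree exactly
```

This is why separable covariances are cheap: one never needs to form the full (space × time) covariance, only the two small factors.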
MN-ISFC shows performance comparable to ISFC in terms of RMSE, but with the further advantage that MN-ISFC is guaranteed to return valid (symmetric positive definite) covariance matrix estimates and can use our flexible noise modeling infrastructure.<br \/>\nWe validate MN-SRM (a hyperalignment method) on real data: our ECM algorithm estimates far fewer parameters than SRM by integrating out the projection matrices instead of the shared time series. As a result, we show modest improvements in out-of-sample reconstruction relative to SRM on the sherlock and raider datasets (Chen et al., 2016; Haxby et al., 2011).<br \/>\nConclusions: We have provided a unified theoretical perspective for understanding a number of multivariate methods for fMRI analysis, deriving relationships between existing state-of-the-art methods as special cases of an MN framework. This perspective has allowed us to develop a software toolkit for model prototyping, which enabled us to develop new method variants that show improved performance relative to the previous state of the art.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>While the GLM is the standard approach to fMRI data analysis, this presentation proposed matrix-variate normal (MN) models for fMRI analysis. I do not use GLM analyses in my own research, but I do work with matrices, and this talk taught me that there is a wide range of matrix-based analysis methods, which I consider a valuable insight.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td
width=\"529\">Presentation title: Visibility graphs for fMRI data: multiplex temporal graphs and their spatiotemporal modulations<br \/>\nAuthors: Daniele Marinazzo1, Sebastiano Stramaglia2, Speranza Sannino3, Lucas Lacasa4, Anees Abrol5, Vince Calhoun6<br \/>\nSession: ORAL SESSION: Modeling and Analysis Methods II<br \/>\nAbstract: Introduction: Visibility algorithms map time series into graphs, such that the tools of graph theory can be used for the characterization of time series. This approach has proved a convenient tool and visibility graphs have found applications across several disciplines.<br \/>\nHere we test their application to fMRI time series, following two main motivations, namely that (i) this approach allows one to simultaneously capture and process relevant aspects of both local and global dynamics in an easy and intuitive way, and (ii) it provides a suggestive bridge between time series and network theory.<br \/>\nMethods: The procedure to build up a visibility graph is extensively described in (Lacasa 2015).<br \/>\nGiven a time series of N data points, any two time points i and j at which the measured quantity takes the values y_i and y_j, respectively, will have visibility, and consequently will become two connected nodes in the associated natural visibility graph, if every other data point y_k placed between them fulfills the condition:<br \/>\n&nbsp;<br \/>\ny_k &lt; y_i + (y_j - y_i)(k - i)\/(j - i).<br \/>\n&nbsp;<br \/>\nIn the presence of a multivariate set of M series, each of these yields a different visibility graph, giving a multilayer graph with M layers.
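The visibility condition above translates directly into code. The following is a minimal sketch (a naive O(N^3) implementation, run on a toy series rather than fMRI data) of building the natural visibility graph's adjacency matrix:

```python
import numpy as np

def natural_visibility_graph(y):
    """Adjacency matrix of the natural visibility graph of series y:
    nodes i < j are linked iff every point k between them lies strictly
    below the straight line joining (i, y_i) and (j, y_j)."""
    n = len(y)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)                       # intermediate points
            line = y[i] + (y[j] - y[i]) * (ks - i) / (j - i)
            if np.all(y[ks] < line):                        # vacuously true if adjacent
                A[i, j] = A[j, i] = True
    return A

y = np.array([3.0, 1.0, 2.0, 0.5, 4.0])   # toy series
A = natural_visibility_graph(y)
degrees = A.sum(axis=0)                   # node degrees of the visibility graph
```

Stacking one such adjacency matrix per ROI time series then gives the multilayer graph the abstract describes.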
A simple way of measuring the interdependence across time series mapped on a graph is the interlayer mutual information.<br \/>\n&nbsp;<br \/>\nAs an application to resting state fMRI data, we used the public dataset described in (Poldrack 2016), containing resting state fMRI data from 121 healthy controls, 50 individuals diagnosed with schizophrenia, 49 individuals diagnosed with bipolar disorder and 40 individuals diagnosed with ADHD. After processing as described in (Sannino 2017), we averaged the signal in 278 regions of interest (ROIs), and we assigned each of these ROIs to one of the 9 resting state networks (7 cortical networks, plus subcortical regions and cerebellum). Figure 1 sketches the procedure used.<br \/>\nFurthermore, we explored the possibility of viewing dynamic (time-resolved) functional connectivity (DFC) in the visibility framework. The visibility networks have a modular structure, which is an indication of different temporal regimes. DFC can then be seen in the visibility framework as the comparison of the temporal networks, taking their modular structure into account, by means of the Partition Distance Mutual Information. For this test we used a large dataset of 7000 subjects, divided into 28 groups of 350 (Abrol 2017).<br \/>\nResults: Figure 2 reports the results from the four groups. The intrinsic connectivity network called Limbic in Yeo&#8217;s parcellation is the smallest one, but nonetheless has a low interlayer mutual information compared to the other networks for all the clinical groups.<br \/>\nThis latter network showed the clearest differentiation in terms of the average interlayer mutual information among the four clinical groups. This evidence was assessed by means of a multivariate response test with age of the subjects and framewise displacement as covariates.
The p-value of 0.005 was corrected for multiple comparisons using the Bonferroni-Holm criterion.<br \/>\nWe also used shift functions and Kolmogorov-Smirnov tests to visualize the difference between two distributions (Sannino 2017).<br \/>\nThe visibility version of time-resolved functional connectivity is also promising, having a lower group-level variance with respect to classic FC. Singular value decomposition revealed that both FC patterns vary along one dimension, but that the variance explained by the first component for visibility FC is almost 100% (versus 66% for classical FC), indicating that temporal variability is efficiently captured.<br \/>\nConclusions:<br \/>\nWe present visibility graphs as an intuitive and robust way to describe multivariate time series dynamics with the tools of graph theory, motivating their application to the spatial and temporal variability of fMRI data.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation mapped time series onto graphs so that the tools of graph theory can be used to characterize them. The approach captures and processes both the relevant local and global aspects of the dynamics, serving as a bridge between time series and network theory. Since I use graph-theoretical features myself, I expect to be able to apply this to my future research.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td 
width=\"529\">Presentation title: Same Data &#8211; Different Software &#8211; Different Results? Analytic Variability of Group fMRI Results.<br \/>\nAuthors: Alexander Bowring1, Thomas Nichols1, Camille Maumet2<br \/>\nSession: ORAL SESSIONS: Informatics<br \/>\nAbstract: Introduction: A plethora of tools and techniques are now available to process and model fMRI data. However, this &#8216;methodological plurality&#8217; has come with a drawback. Application of different analysis pipelines (Carp, 2012), alterations in software version (Glatard, 2015), and even changes in operating system (Gronenschild, 2012) have all been shown to cause variation in the results of a neuroimaging study. This high analytic flexibility has been pinpointed as a key factor that can lead to increased false positives (Ioannidis, 2005) and, compounded with a lack of data sharing, to irreproducible research findings (Poldrack, 2017).<br \/>\nIn this work, we seek to understand how the choice of software package impacts analysis results. We reproduce the results of three published neuroimaging studies (Schonberg, 2012; Moran, 2012; Padmanabhan, 2011) with publicly available data within the three main neuroimaging software packages: AFNI, FSL and SPM, using parametric and nonparametric inference. All information on how to process, analyze, and model each dataset was obtained from the publication.
We make a variety of comparisons to assess the similarity of our results across both software packages and choice of inference method.<br \/>\nMethods: We reanalysed data from three published fMRI studies and attempted to replicate the result for the principal effect depicted in the main figure of each publication within the three packages. The dataset associated with each study was obtained from the OpenfMRI (Poldrack, 2015) database (ds000001, R: 2.0.4; ds000109, R: 2.0.2; ds000120, R: 2.0.4).<br \/>\nPrior to the analyses, we determined a number of processing steps to be included in all of our reproductions, for example, inclusion of six motion regressors in the analysis design matrix to remove motion-related artefacts. Although this meant deviating from an exact reproduction of a publication&#8217;s analysis, these steps were included to maximise comparability. Excluding these procedures, we endeavoured to choose the analysis pipeline within each package most consistent with the publication, given the limitations of the software. Scripts were written to carry out the analyses in each package and, for FSL and SPM, to export the group-level results as NIDM-Results packs.<br \/>\nFor each study, the activation maps were uploaded to NeuroVault (Gorgolewski, 2015). We applied three quantitative comparison methods: Bland-Altman plots, assessing differences in the magnitudes of activations between the unthresholded group T-statistic maps; Dice statistics, comparing the locations of activation in the FWE-thresholded maps; and Euler characteristics, computed for each software&#8217;s group T-statistic map to characterize differences in the topological properties of the thresholded images.
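Of the comparison methods listed, the Dice statistic is the simplest to sketch. The toy example below, with made-up binary vectors standing in for FWE-thresholded maps from two packages, shows the computation; it is not the study's pipeline or data:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary activation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical suprathreshold voxels (1 = active) from two packages.
map_fsl = np.array([1, 1, 1, 0, 0, 1, 0, 0])
map_spm = np.array([1, 1, 0, 0, 1, 1, 0, 0])
overlap = dice(map_fsl, map_spm)
```

A Dice value of 1 means identical thresholded maps and 0 means no overlapping activation, which is why the 54% inter-software figure reported below reads as poor agreement.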
Comparisons were made both between software packages and, within each package, between the parametric and nonparametric inference results.<br \/>\nResults: Figures A-E present comparisons of the group-level results in each package for reproductions of the main contrast &#8216;false belief &gt; false photograph&#8217; from the publication associated with the ds000109 dataset. Group-level inference was conducted using a cluster-forming threshold of p &lt; 0.005 and an FWE-corrected clusterwise threshold of p &lt; 0.05. While qualitatively the regions of activation determined in the thresholded images are similar, the comparisons display striking differences across software, as well as between parametric and nonparametric inference within FSL.<br \/>\nConclusions: We have found a disappointing level of agreement between software packages. While the general pattern of activations found was similar, the best inter-software Dice overlap was 54% (intra-software parametric-vs-nonparametric overlaps were better, e.g. 97% for SPM).
This work supports the need for open sharing of data, and the importance of understanding the fragility of one&#8217;s results under the choice of software used.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation examined how the choice of software package affects analysis results. The software compared were the three major neuroimaging packages: AFNI, FSL, and SPM. The takeaway was that it is important to understand how fragile one's results are with respect to the software used. I learned that reproducibility can vary not only with within-subject variability but also with the software chosen for the analysis.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Presentation title: Validity of summary statistics-based mixed-effects group fMRI<br \/>\nAuthors: Camille Maumet1, Thomas Nichols2<br \/>\nSession: ORAL SESSION: Modelling and Analysis Methods II<br 
\/>\nAbstract: Introduction: Statistical analysis of multi-subject functional Magnetic Resonance Imaging (fMRI) data is traditionally done using either: 1) a mixed-effects general linear model (MFX GLM), where within-subject variance estimates are used and incorporated into per-subject weights, or 2) a random-effects GLM (RFX GLM), where within-subject variance estimates are not used. Both approaches are implemented and available in major neuroimaging software packages, including SPM (MFX analysis; 2nd-level statistics), FSL (FLAME; OLS) and AFNI (3dMEMA; 3dttest++). While MFX GLM provides the most efficient statistical estimate, its properties are only guaranteed in large samples, and it has been shown that RFX GLM is a valid alternative for one-sample group analyses in fMRI [1]. We recently showed that MFX GLM for image-based meta-analysis could lead to invalid results in small samples. Here, we investigate whether this issue also affects group fMRI.<br \/>\nMethods:<br \/>\nThe GLM can be expressed as Y = X\u03b2 + \u03b5, where Y is the N-vector of subject-level contrast estimates, X the design matrix, \u03b2 the group parameter to estimate and \u03b5 the random error. In group fMRI, the error term has two contributions, from within- and between-subject variance.<br \/>\nMFX GLM. Using within-subject variance estimates requires a weighted least squares (WLS) approach, where the group parameter \u03b2 is a weighted average of the subject-level contrasts. The weights are inversely proportional to the sum of the within- and between-subject variances. In practice, however, those weights are unknown and have to be estimated from the data, leading to feasible generalised least squares (FGLS). FGLS is asymptotically efficient but its finite sample properties are unknown [2].
We used FSL&#8217;s &#8216;FLAME 1&#8217; FGLS, which uses maximum likelihood to estimate the between-subject variance, computing a T-statistic compared to a Student distribution with N-1 degrees of freedom (DF) [3].<br \/>\nRFX GLM. Under the assumption that the within-subject variance is constant or negligible in comparison to the between-subject variance, the weights above are equal and the GLM can be estimated with ordinary least squares (OLS), with \u03b2 estimated as the average of the subject-level contrasts. We used SPM&#8217;s 2nd-level one-sample model, computing a T-statistic also compared to a Student distribution with N\u22121 DF [4]. OLS p-values are exact for any sample size, in contrast to FGLS p-values, which are only asymptotically valid [2].<br \/>\nWe used Monte Carlo simulations to investigate the validity of MFX and RFX GLMs under varying degrees of within-subject variance heteroscedasticity. Within-subject variances took on 2 values: a &#8216;good&#8217; value and &#8216;high&#8217; values of 2, 4, 8 &amp; 16x the good value; we considered 4%, 20%, 40%, 80% or 96% of the subjects to have the high values. We fixed the mean within-subject standard error to be equal to the between-subject variance. We assumed 25 subjects per group and 1000 independent time points per subject. Accuracy was assessed by comparing FSL &amp; SPM distributions of -log10 P-values to Monte Carlo -log10 P-values based on 10^6 realisations.<br \/>\nResults:<br \/>\nFig. 1 presents deviation from theoretical P-values with varying percentage and intensity of high intra-subject variance. For low-intensity heteroscedasticity (&lt;= 2x), MFX GLM is valid but becomes increasingly invalid in the presence of strong and prevalent high-variance subjects. RFX GLM is valid in all settings but displays some conservativeness in the presence of strong heteroscedasticity.<br \/>\nConclusions:<br \/>\nHere we investigated the validity of RFX and MFX GLMs in the presence of varying within-subject variance.
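The flavor of such a Monte Carlo validity check can be sketched for the OLS (RFX-style) estimator alone. The simulation below uses made-up settings (25 subjects, 20% of them with 4x within-subject noise, and a null group effect) and summarizes validity as a false-positive rate rather than the -log10 P-value comparison used in the abstract; it is not the FSL/SPM pipeline described there:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_sim = 25, 2000
t_crit = 1.711  # approximate one-sided 5% critical value of Student t, df = 24

# Heteroscedastic within-subject noise: 20% of subjects get 4x the 'good' std.
sd_within = np.where(np.arange(n_sub) < n_sub // 5, 4.0, 1.0)

rejections = 0
for _ in range(n_sim):
    # Subject-level contrasts = between-subject effect + within-subject error;
    # the true group mean is zero, so every rejection is a false positive.
    y = rng.standard_normal(n_sub) + sd_within * rng.standard_normal(n_sub)
    t_stat = y.mean() / (y.std(ddof=1) / np.sqrt(n_sub))  # OLS one-sample T
    if t_stat > t_crit:
        rejections += 1
fpr = rejections / n_sim  # expected to sit near (or just below) the nominal 0.05
```

A valid procedure keeps this rate at or below the nominal level; the abstract's finding is that the OLS route passes such checks in all its settings while the FGLS route does not.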
As previously shown in the literature [1], we observed that RFX GLM is robust to the presence of heteroscedasticity. More surprisingly, MFX GLM was invalid in the presence of high variations in within-subject variances. More work is needed to investigate which of these settings is closest to the patterns present in real fMRI data. In the meantime, we recommend RFX GLM when working with small samples.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation addressed the validity of RFX and MFX GLMs in the presence of within-subject variability. The validity of the two models was evaluated while varying the degree of heteroscedasticity of the within-subject variance, and MFX GLM was shown to be invalid when within-subject variability is high. Since within-subject variability affects the analysis method, I concluded that it is important to take it into account.<br \/>\n&nbsp;<\/p>\n<p>References<\/p>\n<ul>\n<li>OHBM 2018 Annual Meeting, https:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageID=3821<\/li>\n<\/ul>\n<p><strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">\u4e09\u597d\u5de7\u771f<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (Japanese)<\/strong><\/td>\n<td 
width=\"373\">\u4f4e\u6b21\u5143\u7a7a\u9593\u306b\u304a\u3051\u308b\u7791\u60f3\u4e2d\u8133\u72b6\u614b\u306e\u30de\u30c3\u30d4\u30f3\u30b0<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title (English)<\/strong><\/td>\n<td width=\"373\">Mapping the brain state behavior during meditation in low dimensional feature space<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">\u4e09\u597d\u5de7\u771f, \u65e5\u548c\u609f, \u5ee3\u5b89\u77e5\u4e4b<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">24th Annual Meeting of the Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td width=\"373\">Suntec Singapore<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2018\/06\/17-2018\/06\/21<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ol>\n<li>Conference details<\/li>\n<\/ol>\n<p>From 2018\/06\/17 to 2018\/06\/21, I attended the 24th Annual Meeting of the Organization for Human Brain Mapping, held at Suntec Singapore in Singapore. This international conference, organized by the Organization for Human Brain Mapping, is held to learn about the latest research on human brain mapping. Through poster presentations, participants can discuss with experts in the field and interact with researchers from around the world.<br \/>\nI attended the full program. From our laboratory, Prof. Hiroyasu, Prof. Hiwa, Ishida, Fujiwara, Nakamura (K.), Nishizawa, Otsuka, and Sugino also participated.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Presentation overview<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I took part in the Poster Session, held over the three days from the 19th to the 21st. The format was a poster presentation, with a one-hour presentation slot each day.<br \/>\nMy presentation was Mapping the brain state behavior during meditation in low dimensional feature space. The abstract is given below.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\"><strong>Background<\/strong>: Mindfulness meditation has positive effects on well-being by reducing stress and improving concentration. 
However, it is often difficult for meditation novices to practice meditation accurately. In this study, we propose a method for visualizing changes in brain states during meditation in a low-dimensional space that practitioners can easily recognize.<br \/>\n<strong>Methods<\/strong>: We propose a method for constructing a two-dimensional space for characterizing the state of a subject using two feature axes of functional connectivity (FC) and fractional amplitude of low-frequency fluctuations (fALFF) [1]. In total, 29 meditation novices (22.9 \u00b1 2.3 years; 6 females) and 7 meditation practitioners (43.0 \u00b1 9.1 years; 1 female) participated in this experiment, and they performed a 5-min breath-counting meditation after a 5-min rest period in an fMRI scanner. Brain activities of all the subjects during rest and meditation were analyzed. First, the whole brain was divided into 116 regions using automated anatomical labeling. The correlation coefficient of the BOLD signal band-pass filtered at 0.008\u20130.09 Hz was calculated between each pair of regions. An edge density of 22.5% was set as the threshold, and regional degree centrality was calculated from the thresholded FC matrix. In addition, the Z-score of fALFF (zfALFF), which is an index of local spontaneous brain activity in each voxel, was calculated and averaged within each region. Furthermore, the degree and zfALFF of 116 regions were each reduced to a one-dimensional axis by supervised reduced k-means clustering, in which reduced k-means clustering [2] is extended using a supervised method, and a two-dimensional plane that identifies the subject&#8217;s rest and meditative states was constructed. Based on the constructed two-dimensional plane, changes in the brain state induced during the subject&#8217;s meditation and the differences between novices and practitioners were analyzed.<br \/>\n<strong>Results<\/strong>: Fig. 
2 shows the feature space representing the resting and the meditative states, and the principal component loadings constituting each feature axis. As shown in Fig. 2, the resting states were located in the bottom-left area. In the degree axis, variables with a negative loading included left HIP, left ANG, and left ITG, which belong to the default mode network (DMN) involved in mind wandering. In the zfALFF axis, few of the variables with a negative loading were regions related to meditation. These results suggest that the resting state tends toward mind-wandering, forming the DMN, while the activity of brain regions related to meditation is weak. In contrast, the meditative states were located in the top-right area. In the degree axis, variables with a positive loading included PreCG, SMA, PoCG, and PCL, which belong to the somatosensory-motor network. In the zfALFF axis, variables with a positive loading included ACG and right INS, which belong to the salience network (SN). The zfALFF axis also included an orbital part, which belongs to the attention network (AN), as well as PCG, right HIP, and right ANG, which belong to the DMN. Most of the practitioners&#8217; meditative states were located in the top-center area. These results therefore suggest that practitioners restrain the function of the DMN and control attention to the breath by reducing the connections of the DMN and strengthening the activations of the SN and AN relative to the resting state. Some of the novices&#8217; meditative states were located in the middle-right area, indicating that they controlled breathing by reducing the connections of the DMN and forming the somatosensory-motor network from the resting state.<br \/>\n<strong>Conclusions<\/strong>: The brain states of the subjects practicing breath-counting meditation were represented in a two-dimensional feature space using the proposed method. 
The experimental results suggest that we should focus on both the brain regions related to meditation, as revealed by previous studies, and the regions included in the somatosensory-motor network.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Questions and answers<\/li>\n<\/ul>\n<p>I received the following questions during this presentation. I did not ask for the questioners' names.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbWhat is the low dimensional feature space in the title?<\/strong><br \/>\nI answered, &#8220;The subject's brain state is originally represented in a high-dimensional space; the low-dimensional feature space is a space found by searching for features that express the differences between states.&#8221; I also explained what can be understood from this low-dimensional feature space.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbWhy are the observed data treated as three-way data?<\/strong><br 
\/>\nI answered, &#8220;We treat the data as three-way data in order to examine which brain regions are useful for state discrimination while taking the meaning of each region into account. Conventional approaches simply concatenate the matrices horizontally, so the connections among regions are not considered; we therefore propose a three-way data structure.&#8221;<br \/>\n&nbsp;<br \/>\n<strong>\u30fbAre the brain regions assigned to each functional network on the poster correct?<\/strong><br \/>\nI answered, &#8220;They are all correct; they follow what has been reported in previous studies.&#8221; This question arose because the assignment differed from what the questioner knew. I felt that references should be included even on a poster.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbWhat kind of meditation was used in the experiment?<\/strong><br \/>\nI answered, &#8220;Breath-counting meditation 
was adopted. It is the simplest form of meditation and suited to beginners.&#8221;<br \/>\n&nbsp;<br \/>\n<strong>\u30fbIs there a difference in brain state between novices and practitioners?<\/strong><br \/>\nI answered, &#8220;There is. Focusing on specific regions via the component loadings, novices did not form a common network during meditation, whereas practitioners commonly showed coupling between the DMN and the CEN during meditation, which suggests that attention control is taking place.&#8221;<br \/>\n&nbsp;<br \/>\n<strong>\u30fbWhat kinds of meditation do the practitioners practice?<\/strong><br \/>\nI answered, &#8220;The kind of meditation practiced differs from practitioner to practitioner. In this study, practitioners and novices are labeled only by their total meditation time.&#8221;<br \/>\n&nbsp;<br \/>\n&nbsp;<br \/>\n<strong>\u30fbThe number of practitioner samples is small.<\/strong><br 
\/>\nIn response to this comment, I answered, &#8220;Recruiting practitioners is not easy, but the sample size does need to be increased.&#8221; The questioner accepted the small sample size.<br \/>\n&nbsp;<br \/>\n<strong>\u30fbCan this method discriminate mind-wandering from rest?<\/strong><br \/>\nBecause I could not understand the meaning of this question in English, Prof. Hiroyasu answered it for me: &#8220;If there is a task that induces a mind-wandering state, it can be discriminated from rest. However, a mindful state is also a state in which one can immediately notice mind-wandering when it occurs, so mind-wandering also exists within the mindful state. Discrimination between the mindful state and rest therefore also involves mind-wandering.&#8221;<br \/>\n&nbsp;<br \/>\n<strong>\u30fbWhy are the feature spaces for novices and practitioners searched separately?<\/strong><br \/>\nI answered, &#8220;They are searched separately because the sample sizes of practitioners and novices differ. In addition, the changes induced by meditation differ between practitioners and novices, and we want to examine that difference.&#8221;<br \/>\n&nbsp;<br \/>\n<strong>\u30fbIs Tucker3 clustering a new method? Do you use a tool?<\/strong><br \/>\nI answered, &#8220;It is not a new method. It is an existing method used mainly in the field of psychology. I run the analysis in MATLAB; there is no package, so I coded it myself.&#8221;<br 
\/>\n&nbsp;<\/p>\n<ul>\n<li>\u611f\u60f3<\/li>\n<\/ul>\n<p>\u975e\u5e38\u306b\u591a\u304f\u306e\u4eba\u306b\u8208\u5473\u3092\u6301\u3063\u3066\u3044\u305f\u3060\u304d\uff0c\u304a\u3082\u3057\u308d\u3044\u3068\u8a00\u3063\u3066\u3044\u305f\u3060\u3051\u305f\u305f\u3081\uff0c\u81ea\u5206\u81ea\u8eab\u306e\u7814\u7a76\u304c\u65b0\u3057\u3044\u3082\u306e\u3067\u3042\u308b\u3053\u3068\u3092\u8a8d\u8b58\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3057\u305f\uff0e\u82f1\u8a9e\u3067\u306e\u8cea\u554f\u306e\u610f\u5473\u3092\u4e2d\u3005\u7406\u89e3\u3059\u308b\u3053\u3068\u304c\u51fa\u6765\u305a\u306b\u82e6\u6226\u3057\u307e\u3057\u305f\u304c\uff0c\u540c\u6642\u306b\u591a\u304f\u306e\u4eba\u3068\u30c7\u30a3\u30b9\u30ab\u30c3\u30b7\u30e7\u30f3\u3092\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u305f\u308a\uff0c\u30a2\u30a4\u30c7\u30a2\u3092\u3044\u305f\u3060\u3051\u305f\u305f\u3081\uff0c\u767a\u8868\u3057\u3066\u826f\u304b\u3063\u305f\u3068\u611f\u3058\u3066\u3044\u307e\u3059\uff0e\u4e88\u60f3\u3057\u3066\u3044\u305f\u3088\u308a\u3082\u7d76\u3048\u9593\u7121\u304f\u8074\u8b1b\u8005\u306b\u6765\u3066\u3044\u305f\u3060\u3051\u305f\u305f\u3081\uff0c\u77ed\u3044\u6642\u9593\u306e\u306a\u304b\u3067\u3082\u6fc3\u3044\u6642\u9593\u3092\u904e\u3054\u3059\u3053\u3068\u304c\u3067\u304d\u305f\u3068\u601d\u3044\u307e\u3059\uff0e\u53cd\u7701\u70b9\u3068\u3057\u3066\u306f\uff0c\u82f1\u8a9e\u3067\u306e\u30b3\u30df\u30e5\u30cb\u30b1\u30fc\u30b7\u30e7\u30f3\u304c\u3046\u307e\u304f\u3067\u304d\u306a\u304b\u3063\u305f\u70b9\u3067\u3059\uff0e\u8cea\u554f\u5185\u5bb9\u304c\u7406\u89e3\u3067\u304d\u305a\uff0c\u8cea\u554f\u304c\u6d88\u3048\u3066\u3057\u307e\u3046\u5834\u9762\u3082\u3042\u308a\u307e\u3057\u305f\uff0e\u65e5\u7528\u7684\u306a\u5358\u8a9e\u3060\u3051\u3067\u306a\u304f\u5c02\u9580\u7684\u306a\u5358\u8a9e\u3082\u805e\u304d\u53d6\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3088\u3046\u306b\u52c9\u5f37\u304c\u5fc5\u8981\u3067\u3042\u308b\u3068\u611f\u3058\u307e\u3057\u305f\uff0e<br 
\/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>\u8074\u8b1b<\/li>\n<\/ol>\n<p>\u4eca\u56de\u306e\u8b1b\u6f14\u4f1a\u3067\u306f\uff0c\u4e0b\u8a18\u306e4\u4ef6\u306e\u767a\u8868\u3092\u8074\u8b1b\u3057\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">\u767a\u8868\u30bf\u30a4\u30c8\u30eb\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Mediation analysis of triple networks may interpret mindfulness in real-time fMRI neurofeedback<br \/>\n\u8457\u8005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Hyun-Chul Kim, Gunther Meinlschmidt, Esther Stalujanis, Angelo Belardi, Sungman Jo, Juhyeon Lee, Dawoon Heo, Dong-Youl Kim, Marion Tegethoff, Seung-Schik Yoo, Jong-Hwan Lee<br \/>\n\u30bb\u30c3\u30b7\u30e7\u30f3\u540d\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Poster Session<br \/>\nAbstruct\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a<br \/>\n<strong>Backgrodund<\/strong>: Triple networks include (a) the central executive network (CEN) associated with attention to exogenous events, (b) default mode network (DMN) related to attention to endogenous events, and (c) salience network (SN) believed to be a hub of multisensory interoception [1]. These triple networks have been crucial to the understanding of the abnormal functional networks associated with neuropsychiatric disorders and to development of human cognitive systems [2, 3] and also been characterized as the representative functional networks of mindfulness (MF) [4]. However, there has been no unequivocal evidence on the functional connectivity (FC) across triple networks with MF [4]. Thus, the aims of this study are (a) to investigate an FC fingerprint associated with MF and (b) to show enhancement of the MF fingerprint via the real-time fMRI neurofeedback (rtfMRI-NF). Our hypotheses are that (a) the FC from the SN to DMN mediated by the CEN (Fig. 
1a) would be an MF fingerprint and (b) the rtfMRI-NF would enhance this MF.<br \/>\n<strong>Methods<\/strong>: Healthy male participants (n = 60; na\u00efve to MF) underwent a 10-day smartphone-based ambulatory training [6] of MF with attention to the sensation of breathing [7] and of mind-wandering (MW) [8]. Then, in the MRI session (Fig. 1b), they were assigned randomly to a control (CTR) and an experimental (EXP) group according to the block randomization procedure [5]. In the rtfMRI-NF runs, the triple networks were defined from the intersection of (a) the spatial maps of an independent component analysis using two non-real-time resting-state scans (Fig. 1b) and (b) the meta-analysis maps from Neurosynth (neurosynth.org; keywords: &#8216;default mode network&#8217;, &#8216;frontoparietal network&#8217;, &#8216;salience network&#8217;). During the 300-s rtfMRI-NF block, the FC fingerprint (c\u00b4) was calculated from the mediation analysis using 30 fMRI volumes (approximately 44 s) in every TR and was used as an NF signal to change the thermometer bar (Fig. 1c). Participants were informed that the height of the thermometer bar would be proportional to the MF level and were asked to increase the bar. The bar of the EXP subject was changed using his c&#8217; values, and the bar of the SHAM subject was changed using the c&#8217; values of his matched EXP subject. In the subsequent transfer run, a white cross was displayed on the screen without the NF information (Fig. 1c). After each of the three runs, the short version of the State Mindfulness Scale (SMSS) and task performance feedback (TPF) were obtained. A total of 52 subjects were included in the analysis, after four pairs of subjects were excluded because of severe head motion in at least one subject of each pair.<br \/>\n<strong>Results<\/strong>: The SMSS and TPF values were similar across the two groups (Fig. 2a). The c&#8217; values from our online aCompCor [9] based physiological noise removal were similar to those from the off-line RETROICOR analysis [10] (Fig.
2c). The average c&#8217; values across subjects and time from the EXP group were relatively greater than those of the SHAM group across the two rtfMRI-NF runs and one transfer run (Fig. 2b). Interestingly, the FC levels from SN to DMN or DMN to SN mediated by CEN were positively correlated with the MF and TPF scales only in the EXP group. In addition, the FC levels from CEN to DMN or DMN to CEN mediated by SN were positively correlated with the MF scales in the SHAM group, whereas those FC values were negatively correlated with the TPF scales in the EXP group.<br \/>\n<strong>Conclusions<\/strong>: To the best of our knowledge, this is the first study to investigate the FC fingerprint of MF in the triple-network framework. Our data suggested that the FC fingerprint of MF may be the FC between SN and DMN mediated by CEN, and this fingerprint was enhanced by the rtfMRI-NF training. Future works include (a) an analysis of the battery of questionnaires obtained during the pre- and post-MRI sessions and (b) an investigation of the neuronal bases of MF and MW, possibly from FC in the triple networks using non-rtfMRI data.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation described feeding back the meditation state using real-time fMRI. The study focused on the connectivity among the DMN, SN, and CEN known from prior work, and quantified mindfulness by modeling the states of these three networks. By feeding back the meditation state quantified by this model, a correlation was observed between the model score and a subjective score indicating the degree of mindfulness. I felt that the feedback procedure and the quantification method of this study could also inform our own research.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: A generative model for inferring whole-brain effective connectivity<br \/>\nAuthors: Stefan Fr\u00e4ssle, Ekaterina Lomakina, Lars Kasper, Zina Manjaly, Alex Leff, Klaas Pruessmann, Joachim Buhmann, Klaas Stephan<br \/>\nSession: ORAL SESSION: Modeling and Analysis Methods II<br \/>\nAbstract:<br \/>\n<strong>Background<\/strong>: Developing
whole-brain models that infer the effective (directed) connectivity among neuronal populations from neuroimaging data represents a central challenge for computational neuroscience. Dynamic causal models (DCMs; Friston et al., 2003) of functional magnetic resonance imaging (fMRI) data have been used frequently for inferring effective connectivity, but are presently restricted to small graphs (up to 10 regions) to keep model inversion feasible. Here, we introduce regression DCM (rDCM; Fr\u00e4ssle et al., 2017, Fr\u00e4ssle et al., under review) as a novel variant of DCM for fMRI that enables whole-brain effective connectivity analyses.<br \/>\n<strong>Methods<\/strong>: In brief, rDCM converts the numerically costly estimation of coupling parameters in differential equations of a linear DCM in the time domain into an efficiently solvable Bayesian linear regression in the frequency domain. This necessitates several modifications to the original DCM implementation, including: (i) translation from time to frequency domain by exploiting the differential property of the Fourier transform, (ii) linearization of the hemodynamic forward model, (iii) mean-field approximation (across regions), and (iv) use of a Gamma prior on noise precision. These changes allow us to derive a highly efficient variational Bayesian update scheme. Additionally, by incorporating sparsity constraints, rDCM does not require any a priori assumptions about the network&#8217;s connectivity structure but prunes fully (all-to-all) connected networks as part of model inversion.<br \/>\n<strong>Results<\/strong>: First, we used simulations to test how accurately rDCM could recover the known network architecture (i.e., the connections present in the data-generating model) for large networks with 66 regions. We mapped sensitivity and specificity for various settings of the signal-to-noise ratio of the fMRI data and the a priori assumptions about the sparseness of the network (Fig. 1). 
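The computational move behind rDCM can be illustrated on a toy linear system. The sketch below is my own illustration of the idea, not the authors' implementation: it omits the hemodynamic model and the sparsity priors, and simply uses the fact that taking a (essentially unitary) Fourier transform of a linearized model preserves its least-squares solution, so the coupling matrix can be recovered by plain linear regression in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, dt = 3, 2000, 0.1
A_true = np.array([[-0.9, 0.4, 0.0],
                   [0.0, -0.8, 0.3],
                   [0.2, 0.0, -0.7]])  # ground-truth coupling (stable system)

# Euler-simulate the linear dynamics dx/dt = A x with small process noise
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (A_true @ x[t]) + 0.1 * rng.standard_normal(n)

# Time-domain model: (x[t+1] - x[t]) / dt = A x[t] + noise.  Taking the DFT
# of both sides turns this into one linear equation per frequency; because
# the DFT is (up to scaling) unitary, least squares over all frequencies
# recovers the same coupling estimate as the time-domain regression.
Xf = np.fft.rfft(x[:-1], axis=0)                    # regressor spectra
Yf = np.fft.rfft(np.diff(x, axis=0) / dt, axis=0)   # derivative spectra
P = np.vstack([Xf.real, Xf.imag])   # stack real/imag so A stays real-valued
Q = np.vstack([Yf.real, Yf.imag])
B, *_ = np.linalg.lstsq(P, Q, rcond=None)           # solves P @ B = Q
A_hat = B.T                                          # B is A transposed
print(np.round(A_hat, 1))
```

In rDCM proper this regression is solved with a variational Bayesian update scheme and a mean-field factorization across regions rather than ordinary least squares, which is what makes whole-brain graphs tractable.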
These simulations suggest that rDCM is a suitable tool for inferring sparse effective connectivity patterns. In previous work, we already demonstrated face validity of the approach with respect to parameter recovery and model selection (Fr\u00e4ssle et al., 2017).<br \/>\nWe then applied rDCM to several empirical fMRI datasets. In particular, we showed that it is feasible to infer effective connection strengths from fMRI data using a network with more than 100 regions and 10,000 connections. Sparse rDCM was applied to fMRI data from a motor paradigm (visually paced fist closings) (Fig. 2A). Model inversion resulted in a sparse graph with biologically plausible connectivity (Fig. 2B, left) and driving input patterns (Fig. 2B, right): we observed pronounced driving inputs to and functional integration among motor regions (precentral, SMA, cerebellum), visual regions (cuneus, occipital), regions associated with the somatosensory and proprioceptive aspects of the task which are essential for visuomotor integration (postcentral, parietal), and frontal regions engaging in top-down control. The inferred effective connectivity pattern and its sparseness become most apparent when visualized as a connectogram (Fig. 2C) or projected onto the whole-brain volume (Fig. 2D).<br \/>\nNotably, inversion of this whole-brain model was computationally highly efficient and took only 10 minutes on standard hardware.<br \/>\n<strong>Conclusions<\/strong>: Regression DCM allows one to infer effective connectivity, with connection-specific estimates, in whole-brain networks from fMRI data. Importantly, in contrast to functional connectivity measures, effective connectivity discloses the directionality of functional couplings. 
We anticipate that rDCM will find useful application in connectomics (e.g., enabling graph theoretical approaches to effective connectivity; Bullmore &amp; Sporns, 2009) and could offer tremendous opportunities for clinical applications, such as whole-brain phenotyping of patients (Stephan et al., 2015).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation proposed a method for extracting, from the whole brain, the regions to be used in dynamic causal modeling (DCM). Ordinarily, one must decide on seed regions (about eight) before running DCM, and choosing those regions was difficult because it had to rely on prior studies. I think this method makes it possible to select the optimal regions. Through this presentation I felt that DCM is a method we need to investigate further.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Dynamic functional connectivity markers of objective trait mindfulness<br \/>\nAuthors: Julian Lim, James Teng, Amiya Patanaik, Jesisca Tandi, Stijn Massar<br \/>\nSession: Poster Session<br \/>\nAbstract:<br \/>\n<strong>Background<\/strong>: Mindfulness is the practice of purposefully paying attention to present-moment experiences with a
non-judgmental attitude. This ability can be enhanced by training, and also varies widely among untrained individuals. Here, we collected resting-state(rs)-fMRI to investigate the individual differences in functional connectivity associated with natural variation in dispositional mindfulness. We hypothesized that higher levels of mindfulness would be associated with greater optimization of the intrinsic functional connectome (i.e. connectivity patterns with higher similarity to that seen during task engagement).<br \/>\n<strong>Methods<\/strong>: 125 healthy young participants were recruited to perform a breath-counting task (Levinson et al., 2014), an objective measure of mindfulness and meta-awareness. Participants kept track of their breath over a 20-minute period by pressing a button with every 1st to 8th breath, and a separate button for every 9th breath. Accuracy was computed as the number of correctly completed breath cycles divided by the total number of cycles. From this sample, two groups were selected: those with high trait mindfulness (HTM: accuracy &gt; 90.2%; N=21) and low trait mindfulness (LTM: accuracy &lt; 56.1%; N=18). Individuals in these groups were invited for an imaging session, which consisted of a second run of the breath-counting task (behavioral), and an ~8 minute rs-fMRI scan.<br \/>\nWhole-brain data were segmented based on the Yeo parcellation (Yeo et al, 2015), and connectivity was computed using the multiplication of temporal derivatives (MTD) method (Shine et al., 2015). Static connectivity maps were calculated as an average of all MTD coupling values, and dynamic functional connectivity analysis was performed using k-means clustering after averaging within a 7-TR sliding window across the MTD time series. 
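The MTD coupling estimate used above can be sketched in a few lines of NumPy. This is a simplified illustration of the multiplication-of-temporal-derivatives measure, not the study's code; the toy signals, the global derivative normalization, and the 7-sample smoothing window are my assumptions.

```python
import numpy as np

def mtd(ts, w=7):
    """Multiplication of temporal derivatives, simplified.

    ts: (T, n) array of ROI time series; w: smoothing window in samples
    (the study averaged within a 7-TR sliding window).
    Returns the raw (T-1, n, n) coupling series and its window average.
    """
    d = np.diff(ts, axis=0)                   # temporal derivatives
    d = d / d.std(axis=0, ddof=1)             # normalise each region
    coupling = d[:, :, None] * d[:, None, :]  # pairwise products, (T-1, n, n)
    kernel = np.ones(w) / w                   # moving average over the window
    smoothed = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="valid"), 0, coupling)
    return coupling, smoothed

# toy check: two correlated channels and one independent channel
rng = np.random.default_rng(1)
s = rng.standard_normal(200)
ts = np.column_stack([s, s + 0.1 * rng.standard_normal(200),
                      rng.standard_normal(200)])
raw, smooth = mtd(ts)
print(raw[:, 0, 1].mean())  # strongly positive for the correlated pair
```

Averaging the raw products within a sliding window, as done here, is what turns the instantaneous estimate into the time-resolved connectivity that is then clustered.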
Connectivity was compared between the HTM and LTM groups.<br \/>\n<strong>Results<\/strong>: Inter-session reliability of breath-counting accuracy was high (ICC = .48; p = .001), and good and poor performers continued to differ significantly in their second test (t(37) = 3.20; p = .003).<br \/>\nStatic rs-fMRI connectivity maps showed that HTM individuals had greater within-network connectivity in the DMN and the salience network, and greater anti-correlations between the DMN and task-positive networks. Dynamic functional connectivity analysis revealed two reproducible patterns of connectivity, corresponding to &#8220;task-ready&#8221; and &#8220;idling&#8221; brain states (Figure 1a). HTM individuals spent significantly more time in the task-ready state, and significantly less time in the idling state, than LTM individuals (Figure 1b). The HTM group transitioned between brain states more frequently, but the dwell time in each episode of the task-ready state was equivalent between groups. These results persisted even after controlling for vigilance. Across individuals, time spent in the task-ready state was correlated with self-reported mindfulness as measured by the Five Facet Mindfulness Questionnaire.<br \/>\n<strong>Conclusions<\/strong>: Mindful individuals switch more often into a task-ready state, and spend more time in that state overall.
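Summary quantities like these (fractional occupancy, transition counts, mean dwell time) follow directly from the window-by-window cluster labels; a minimal sketch with a hypothetical label sequence, not the study's data:

```python
import numpy as np

def state_metrics(labels, state):
    """Fractional occupancy, number of transitions, and mean dwell time
    for one brain state in a cluster-label sequence (one label per window)."""
    labels = np.asarray(labels)
    occupancy = np.mean(labels == state)
    transitions = int(np.sum(labels[1:] != labels[:-1]))
    # dwell times: lengths of consecutive runs spent in `state`
    runs, current = [], 0
    for l in labels:
        if l == state:
            current += 1
        elif current:
            runs.append(current); current = 0
    if current:
        runs.append(current)
    mean_dwell = float(np.mean(runs)) if runs else 0.0
    return occupancy, transitions, mean_dwell

seq = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]  # toy "task-ready"(1)/"idling"(0) labels
occ, trans, dwell = state_metrics(seq, state=1)
print(occ, trans, dwell)  # 0.5 4 2.5
```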
These features may be useful biomarkers of the flexibility and greater degree of awareness that characterize the mindful brain.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>In this study, participants performed mindfulness (breath-counting) meditation while their meditative state was measured via button presses; based on the button-press results, participants were divided into high- and low-trait-mindfulness groups, and the characteristics of each group were examined. With as many as 125 participants, I felt that clear differences were found between the high- and low-mindfulness groups. I also found it interesting to examine button presses together with dynamic FC.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: Classification of mindfulness and mind-wandering states using functional connectivity patterns<br \/>\nAuthors: Niv Lustig, Hyun-Chul Kim, Jong-Hwan Lee<br \/>\nSession: Poster Session<br \/>\nAbstract:<br \/>\n<strong>Background<\/strong>: The aim of this study is to identify distinct patterns in functional connectivity (FC) during mindfulness (MF) [1]
and mind-wandering (MW) [2] states using functional magnetic resonance imaging (fMRI) data. For this, we compared the performance of three types of machine learning models: support vector machine (SVM), convolutional neural network (CNN) [3], and deep neural network (DNN), to classify whole-brain functional connectivity (FC) patterns of both states. To our knowledge, there has not been an attempt to classify these two states using machine learning models.<br \/>\n<strong>Methods<\/strong>: Raw echo-planar-imaging (EPI) volumes (TR\/TE=1.44s\/30ms; axial slices=50; voxel size=3\u00d73\u00d73mm3) from 60 subjects, acquired during two non-real-time fMRI runs containing three blocks each of MF and MW (Fig. 1a), were preprocessed using the SPM8 toolbox: realignment; normalization to the Montreal Neurological Institute space with a 3mm3 voxel size; spatial smoothing with an 8mm full-width at half-maximum Gaussian kernel; nuisance signal regression using 12 regressors (i.e., three principal components extracted from each of the cerebro-spinal fluid (CSF) and white matter (WM) masks selected from the a priori maps in SPM8, with a probability greater than 0.7 for CSF and 0.9 for WM, and six motion parameters); and band-pass filtering between 0.01\u20130.1 Hz. For FC pattern estimation, an anatomical automatic labeling (AAL) mask was used to parcellate the brain into regions of interest (ROI; n=116). Averaged blood-oxygenation-level-dependent (BOLD) signals within each ROI were used to calculate an FC matrix (116\u00d7116) via Pearson correlation analysis, and a vector of the lower triangle of this matrix (116C2=6,670) was obtained for each block (Fig. 1b). All FC values were Fisher&#8217;s r-to-z transformed [4] and used as input for the CNN (360\u00d7116\u00d7116) and DNN (360\u00d76670) models, respectively.
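The FC feature construction just described (ROI-wise Pearson correlation, lower-triangular vectorization, Fisher r-to-z) can be sketched as follows; the function name and toy data are mine, not the authors':

```python
import numpy as np

def fc_features(bold):
    """FC features for one block: Pearson correlation matrix over ROIs,
    strict lower-triangular vectorisation, and Fisher r-to-z transform.

    bold: (T, n_roi) array of ROI-averaged BOLD signals.
    Returns a vector of length n_roi * (n_roi - 1) / 2.
    """
    fc = np.corrcoef(bold, rowvar=False)       # (n_roi, n_roi)
    il = np.tril_indices_from(fc, k=-1)        # strict lower triangle
    r = np.clip(fc[il], -0.999999, 0.999999)   # guard arctanh at |r| = 1
    return np.arctanh(r)                       # Fisher r-to-z

# toy block: 116 ROIs give 116 * 115 / 2 = 6,670 features per block
rng = np.random.default_rng(0)
block = rng.standard_normal((200, 116))
feat = fc_features(block)
print(feat.shape)  # (6670,)
```

Stacking one such vector per block (360 blocks in the study) yields exactly the 360x6670 input described for the DNN.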
The CNN model contained one convolutional layer with a 3\u00d73 kernel and a single fully connected layer with 30 hidden nodes, was built using the PyTorch (www.pytorch.org) library, and was tested with five-fold cross-validation (CV). The DNN model contained three hidden layers (20 nodes per layer) (Fig. 1c) and was trained with an explicit weight-sparsity control scheme [5]. The hyperbolic tangent was used as the activation function of the hidden layers, and a linear function was used as the node function for the output. Using a nested five-fold CV scheme, all 27 combinations of Hoyer&#8217;s sparseness levels (i.e., 0.4, 0.6, and 0.8) across the hidden layers were validated to find the optimal weight sparsity from the training and validation data. The Python-based DNN toolbox (github.com\/lisalab\/DeepLearningTutorials) was modified to implement our explicit L1-norm regularization scheme. Afterwards, an averaged Z-scored (threshold |Z| &gt; 1.96) feature map of the model weights was calculated by linear multiplication of each layer&#8217;s weights averaged across all folds. SVMs with linear and non-linear kernels [6], implemented with the LIBSVM toolbox (https:\/\/www.csie.ntu.edu.tw\/~cjlin\/libsvm), were also used for comparison.<br \/>\n<strong>Results<\/strong>: Overall, the proposed DNN model showed relatively higher classification performance (mean \u00b1 standard deviation; 68.05%\u00b14.97%) than the CNN (61.5%\u00b14.77%) and SVM (linear: 63.05%\u00b14.17%; non-linear: 63.6%\u00b16.47%) models (Fig. 2a). The weight feature map showed that the highest-intensity weight was between the L-frontal superior gyrus and the R-putamen, and the lowest between the R-Rolandic operculum and the cerebellum (Fig. 2b).<br \/>\n<strong>Conclusions<\/strong>: As a preliminary study, our proposed DNN model showed a promising classification result, considering that our input may be improved, for instance, by using (a) an optimized pipeline to preprocess the raw EPI volumes and (b) atlases for parcellation besides AAL.
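Hoyer's sparseness, the quantity tuned over {0.4, 0.6, 0.8} above, is a simple ratio of L1 and L2 norms; a minimal sketch of the measure itself (my own illustration, not the paper's toolbox code):

```python
import numpy as np

def hoyer_sparseness(w):
    """Hoyer's sparseness of a weight vector: 1 for a one-hot vector,
    0 for a uniform vector; (sqrt(n) - l1/l2) / (sqrt(n) - 1)."""
    w = np.ravel(np.asarray(w, dtype=float))
    n = w.size
    l1, l2 = np.abs(w).sum(), np.sqrt((w ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

print(hoyer_sparseness([1, 0, 0, 0]))  # 1.0 (maximally sparse)
print(hoyer_sparseness([1, 1, 1, 1]))  # 0.0 (maximally dense)
```

In the training scheme cited above, each hidden layer's weights are regularized (via an adaptive L1 penalty) until they reach a target value of this measure.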
Future works include (a) fine-tuning the DNN and CNN models (e.g., pre-training the models with FC data from other tasks to make up for the small sample size) and (b) interpretation of the trained weight parameters of the CNN.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>What caught my attention in this presentation was the attempt to discriminate the mindfulness state from the mind-wandering state, which felt close to our own research. They used methods such as SVMs, but I felt that the difference from our work lies in the mind-wandering state being induced not by a resting state but by a dedicated mind-wandering block. I could not hear what the mind-wandering block task was, but I felt we need to investigate this kind of experimental design.<br \/>\nReferences<br \/>\n[1] Q.-H. Zou, C.-Z. Zhu, Y. Yang, X.-N. Zuo, X.-Y. Long, Q.-J. Cao, Y.-F. Wang, and Y.-F. Zang, \u201cAn improved approach to detection of amplitude of low-frequency fluctuation (ALFF) for resting-state fMRI: Fractional ALFF,\u201d J. Neurosci. Methods, vol. 172, no. 1, pp. 137-141, Jul. 2008.<br \/>\n[2] G. De Soete and J.D. Carroll, \u201cK-means clustering in a low-dimensional Euclidean space,\u201d in New Approaches in Classification and Data Analysis, pp. 212-219, Springer, 1994.<br \/>\n<strong>Conference Participation Report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">\u897f\u6fa4\u7f8e\u7d50<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Title (Japanese)<\/strong><\/td>\n<td width=\"373\">\u52d5\u7684\u6a5f\u80fd\u7684\u7d50\u5408\u5206\u6790\u306b\u57fa\u3065\u304f\u6ce8\u610f\u72b6\u614b\u306eFNIRS\u7814\u7a76<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Title (English)<\/strong><\/td>\n<td width=\"373\">FNIRS study of attentional states based on dynamic functional connectivity analysis<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">\u897f\u6fa4\u7f8e\u7d50, \u65e5\u548c\u609f, \u5ee3\u5b89\u77e5\u4e4b<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">OHBM2018<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td width=\"373\">Suntec Singapore<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2018\/06\/17-2018\/06\/21<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>Conference details<\/li>\n<\/ol>\n<p>From 2018\/06\/17 to 2018\/06\/21, I attended OHBM2018 (https:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageID=3821), held at Suntec Singapore. This meeting, organized by the Organization for Human Brain Mapping, is a conference where one can learn about the latest international research on human brain mapping and discuss it with researchers from around the world.<br \/>\nI attended all five days, from the 17th to the 21st. From our laboratory, Prof. Hiroyasu, Prof. Hiwa, Keisuke Nakamura, Ishida, Fujiwara, Miyoshi, Otsuka, and Sugino also participated.<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Summary<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I took part in the afternoon session \u201cConsciousness and Awareness\u201d on the 19th to the 21st. The presentation format was a poster, one hour per day for a total of three hours.<br \/>\nI presented under the title \u201cFNIRS study of attentional states based on dynamic functional connectivity analysis.\u201d The abstract is given below.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Introduction<br \/>\nWe are unaware of what is
going on at the moment about 50% of the time in our daily life [1]. Inattentional states reduce efficiency at work or cause traffic accidents when driving. In order to address this issue, it is necessary to monitor the temporal change of our attentional states and quantify the degree of attention. We have developed a method to classify attentional states using dynamic functional connectivity analysis of brain activity measured by functional near-infrared spectroscopy (fNIRS).<br \/>\nMethods<br \/>\nTwenty healthy adults (10 males, age 22.4 \u00b1 1.0 years) participated in the experiment. A psychomotor vigilance task (PVT) was used to induce sustained attention, and the participants were required to respond to a visual stimulus as quickly as possible. In addition, their brain activity, covering the whole brain, was measured using a 116-channel fNIRS system (ETG-7100, Hitachi, Ltd.). The cerebral blood flow change data obtained were band-pass filtered at 0.01\u20130.1 Hz, and then the time-varying correlation coefficient matrix was calculated using a sliding-window approach with a window width of 10 s and a sliding step of 0.1 s. In order to obtain the typical brain states of each subject, the correlation coefficient matrices of each subject were grouped into clusters across time points using <em>k<\/em>-means clustering. The optimum number of clusters, which minimizes the ratio of the intra-cluster distance to the inter-cluster distance, was then determined.<br \/>\nResults<br \/>\nThe optimum number of clusters was 2 in all subjects. For each subject, one of the two clusters was labeled the \u201cattentional\u201d state if it contained more edges included in the central executive network than in the default mode network; otherwise, it was labeled the \u201cinattentional\u201d state. After labeling the brain states, the following functional connections were observed in 80% of the subjects (Fig.
1) In the attentional state, a connection was observed between the right middle frontal gyrus (MFG.R) and the right lingual gyrus (LNG.R). Because MFG.R is activated in response to preparations for the expected stimuli [2] and the LNG.R is related to visual processing[3], we presumedthat participants in the attentional state concentrated on the input of the stimulus. In the inattentional state, a connection was observed between the right postcentral gyrus (PoCG.R) and the right angular gyrus (ANG.R). Because the PoCG.R is associated with the recognition of pain [4] and the ANG.R is related to episodic memory [5], we presumed that participants&#8217; minds were wandering in the inattentional state. Furthermore, the temporal changes of the classification result of the correlation matrix at each time point indicated that the participants switched between the two states repeatedly (Fig. 2). Moreover, in order to investigate the relationship between the two defined states and the response time (RT) to the stimulus, the occurrence rates of the two states were calculated for each individual in the durations in which the fastest and the slowest 10% of RT were achieved ( Fig. 2). The attentional state tended to occur more frequently in the fastest 10% of RTs than in the slowest 10%, whereas the inattentional state was observed more frequently in the slowest 10%. This suggests that two brain states with different attention states defined for each individual are related to RTs in PVT.<br \/>\nConclusions<br \/>\nIn this paper, we have proposed a method to analyze changes in attentional states based dynamic functional connectivity analysis performed by measuring brain activity in a PVT by fNIRS. Two typical brain states were calculated and labeled as the attentional state and inattentional state. The temporal changes of the classification result of the correlation matrix at each time point showed that the two states appeared repeatedly. 
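The sliding-window correlation and k-means clustering steps described in the Methods can be sketched as follows. This is a minimal, self-contained illustration on synthetic signals; the channel count, window parameters, and the deterministic first/last-window initialization of the cluster centers are simplifications for the example, not the implementation used in the study.

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def sliding_fc(signals, width, step):
    # One upper-triangle correlation vector per window position.
    n, T = len(signals), len(signals[0])
    vecs = []
    for start in range(0, T - width + 1, step):
        win = [ch[start:start + width] for ch in signals]
        vecs.append([pearson(win[i], win[j])
                     for i in range(n) for j in range(i + 1, n)])
    return vecs

def kmeans2(vecs, iters=20):
    # Two-cluster k-means; for a deterministic demo the centers are
    # initialized from the first and last windows.
    centers = [list(vecs[0]), list(vecs[-1])]
    labels = [0] * len(vecs)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda c: sum((a - b) ** 2
                  for a, b in zip(v, centers[c]))) for v in vecs]
        for c in (0, 1):
            members = [v for v, l in zip(vecs, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Synthetic 3-channel recording: channels 0 and 1 are correlated in the
# first half and anti-correlated in the second half.
base = [math.sin(0.3 * t) for t in range(200)]
ch0 = base
ch1 = base[:100] + [-v for v in base[100:]]
ch2 = [math.cos(0.3 * t) for t in range(200)]
labels = kmeans2(sliding_fc([ch0, ch1, ch2], width=40, step=20))
```

Each window's correlation vector is a point in "connectivity space", and the two cluster labels play the role of the candidate brain states that the abstract then labels attentional or inattentional.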
In addition, the occurrence of these brain states might be associated with the response time to a stimulus. The degree of attention may also be estimated from such brain activity patterns.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Questions and answers<\/li>\n<\/ul>\n<p>At this poster presentation, I received the questions below (in each case I unfortunately failed to note the questioner&#8217;s name).<br \/>\n&nbsp;<br \/>\n<strong>Question 1<\/strong><br \/>\nThe question was: how, and what exactly, does fNIRS measure? I answered that near-infrared light is applied to the brain from above the scalp, and changes in cerebral blood flow at the brain surface (cerebral cortex) are measured.<br \/>\n&nbsp;<br \/>\n<strong>Question 2<\/strong><br \/>\nThe question was: what is the spatial resolution? I answered that it is 3 cm.<br \/>\n&nbsp;<br \/>\n<strong>Question 3<\/strong><br
\/>\nThe question was: what is multi-objective optimization? It is the problem of finding optimal solutions under multiple mutually competing objective functions, but I was unable to answer in English. I felt that I should have prepared some questions and answers in advance, even though the poster itself did not explain optimization. For SfNIRS, I will therefore prepare carefully so that I can also discuss optimization, with more emphasis on the Methods.<br \/>\n&nbsp;<br \/>\n<strong>Question 4<\/strong><br
\/>\nThe question was: what is the outlook for this research? I answered that, now that these metastates can be defined, the level of dynamic brain states during exogenous attention can be evaluated, which could help prevent traffic accidents and improve work efficiency.<br \/>\n<strong>Question 5<\/strong><br \/>\nThe question was: what is the inter-stimulus interval? I answered that it is 30\u201345 s.<br \/>\n&nbsp;<br \/>\n<strong>Question 6<\/strong><br \/>\nThe question was: what is the temporal resolution? I answered that data can be acquired every 0.1 s.<br \/>\n&nbsp;<br \/>\n<strong>Question 7<\/strong><br
\/>\nThe question was: what is the function of the SMA? I answered: motion control based on memory.<br \/>\n&nbsp;<br \/>\n<strong>Question 8<\/strong><br \/>\nThe question was: how are the 116 channels reduced to 42 regions? I answered that, based on the registration results, the channels judged to belong to the same region are averaged to construct a new 42-region matrix.<br \/>\n&nbsp;<br \/>\n<strong>Question 9<\/strong><br \/>\nThe question was: how many channels can be recorded? I answered that it is 116 channels.<br \/>\n&nbsp;<br \/>\n<strong>Question 10<\/strong><br
\/>\nThe question was: why this window size? I answered that the window size is still under investigation.<br \/>\n&nbsp;<br \/>\n<strong>Question 11<\/strong><br \/>\nThe question was: is there only one type of stimulus? I answered that there is one.<br \/>\n&nbsp;<br \/>\n<strong>Question 12<\/strong><br
\/>\nThe question was: why exogenous attention? I answered that temporal changes in functional network topology driven by exogenous factors have not been examined sufficiently, and that lapses of exogenous attention can lead to lower productivity at work and to accidents while driving, which is why we focused on exogenous attention.<br \/>\n&nbsp;<br \/>\n<strong>Question 13<\/strong><br \/>\nThe question was: what is a metastate? I answered that we call a state representing a characteristic network configuration a metastate.<br \/>\n&nbsp;<br \/>\n<strong>Question 14<\/strong><br
\/>\nThe question was: what did you find? I answered that with this method we were able to compute metastates representing two characteristic networks.<br \/>\n&nbsp;<br \/>\n<strong>Question 15<\/strong><br \/>\nThe question was: how long is the experiment? I answered that it is 10 minutes.<br
\/>\n&nbsp;<\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>This was my second international conference, and compared with last time I had many more opportunities to explain my research, which made it a very worthwhile poster presentation. Being able to approach people actively and get them to listen to my work is, I think, real growth since the previous conference. I also had the impression that many people wanted to hear the Materials in detail. Toward the international conference in October, I would like to push the research forward using the knowledge gained in this meeting&#8217;s dynamic FC sessions.<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>Attended presentations<\/li>\n<\/ol>\n<p>At this meeting, I attended the following five presentations.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Neural Circuitry of Changing Social Norms through Persuasion<br \/>\nAuthor\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Matsumoto Kenji<br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Cognitive &amp; Affective Neuroscience: From Circuitry to Network and Behavior<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0
\uff1a Social norms regulate behavior, and changes in norms have a great impact on society. In most modern societies, norms change through interpersonal communication and persuasive messages found in media. We examined the neural basis of persuasion-induced changes in attitude toward and away from norms using fMRI. We measured brain activity while human participants were exposed to persuasive messages directed toward specific norms. Persuasion directed toward social norms specifically activated a set of brain regions including temporal poles, temporo-parietal junction, and medial prefrontal cortex. Beyond these regions, persuasion away from an accepted norm specifically recruited the left middle temporal and supramarginal gyri. Furthermore, in combination with data from a separate attitude-rating task, we found that the activity in left supramarginal gyrus represented participants\u2019 attitudes toward norms and tracked the persuasion-induced attitude changes that were away from agreement.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation examined, using fMRI, the neural basis of persuasion-induced attitude changes toward and away from social norms. The study looked at brain activation with MRI, examining how behavior changes through interpersonal communication and persuasion. The idea was intriguing, and I found it very instructive.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td
width=\"529\">\u767a\u8868\u30bf\u30a4\u30c8\u30eb\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aMapping Dynamics of Emotional Brain States and Memory Consolidation: From Circuitry to Network and Behavior<br \/>\n\u8457\u8005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Shaozheng Qin,<br \/>\n\u30bb\u30c3\u30b7\u30e7\u30f3\u540d\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Cognitive &amp; Affective Neuroscience: From Circuitry to Network<br \/>\nand Behavior<br \/>\nAbstruct\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Brain regions engage and disengage constantly with each other to support rapid and flexible changes in emotional states and access to memories. Conventional approaches on analyzing regional brain activity and static functional connectivity patterns provide little information about transient dynamics in brain functional organization. Novel approaches are needed to investigate how transient brain dynamics contribute to human emotion and memory with rapid and flexible access to disparate aspects of information. I will present a series of task- and resting-state fMRI studies with concurrent skin conductance recording and advanced analytic approaches (i.e., K-means, HMM and network dynamics) to investigate how dynamic states of emotion-related brain circuitry evolve over time at both encoding and at rest, and how these brain dynamics contribute to emotional memory consolidation. 
We found that: (1) emotion-related amygdala circuitry undergoes rapid changes in integration and segregation of functional connectivity with other brain regions critical for attention, salience detection and emotion regulation; (2) emotion-charged reactivation of the hippocampus-based memory system at encoding enhances subsequent episodic memories; (3) re-occurrence of emotion-charged brain states at post-encoding rest predicts better episodic memories; (4) large-scale brain functional networks among neocortical regions are gradually building up to support long-term memory retention after 24 hours. Altogether, our findings point toward the dynamic nature of brain emotional states and memory systems, and rapid changes in emotional memory organization with consolidation.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This presentation clarified how brain dynamics across a series of task and resting states contribute to emotional memory consolidation. States were analyzed dynamically using established methods such as HMM and k-means, examining what influence the integration and segregation of networks have.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aFunctional connectivity dynamics: Controversies, null models and clinical utility<br \/>\nAuthor\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Andrew Zalesky<br
\/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Dynamics of resting-state functional connectivity: Methods and models<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a I will begin with an overview of the current state-of-the-art in methodologies and models for investigating functional connectivity dynamics with resting-state fMRI. I will specifically consider the choice of window length, mapping dynamics with instantaneous phase and whether dynamics are best represented as a continuum or discrete connectivity states. I will argue that it is important to establish whether dynamics in functional connectivity exceed expectations established under an appropriate null hypothesis. To this end, I will present an evaluation of various null models and test statistics that have been used to test whether functional connectivity dynamics can be distinguished from spurious fluctuations owing to sampling variation, physiological confounds and other noise sources. I will demonstrate that the choice of test statistic and null model to generate surrogate data is crucial. In the second part of my talk, I will demonstrate the potential clinical utility of resting-state functional connectivity dynamics in classifying schizophrenia patients from healthy comparison individuals. After controlling for head motion and physiological noise, I will show that utilizing spatial and temporal dynamics of resting-state fMRI can improve classification accuracy by more than 15% compared to static measures of connectivity.
This work demonstrates the practical utility of functional connectivity dynamics, despite ongoing controversy about the core statistical properties of these dynamics.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk gave an overview of the state of the art in methodologies and models for investigating functional connectivity dynamics using resting-state fMRI. It examined whether processing choices in dynamic analysis, such as the window size that I am also wondering how to evaluate, can actually capture the brain&#8217;s dynamics. Since this speaker&#8217;s presentation contained much of great interest, I felt I should revisit it as a reference for future work.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aTime dynamics of resting brain networks<br \/>\nAuthor\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Mark Woolrich<br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Dynamics of resting-state functional connectivity: Methods and models<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a The brain recruits multiple brain networks in a temporally coordinated manner in order to perform cognitive tasks.
While much of the research has focused on the spatial nature of these networks, very little is known about the extent to which large-scale networks intrinsically exhibit their own organized temporal dynamics. Here, we use approaches based on the Hidden Markov Model (HMM) to investigate the nature of the temporal dynamics of large-scale networks when the brain is at rest. Using fMRI, we found that the brain spontaneously transitions between different large-scale networks, or brain states, in a relatively predictable manner [Vidaurre, PNAS]. Not only are some transitions more probable than others, but they also follow a hierarchical organization that is surprisingly simple: networks self-organise into two sets of states (referred to as metastates) such that the probability of cycling between networks within a metastate is much higher than transitioning between networks belonging to other metastates. The states belonging to one of these metastates primarily correspond to networks associated with higher-order cognitive functions, whereas the states belonging to the other metastate are mostly involved in perception and motor functions. Importantly, this organisation is specific to individuals in the sense that every person has a specific metastate profile that is very robust across sessions; is heritable, in the sense that members of the same family tend to have similar metastate profiles; and significantly relates to behaviour, in particular to intelligence, to happiness, and to the personality of the subjects. In a separate study, we used MEG and a version of the HMM capable of finding states with distinct spectral properties (including phase coupling between signals at different frequencies) to investigate the extent to which networks intrinsically exhibit phase-locking, consistent with the idea of communication-through-coherence being a key mechanism of communication in the brain.
We found that resting cortical activity is indeed generally organised into networks that show transient increases in (phase-locking) coherence in specific frequency bands, but only when viewed at the very fast, subsecond time-scales accessible with the HMM [Vidaurre, bioRxiv]. This approach also revealed new insights into the organisation of the ubiquitous default mode network, which turns out to be composed of two components, anterior and posterior, with very distinct temporal and spectral properties. These two separated components exhibit strong coherence with the posterior cingulate cortex, yet in very different frequency regimes. The operation of these large-scale cortical networks in very different frequency bands may reflect the different intrinsic timescales that they specialise in.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This study used approaches based on hidden Markov models (HMMs) to examine the nature of the temporal dynamics of large-scale networks in the resting state. Since HMMs, like k-means, are a widely used technique, I felt that this is a model-based analysis method I need to have at my command.<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aDynamic and static resting-state functional connectivity encode complementary behavioral information<br
\/>\nAuthor\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Raphael Liegeois<br \/>\nSession\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Dynamics of resting-state functional connectivity: Methods and models<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Converging evidence suggests that static measures of resting-state functional connectivity (sFC) MRI are missing important information encoded in the temporal fluctuations of fMRI and FC time series. Our recent work suggests that a single first-order autoregressive (AR) model of fMRI time series is able to encode most of these temporal FC fluctuations. Here, we use the coefficient matrix of the AR model as a dynamic marker of FC (dFC). We then explore whether this dFC marker encodes behavioral information differently from a classical sFC marker. Across 58 behavioral measures in 419 unrelated HCP subjects, we find that the proposed marker of dFC captures more behavioral information than sFC.
Furthermore, dFC outperforms sFC in explaining task-performance measures (e.g., accuracy in a working memory task), whereas sFC and dFC explain self-reported measures (e.g., evaluation of social distress) equally well.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This study showed that a first-order autoregressive (AR) model of fMRI time series can encode most of the temporal FC fluctuations. The coefficient matrix of the AR model is used as a dynamic marker of FC (dFC), and it proved effective in explaining task performance. I would like to draw on this when examining the relationship between dynamic analyses and behavior.<br \/>\n&nbsp;<br \/>\n&nbsp;<br \/>\nReferences<\/p>\n<ul>\n<li>OHBM2018, https:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageID=3821<\/li>\n<\/ul>\n<p><strong>Conference participation report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">Rio Sugino<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Presentation title<\/strong><\/td>\n<td width=\"373\">Functional connectivity network of Kanizsa illusory contour perception<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">Rio Sugino, Satoru Hiwa, Keisuke Hachisuka, Fumihiko Murase, Tomoyuki Hiroyasu<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td
width=\"373\">Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u8b1b\u6f14\u4f1a\u540d<\/strong><\/td>\n<td width=\"373\">24th Annual Meeting of the Organization for Human Brain Mapping<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u4f1a\u5834<\/strong><\/td>\n<td width=\"373\">Santec Singapore<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>\u958b\u50ac\u65e5\u7a0b<\/strong><\/td>\n<td width=\"373\">2018\/06\/17-2018\/06\/22<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>\u8b1b\u6f14\u4f1a\u306e\u8a73\u7d30<\/li>\n<\/ol>\n<p>2018\/06\/17\u304b\u30892018\/06\/22\u306b\u304b\u3051\u3066\uff0cSuntec Singapore\u306b\u3066\u958b\u50ac\u3055\u308c\u307e\u3057\u305f24th Annual Meeting of the Organization of Human Brain Mapping\u306b\u53c2\u52a0\u3044\u305f\u3057\u307e\u3057\u305f\uff0e\u3053\u306e24th Annual Meeting of the Organization of Human Brain Mapping\u306f\uff0cOrganization for Human Brain Mapping\u306b\u3088\u3063\u3066\u4e3b\u50ac\u3055\u308c\u305f\u56fd\u969b\u4f1a\u8b70\u3067\uff0c\u30d2\u30c8\u306e\u8133\u7d44\u7e54\u304a\u3088\u3073\u8133\u6a5f\u80fd\u306e\u30de\u30c3\u30d4\u30f3\u30b0\u306b\u95a2\u3059\u308b\u7814\u7a76\u306b\u643a\u308f\u308b\u69d8\u3005\u306a\u80cc\u666f\u3092\u6301\u3064\u7814\u7a76\u8005\u3092\u96c6\u3081\uff0c\u3053\u308c\u3089\u306e\u79d1\u5b66\u8005\u306e\u30b3\u30df\u30e5\u30cb\u30b1\u30fc\u30b7\u30e7\u30f3\uff0c\u304a\u3088\u3073\u6559\u80b2\u3092\u4fc3\u9032\u3059\u308b\u3053\u3068\u3092\u76ee\u7684\u306b\u958b\u50ac\u3055\u308c\u3066\u3044\u307e\u3059<sup>1<\/sup><sup>\uff09<\/sup>\uff0e<br 
\/>\n\u79c1\u306f16\uff5e22\u65e5\u306b\u53c2\u52a0\u3044\u305f\u3057\u307e\u3057\u305f\uff0e\u672c\u7814\u7a76\u5ba4\u304b\u3089\u306f\u4ed6\u306b\u5ee3\u5b89\u5148\u751f\uff0c\u65e5\u548c\u5148\u751f\uff0c\u4e09\u597d\u3055\u3093\uff0c\u77f3\u7530\u3055\u3093\uff0c\u4e2d\u6751\u3055\u3093\uff0c\u85e4\u539f\u3055\u3093\uff0c\u897f\u6fa4\u3055\u3093\uff0c\u5927\u585a\u3055\u3093\u304c\u53c2\u52a0\u3057\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<\/p>\n<ol start=\"2\">\n<li>\u7814\u7a76\u767a\u8868\n<ul>\n<li>\u767a\u8868\u6982\u8981<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>\u79c1\u306f20, 21, 22\u65e5\u306e\u5348\u5f8c\u306e\u30bb\u30c3\u30b7\u30e7\u30f3\u300cPoster session\u300d\u306b\u53c2\u52a0\u3044\u305f\u3057\u307e\u3057\u305f\uff0e\u767a\u8868\u306e\u5f62\u5f0f\u306f\u30dd\u30b9\u30bf\u30fc\u767a\u8868\u3067\uff0c\u767a\u8868\u6642\u9593\u306f1\u6642\u9593\u3068\u306a\u3063\u3066\u304a\u308a\u307e\u3057\u305f\uff0e<br \/>\n\u4eca\u56de\u306e\u767a\u8868\u306f\uff0cFunctional connectivity network of Kanizsa illusory contour perception\u3067\u3057\u305f\uff0e\u4ee5\u4e0b\u306b\u6284\u9332\u3092\u8a18\u8f09\u81f4\u3057\u307e\u3059\uff0e<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">1. Introduction<br \/>\nIllusions are used in signs, advertisements, and virtual reality. In particular, the illusory contour (IC) is one of the illusions we often see in daily life. Conventional studies on the neural bases of the IC have focused on areas activated during the illusion, and analysis of the brain functional network associated with the IC remains inadequate. In this study, a Kanizsa figure was used to induce perception of an IC, and the brain functional network associated with the IC was investigated.<br \/>\n&nbsp;<br \/>\n2. Method<br \/>\nTwenty healthy adult subjects (16 males, age: 22.9\u00b12.8 years) viewed white figures on a black background in an fMRI scanner. 
The stimulus consisted of three images: (1) four rounded Kanizsa-type &#8216;pacmen&#8217; inducers (illusory image), (2) an image with a luminance-contrast (&#8216;real&#8217;) contour (the contour was explicitly drawn on the illusory image), and (3) a fixation cross. The whole brain was divided into 116 areas by Automated Anatomical Labeling (AAL). Correlation coefficients of the BOLD signal were calculated between each pair of regions. Graph theory analysis was performed to identify the functional network related to the illusion. Three graph theoretical indices, degree centrality (DC), clustering coefficient (CC), and betweenness centrality (BC), were calculated for each brain region from the correlation coefficient matrix. Furthermore, feature extraction by linear discriminant analysis (LDA) and a stepwise method was executed to extract the brain regions and graph theoretical indices that emphasize the difference in brain functional network structure between illusory and real contours. They were performed with the three graph theoretical indices of the 116 regions (constituting 348-dimensional data in total). A one-dimensional discriminant axis for identifying the two conditions was obtained [2]. Since the discriminant axis is obtained by multiplying the original data by the discriminant vector, important factors were extracted by regarding each element value of the discriminant vector as its degree of contribution to the two-state discrimination.<br \/>\n&nbsp;<br \/>\n3. Results<br \/>\nEleven indices used for LDA were selected using the stepwise method. The misclassification rate was 0.00% (permutation p value &lt; 0.001). Fig.2(a) shows the element values of the standardized discriminant vectors. Graph theoretical metrics of brain regions with positive coefficients are greater in the illusory task than in the real task, whereas those with negative coefficients are greater in the real task. 
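The per-region graph metrics plus LDA discriminant-vector pipeline described above can be sketched roughly as follows; this is a minimal illustration assuming networkx and scikit-learn, with a placeholder region count, an arbitrary binarising threshold, and synthetic data standing in for the two contour conditions (not the study's actual 116-region settings):

```python
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_regions = 10      # placeholder; the poster used 116 AAL regions
n_per_class = 20    # synthetic "subjects" per condition

def graph_features(corr):
    """Degree (DC), clustering coefficient (CC), betweenness centrality (BC)."""
    adj = (np.abs(corr) > 0.2).astype(int)   # arbitrary binarising threshold
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    dc = np.array([d for _, d in g.degree()])
    cc = np.array(list(nx.clustering(g).values()))
    bc = np.array(list(nx.betweenness_centrality(g).values()))
    return np.concatenate([dc, cc, bc])      # 3 * n_regions features

def random_corr():
    ts = rng.normal(size=(50, n_regions))    # fake BOLD time series
    return np.corrcoef(ts, rowvar=False)

# stand-ins for the illusory-contour and real-contour conditions
X = np.array([graph_features(random_corr()) for _ in range(2 * n_per_class)])
y = np.array([0] * n_per_class + [1] * n_per_class)

# the discriminant vector: each element's magnitude indexes how much that
# region/metric pair contributes to separating the two conditions
lda = LinearDiscriminantAnalysis().fit(X, y)
w = lda.coef_.ravel()
```

In the study, a stepwise selection step narrowed these features before the final LDA fit; here the full feature matrix is used for brevity.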
Among the graph theoretical metrics shown in Fig.2(a), the difference between the two states in the degree centrality of the right precentral gyrus (PreCG.R) and right middle frontal gyrus (MFG.R), regions associated with visual processing, was analyzed. Furthermore, representative data for each task condition, shown in Fig.2(b), were extracted so that the distance between the two conditions was maximized. The functional network with PreCG.R and MFG.R set as seed areas was visualized. Fig.2(b) confirmed that PreCG.R and MFG.R had more connections with other regions in the illusory task than in the real task, consistent with the result in Fig.2(a). It had previously been shown that PreCG.R is activated when recognizing the IC known as the Poggendorff illusion [2], and for the perception of the Kanizsa IC the increase in connectivity was found to be important. In addition, since MFG.R is involved in maintaining attention, it was suggested that the attention network related to recognition of the IC differs from that for actual contour perception.<br \/>\n&nbsp;<br \/>\n4. Conclusions<br \/>\nIn this study, the brain functional network associated with the IC was investigated using the Kanizsa figure. To clarify the difference in brain functional network between the illusory task and the real task, important graph theoretical indices explaining the two-state differences were extracted by LDA and the stepwise method. 
From these results, it was found that PreCG.R and MFG.R were related to the perception of the IC and the maintenance of attention, respectively, and their degree centrality increased when the Kanizsa IC was recognized.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>\u8cea\u7591\u5fdc\u7b54<\/li>\n<\/ul>\n<p>\u4eca\u56de\u306e\u8b1b\u6f14\u767a\u8868\u3067\u306f\uff0c\u4ee5\u4e0b\u306e\u3088\u3046\u306a\u8cea\u7591\u3092\u53d7\u3051\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<br \/>\n<strong>\u30fb\u8cea\u554f\u5185\u5bb9\uff11<\/strong><br \/>\n\u932f\u8996\u7814\u7a76\u306e\u7814\u7a76\u610f\u7fa9\u306b\u3064\u3044\u3066\u8cea\u554f\u3092\u53d7\u3051\u307e\u3057\u305f\uff0e\u3053\u306e\u8cea\u554f\u306b\u5bfe\u3057\u3066\uff0c\u5c06\u6765\u7684\u306b\u932f\u8996\u72b6\u614b\u3092\u5b9a\u7fa9\u53ef\u80fd\u306a\u8133\u6d3b\u52d5\u3092\u691c\u51fa\u3059\u308b\u3053\u3068\u306b\u3088\u308a\uff0c\u932f\u8996\u72b6\u614b\u3092\u5224\u65ad\u3059\u308b\u3053\u3068\u304c\u53ef\u80fd\u3068\u306a\u308b\u3068\u7b54\u3048\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<br \/>\n<strong>\u30fb\u8cea\u554f\u5185\u5bb9\uff12<\/strong><br \/>\n\u89e3\u6790\u7d50\u679c\u3068\u3057\u3066\u8996\u899a\u91ce\u304c\u542b\u307e\u308c\u3066\u3044\u306a\u3044\u3053\u3068\u306b\u5bfe\u3059\u308b\u89e3\u91c8\u306b\u3064\u3044\u3066\u8cea\u554f\u3092\u53d7\u3051\u307e\u3057\u305f\uff0e\u3053\u306e\u8cea\u554f\u306b\u5bfe\u3057\u3066\uff0c\u6bd4\u8f03\u3057\u3066\u3044\u308b\u30bf\u30b9\u30af\u306f\u3068\u3082\u306b\u753b\u9762\u3092\u898b\u3066\u3044\u308b\u3053\u3068\uff0c\uff12\u3064\u306e\u30bf\u30b9\u30af\u306e\u6bd4\u8f03\u306f\u5b58\u5728\u3057\u306a\u3044\u8f2a\u90ed\u3092\u77e5\u899a\u3059\u308b\u3053\u3068\u3092\u76ee\u7684\u3068\u3057\u3066\u3044\u308b\u3053\u3068\u304c\u539f\u56e0\u3067\u3042\u308b\u3068\u8003\u3048\u3089\u308c\u308b\u3068\u7b54\u3048\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<br \/>\n<strong>\u30fb\u8cea\u554f\u5185\u5bb9\uff13<\/strong><br 
\/>\n\u30b9\u30c6\u30c3\u30d7\u30ef\u30a4\u30ba\u6cd5\u306b\u304a\u3051\u308b\u9818\u57df\u9078\u629e\u306e\u969b\u306e\u9078\u629e\u57fa\u6e96\u306b\u3064\u3044\u3066\u8cea\u554f\u3092\u53d7\u3051\u307e\u3057\u305f\uff0e\u3053\u306e\u8cea\u554f\u306b\u5bfe\u3057\u3066\uff0c\u9818\u57df\u3092\u8ffd\u52a0\u3059\u308b\u30b9\u30c6\u30c3\u30d7\u3067\u306f\u6b21\u6570\u4e2d\u5fc3\u6027\u3092\u7528\u3044\u3066\u30bf\u30b9\u30af\u3092\u8b58\u5225\u3059\u308b\u969b\u306e\u8b58\u5225\u7387\u304c\u6700\u5927\u3068\u306a\u308b\u3088\u3046\u306a\u9818\u57df\u306e\u7d44\u5408\u305b\u3068\u306a\u308b\u9818\u57df\u3092\u9078\u629e\u3057\uff0c\u9818\u57df\u3092\u524a\u9664\u3059\u308b\u30b9\u30c6\u30c3\u30d7\u3067\u306f\u524a\u9664\u3057\u3066\u3082\u8b58\u5225\u7387\u304c\u6709\u610f\u306b\u6e1b\u5c11\u3057\u306a\u3044\u9818\u57df\u3092\u524a\u9664\u3059\u308b\u3068\u7b54\u3048\u307e\u3057\u305f\uff0e<br \/>\n<strong>\u30fb\u8cea\u554f\u5185\u5bb9\uff14<\/strong><br \/>\nPPI\u89e3\u6790\u306e\u7d50\u679c\u306b\u304a\u3044\u3066\u932f\u8996\u72b6\u614b\u306b\u304a\u3044\u3066\u306f\uff11\u3064\u306e\u9818\u57df\u306e\u7d50\u5408\u5148\u306e\u307f\u3067\u3042\u308b\u304c\uff0c\u5b9f\u8f2a\u90ed\u72b6\u614b\u306b\u304a\u3044\u3066\u306f\uff12\u3064\u306e\u9818\u57df\u304b\u3089\u306e\u7d50\u5408\u5148\u304c\u7d50\u679c\u3068\u3057\u3066\u793a\u3055\u308c\u3066\u3044\u308b\u7406\u7531\u306b\u3064\u3044\u3066\u8cea\u554f\u3092\u53d7\u3051\u307e\u3057\u305f\uff0e\u3053\u306e\u8cea\u554f\u306b\u5bfe\u3057\u3066\uff0c\u9078\u629e\u3055\u308c\u305f\u5168\u3066\u306e\u9818\u57df\u306b\u5bfe\u3057\u3066PPI\u89e3\u6790\u3092\u884c\u3063\u305f\u304c\uff0c\u591a\u91cd\u6bd4\u8f03\u88dc\u6b63\u3092\u304b\u3051\u308b\u3053\u3068\u3067\u4ed6\u306e\u9818\u57df\u306b\u304a\u3051\u308b\u7d50\u5408\u5148\u306f\u62bd\u51fa\u3055\u308c\u305a\uff0c\u9078\u629e\u9818\u57df\u306e\u3046\u3061\uff13\u3064\u306e\u9818\u57df\u306b\u304a\u3051\u308b\u7d50\u5408\u5148\u306e\u307f\u304c\u691c\u51fa\u3055\u308c\u3
05f\u3068\u7b54\u3048\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<\/p>\n<ul>\n<li>\u611f\u60f3<\/li>\n<\/ul>\n<p>\u521d\u3081\u3066\u306e\u5b66\u4f1a\u53c2\u52a0\u3067\u975e\u5e38\u306b\u7dca\u5f35\u3057\u305f\u304c\uff0c\uff13\u65e5\u9023\u7d9a\u3067\u767a\u8868\u3060\u3063\u305f\u306e\u3067\uff0c\uff12\u65e5\u76ee\u4ee5\u964d\u306f\u6bd4\u8f03\u7684\u843d\u3061\u7740\u3044\u3066\u767a\u8868\u3092\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u305f\uff0e\u82f1\u8a9e\u306b\u81ea\u4fe1\u306f\u306a\u304b\u3063\u305f\u304c\uff0c\u89aa\u5207\u306b\u5bfe\u5fdc\u3057\u3066\u3044\u305f\u3060\u3051\u305f\u3053\u3068\u3082\u3042\u308a\uff0c\u601d\u3063\u305f\u4ee5\u4e0a\u306b\u8a00\u3044\u305f\u3044\u3053\u3068\u304c\u4f1d\u308f\u3063\u3066\u3044\u305f\u3068\u601d\u3046\uff0e\u305f\u3060\u60f3\u5b9a\u3057\u3066\u3044\u305f\u8cea\u554f\u306b\u306f\u306a\u3093\u3068\u304b\u7b54\u3048\u3089\u308c\u305f\u304c\uff0c\u60f3\u5b9a\u3057\u3066\u3044\u306a\u304b\u3063\u305f\u8cea\u554f\u306b\u306f\u306a\u304b\u306a\u304b\u7b54\u3048\u3089\u308c\u305a\uff0c\u6e96\u5099\u304c\u4e0d\u8db3\u3057\u3066\u3044\u305f\u3068\u611f\u3058\u305f\uff0e\u307e\u305f\uff0c\u82f1\u8a9e\u3092\u805e\u304d\u53d6\u308b\u306e\u304c\u82e6\u624b\u3067\uff0c\u8cea\u554f\u3092\u3057\u3066\u3044\u305f\u3060\u3044\u3066\u3082\u3046\u307e\u304f\u805e\u304d\u53d6\u308c\u305a\u306b\u7d42\u308f\u3063\u3066\u3057\u307e\u3063\u305f\u3053\u3068\u304c\u53cd\u7701\u70b9\u3068\u3057\u3066\u6319\u3052\u3089\u308c\u308b\uff0e<br \/>\n&nbsp;<\/p>\n<ol start=\"3\">\n<li>\u8074\u8b1b<\/li>\n<\/ol>\n<p>\u4eca\u56de\u306e\u8b1b\u6f14\u4f1a\u3067\u306f\uff0c\u4e0b\u8a18\u306e\uff14\u4ef6\u306e\u767a\u8868\u3092\u8074\u8b1b\u3057\u307e\u3057\u305f\uff0e<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">\u767a\u8868\u30bf\u30a4\u30c8\u30eb\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a\u3000Reconstructing the subjective perception of object size with population receptive field modeling<br 
\/>\n\u8457\u8005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Man-Ling Ho, John A. Greenwood, D. Samuel Schwarzkopf<br \/>\n\u30bb\u30c3\u30b7\u30e7\u30f3\u540d\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Poster session<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a<br \/>\nIntroduction:<br \/>\nSize perception rarely reflects the veridical size of an object, as revealed by contextual illusions that distort the apparent size of an object through depth (e.g. Ponzo illusion) or simple geometric cues (e.g. Muller-Lyer illusion). Despite the complex neural mechanism required to process contextual backgrounds, evidence suggests that apparent object size is encoded as early as V1 (e.g. Murray et al., 2006). Here we used population receptive field (pRF) modelling as a tool for testing models of size perception. Our study aimed to reconstruct the subjective perception of an object&#8217;s size based on V1 activity. First we used a conceptual replication of previous findings on the Ponzo illusion (Experiment 1). We subsequently tested whether there is also a signature in V1 of apparent size for the Muller-Lyer illusion (Experiment 2).<br \/>\nMethods:<br \/>\nPopulation receptive field (pRF) modelling. Using functional magnetic resonance imaging (fMRI), we obtained estimates of pRF location and spread at voxel-width resolution for each participant following a similar procedure to the one outlined by Dumoulin and Wandell (2008).<br \/>\nReconstructing subjective perception of object size. We presented two equal-length stimuli (flashing between black and white at 2.5Hz) in the context of the Ponzo (Experiment 1, n = 10) and Muller-Lyer illusions (Experiment 2, n = 10). These contextual backgrounds varied the perception of the stimulus lengths (or locations, see Fig. 1). 
Based on the participant&#8217;s pRF map, we sampled V1 responses after subtracting the response to the contextual background across the stimulus location. The sampled responses were then collapsed across the hemifields to produce a signature that reflects the apparent target stimuli size for the participant. We quantified this by fitting it with a Gaussian curve (four parameters: \u03b1, baseline; \u03b2, response amplitude; \u00b5, peak location; \u03c3, spread). For the Ponzo illusion, we expected \u03c3 (reflecting stimulus length) to be larger for the apparently longer bar. For the Muller-Lyer illusion, we expected \u00b5 (reflecting stimulus location) to shift outward for dots that appeared to be further apart.<br \/>\nBehavioral measures. We obtained measures of illusion strength during the same scanning session using an adjustment procedure.<br \/>\nResults:<br \/>\nWe first checked whether the signatures were well characterized by a Gaussian curve by averaging the goodness of fit across participants and conditions (Experiment 1, R2 = .94; Experiment 2, R2 = .96). For Experiment 1 (Ponzo illusion), the fitted curve for the further bar (\u03c3 = 1.18) was wider than the nearer bar (\u03c3 = 0.87) at the group level, suggesting the further bar looked longer. We found a difference in sigma in the same direction for all but one participant. For Experiment 2 (Muller-Lyer illusion), the peak of the fitted curve for the outward arrow condition (\u00b5 = 4.43) was further from fixation than for the inward arrow condition (\u00b5 = 3.91) at the group level (see Fig. 2), suggesting the dots looked further apart with the outward arrows. We found peak shifts in the same direction for all participants. 
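The four-parameter Gaussian fit described above (α baseline, β amplitude, µ peak location, σ spread) can be sketched as follows; this is an illustrative reconstruction assuming scipy, with synthetic data standing in for the sampled V1 response signature:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, alpha, beta, mu, sigma):
    """alpha: baseline, beta: amplitude, mu: peak location, sigma: spread."""
    return alpha + beta * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# synthetic stand-in for a V1 response signature sampled across stimulus location
rng = np.random.default_rng(1)
x = np.linspace(0, 8, 81)
y = gaussian(x, 0.1, 1.0, 4.4, 1.2) + rng.normal(0, 0.02, x.size)

# fit the four-parameter curve; mu tracks apparent position, sigma apparent size
popt, _ = curve_fit(gaussian, x, y, p0=[0.0, 1.0, 4.0, 1.0])
alpha, beta, mu, sigma = popt
```

Comparing the fitted σ (or µ) between conditions then mirrors the comparisons reported for the Ponzo and Muller-Lyer experiments.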
We performed additional tests to ensure the peak shift could not be attributed to residual signal elicited by the contextual arrows; we found no significant correlation between signal in the arrows and illusion strength (r = .08, p = .837 for outward arrow condition; r = -.39, p = .268 for inward arrow condition).<br \/>\nConclusions:<br \/>\nWe were able to reconstruct the subjective perception of object size from V1 responses to the extent that it reflected the expected direction of the illusion, both for the Ponzo illusion used in previous research and the Muller-Lyer illusion. This suggests that V1 activity contains a general signature of apparent size. It is plausible that size and position representations share the same underlying mechanism. In future experiments we will therefore test whether V1 signals also encode apparent shifts in position unrelated to size perception.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\u3053\u306e\u767a\u8868\u306f\u77e5\u899a\u3057\u305f\u7269\u4f53\u306e\u30b5\u30a4\u30ba\u306e\u518d\u69cb\u6210\u306b\u95a2\u3059\u308b\u767a\u8868\u3067\u3057\u305f\uff0epRF\u306f\u8996\u899a\u306b\u95a2\u9023\u3059\u308b\u7814\u7a76\u306b\u304a\u3044\u3066\uff0c\u8fd1\u5e74\u7528\u3044\u3089\u308c\u3066\u3044\u308b\u30e2\u30c7\u30ea\u30f3\u30b0\u65b9\u6cd5\u3067\u3042\u308a\uff0c\u4eca\u56de\u306e\u30dd\u30b9\u30bf\u30fc\u767a\u8868\u306b\u304a\u3044\u3066\u3082\u983b\u7e41\u306b\u7528\u3044\u3089\u308c\u3066\u3044\u308b\u624b\u6cd5\u3067\u3057\u305f\uff0e\u3053\u306e\u624b\u6cd5\u306f\u81ea\u8eab\u306e\u7814\u7a76\u306b\u3082\u5fdc\u7528\u53ef\u80fd\u3067\u3042\u308b\u3068\u8003\u3048\u3089\u308c\u308b\u3053\u3068\u304b\u3089\uff0cpRF\u304a\u3088\u3073\u3053\u306e\u767a\u8868\u306b\u304a\u3044\u3066\u7528\u3044\u3089\u308c\u3066\u3044\u305f\u30a8\u30f3\u30b3\u30fc\u30c9\u65b9\u6cd5\u306b\u3064\u3044\u3066\u3088\u308a\u8a73\u3057\u304f\u52c9\u5f37\u3057\u3088\u3046\u3068\u611f\u3058\u305f\uff0e\u307e\u305fV1\u3092\u898b\u308b\u3053\u3068\u306b\u30
88\u3063\u3066\u523a\u6fc0\u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u306e\u5927\u304d\u3055\u3092\u63a8\u5b9a\u3067\u304d\u308b\u3053\u3068\u306b\u975e\u5e38\u306b\u9a5a\u3044\u305f\uff0e\u932f\u899a\u306e\u4e2d\u3067\u3082\u6709\u540d\u306a\u3082\u306e\u3092\u7528\u3044\u3066\u3044\u308b\u3053\u3068\u3082\u3042\u308a\uff0c\u304b\u306a\u308a\u8208\u5473\u6df1\u304f\u611f\u3058\u305f\uff0e\u95a2\u9023\u3059\u308b\u9805\u76ee\u3092\u8abf\u67fb\u3057\u7406\u89e3\u3092\u6df1\u3081\uff0c\u81ea\u8eab\u306e\u7814\u7a76\u306b\u53d6\u308a\u5165\u308c\u3066\u3044\u304d\u305f\u3044\u3068\u601d\u3046\uff0e<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">\u767a\u8868\u30bf\u30a4\u30c8\u30eb\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aThe subjectiveness of illusory motion perception: evidence from effective connectivity study<br \/>\n\u8457\u8005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Sigita Cinciute, Bogdan Draganski<br \/>\n\u30bb\u30c3\u30b7\u30e7\u30f3\u540d\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Poster session<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a<br \/>\nIntroduction:<br \/>\nCan an illusion be rigorously defined? In the past, visual illusions were considered an inappropriate object of study. Nowadays, they are a powerful tool for understanding the neurobiology of vision (Eagleman, 2001). Illusory motion refers to a perception of motion that is absent from the presented physical stimulus. There is general agreement in ascribing motion illusion to higher-level processing in the visual cortex, but debate remains about the exact role of the cortical networks triggering it. Yet the perception of illusions is subjective: some subjects are more prone to perceive them than others. Moreover, some can consciously suppress or induce them. 
Thus, we hypothesise that the illusion is caused by the distributed representation of sensory information processing among several brain regions and may be affected by biological traits such as age.<br \/>\nMethods:<br \/>\nStructural and functional data were acquired using a 3T Siemens Prisma MRI scanner (Erlangen, Germany) equipped with a 64-channel head coil. Functional data were acquired with a gradient echo T2*-weighted pulse sequence (repetition time [TR] \/ echo time [TE] = 66 ms\/30 ms, slice thickness = 2.5 mm, interslice gap = 3.0 mm, matrix = 64 \u00d7 64, FOV = 1152 mm, flip angle [FA] = 90\u00b0). Prior to fMRI, we also acquired a high-resolution T1-weighted structural image, using a standard MPRAGE sequence (TR\/TE\/TI = 2300\/2.98\/900 ms, slice thickness = 1 mm, FOV = 240mm*256mm, FA = 9\u00b0). The event-related experiment employed 6 random presentations of 7 optical-illusion-causing images (Youri Messen-Jaschin\u00a9 and Akiyoshi Kitaoka\u00a9). Each ~7 sec trial contained a 4 sec rating with 3 sec of fixation. Participants were asked for a single rating of the stimuli: whether they experienced depth or motion (Yes3D) when looking at it or not (No3D). During the resting period, the eyes were fixated on the cross. Pre-processing and MRI data analysis were carried out with the functional connectivity toolbox Conn v17 (Whitfield-Gabrieli &amp; Nieto-Castanon, 2012). State-related connectivity differences (no illusion vs illusion) were assessed through a weighted general linear model (wGLM) with bivariate regression coefficients. Behavioural data were analysed using Matlab R2014a.<br \/>\nResults:<br \/>\nThe final study contained fourteen right-handed subjects (7 women) with normal or corrected-to-normal vision and no apparent physical, neurological, or psychiatric disorder. All subjects were separated into two groups regarding their age: younger (n=7, 30.86 \u00b1 7.90 years), older (n=7, 61.29 \u00b1 5.25 years). 
Younger subjects significantly more often perceived no illusion (No3D: 34.0 \u00b1 14.3 % vs. 18.0 \u00b1 24.4 %, p&lt;0.1) and were more confident than older subjects (Uncertain: 11.6 \u00b1 11.7 % vs. 24.5 \u00b1 15.5 %, p&lt;0.1). However, both groups perceived the motion illusion equally often (Yes3D: 54.4 \u00b1 15.5 % vs 57.5 \u00b1 15.3 %). Similar percentages were observed regardless of the presented stimuli.<br \/>\nWe found a significant interaction between visual, associative, and attention control systems with respect to subjects&#8217; age in a ROI-to-ROI two-sample t-test for the between-subjects (Younger&gt;Older) contrast in the Yes3D&gt;No3D condition, with the threshold at FDR seed-level correction (p&lt;0.05) and FWE correction (p&lt;0.05) for network-based statistics (Fig.1.).<br \/>\nConclusions:<br \/>\nWe conclude that illusory motion perception is affected by subjects&#8217; age via visual associative and attention control systems. Moreover, we presume that the visual art of Youri Messen-Jaschin\u00a9 (Messen-Jaschin, n.d.) 
and Akiyoshi Kitaoka\u00a9 (Ashida, Kuriki, Murakami, Hisakata, &amp; Kitaoka, 2012) similarly induce anomalous motion illusion and may involve the same neural mechanisms.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\u3053\u306e\u767a\u8868\u306f\u753b\u50cf\u304c\u52d5\u3044\u3066\u898b\u3048\u308b\u932f\u8996\u306b\u304a\u3044\u3066\uff0c\u932f\u8996\u304c\u8d77\u3053\u308b\u304b\u3069\u3046\u304b\u306b\u306f\u88ab\u9a13\u8005\u306e\u5e74\u9f62\u304c\u95a2\u308f\u3063\u3066\u3044\u308b\u53ef\u80fd\u6027\u304c\u3042\u308b\u3053\u3068\u3092\u793a\u3057\u305f\uff0e\u5e74\u9f62\u5dee\u3092\u6bd4\u8f03\u3059\u308b\u305f\u3081\u306b\uff0c\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u89e3\u6790\u3092\u884c\u3063\u3066\u3044\u305f\u304c\uff0ceffective connectivity\u3092\u7528\u3044\u3066\u3044\u305f\uff0e\u4ed6\u306e\u767a\u8868\u306b\u304a\u3044\u3066\u3082effective connectivity\u306f\u591a\u304f\u7528\u3044\u3089\u308c\u3066\u3044\u3066\uff0c\u975e\u5e38\u306b\u6ce8\u76ee\u5ea6\u304c\u9ad8\u307e\u3063\u3066\u3044\u308b\u3068\u611f\u3058\u305f\u305f\u3081\uff0c\u3053\u306e\u767a\u8868\u306b\u95a2\u5fc3\u3092\u3088\u308a\u5f37\u304f\u6301\u3063\u305f\uff0e\u307e\u305f2\u7a2e\u985e\u306e\u932f\u8996\u753b\u50cf\u3092\u7528\u3044\u3066\u304a\u308a\uff0c\u3053\u308c\u3089\u306e\u932f\u8996\u753b\u50cf\u9593\u3067\u306e\u6bd4\u8f03\u3082\u884c\u3063\u3066\u3044\u308b\uff0e\u3053\u306e\u8003\u3048\u65b9\u306f\uff0c1\u3064\u306e\u932f\u8996\u753b\u50cf\u306b\u3068\u3089\u308f\u308c\u305a\uff0c\u4f3c\u305f\u3082\u306e\u3092\u8907\u6570\u4f7f\u7528\u3059\u308b\u3053\u3068\u3067\uff0c\u932f\u8996\u753b\u50cf\u306e\u5171\u901a\u8981\u7d20\u306b\u95a2\u9023\u3059\u308b\u8133\u6d3b\u52d5\u304c\u691c\u51fa\u53ef\u80fd\u3067\u3042\u308b\u306e\u3067\u306f\u306a\u3044\u304b\u3068\u601d\u3063\u305f\uff0e\u81ea\u8eab\u306e\u7814\u7a76\u306b\u3082\u751f\u304b\u305b\u308b\u8003\u3048\u65b9\u306a\u306e\u3067\u306f\u306a\u3044\u304b\u3068\u601d\u3046\uff0e<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td 
width=\"529\">\u767a\u8868\u30bf\u30a4\u30c8\u30eb\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1aPsychoPhysiological Interaction of CoActivation Patterns: tracking fMRI network dynamics during task<br \/>\n\u8457\u8005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Lorena Freitas<br \/>\n\u30bb\u30c3\u30b7\u30e7\u30f3\u540d\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Poster session<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a<br \/>\nIntroduction:<br \/>\nInvestigating task-related modulations of Functional Connectivity (FC) with functional magnetic resonance imaging (fMRI) is crucial to reveal the neurological underpinnings of cognitive processing3. Existing analytical methods hypothesise sustained FC within the duration of a task, but this assumption has been shown to be too limiting by recent imaging studies.<br \/>\nChang and Glover (2010)1 were the first of many to show that FC fluctuates over time in resting state recordings, and several methods \u2013 such as CoActivation Pattern (CAP) analysis7,8 \u2013 were soon developed to reveal brain dynamics during rest. Unsurprisingly, activation time courses during task execution are also known to show exquisite complexity that cannot be captured by standard stationary approaches4, but existing task-based methods such as Psychophysiological Interaction Analysis2 (PPI) are yet to fully exploit the dynamic configurations of brain activity. Unveiling the dynamics of task-dependent FC may thus shed light on previously uncharacterised brain function during performance of cognitive tasks.<br \/>\nHere, we describe a novel seed-based method, called Psychophysiological Interaction of Co-Activation Patterns (PPI-CAPs), that extracts task-dependent patterns of brain activity from a subset of the available fMRI data. 
Moreover, the method provides insight into the dynamics of pattern occurrences and their consistency across subjects over time. This approach contributes to the state-of-the-art methodologies for tracking brain function dynamics6.<br \/>\nMethods:<br \/>\nfMRI data from 16 healthy subjects watching a short video5 were used. The film alternated between two contexts: 1. images of children playing outside (fun); 2. scenes where scientific concepts were explained in a laboratory (science). 15 subjects whose scrubbed data9 contained more than 80% of original frames were analysed.<br \/>\nThe data were motion corrected, warped to MNI space, smoothed (6mm FWHM), deconvolved with the hemodynamic response function using a Wiener filter and z-scored in time. The Posterior Cingulate Cortex (PCC, 6mm sphere centred at x=0, y=-58, z=16) was selected as a seed. The seed signal was thresholded at T=0.5 so that 30% of the available timepoints were selected.<br \/>\nFor validation purposes, an initial stationary analysis step is performed. Here, frames in which the seed activity was above threshold were multiplied by a context variable and then averaged, to match the contrast of a PPI analysis (fun-science, Fig.1A). The spatial correlation between the obtained stationary interaction map (siMap) and the PPI results was measured using Pearson&#8217;s coefficient.<br \/>\nFor the dynamic analysis, k-means clustering (k=6) was applied to all suprathreshold frames. Those in each cluster were then averaged to form PPI-CAPs (Fig.1B). 
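The suprathreshold-frame selection and k-means averaging step described above can be sketched minimally as follows; scikit-learn is assumed, and the seed choice, data, and array sizes are placeholders rather than the poster's actual PCC seed and fMRI volumes:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_t, n_vox = 300, 50               # timepoints x voxels (placeholder sizes)
frames = rng.normal(size=(n_t, n_vox))
seed = frames[:, 0]                # stand-in for the PCC seed time course

# keep the frames where the seed signal is in its top ~30%, as in the poster
thr = np.quantile(seed, 0.7)
supra = frames[seed > thr]

# k-means over the suprathreshold frames; averaging each cluster yields a CAP
k = 6
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(supra)
caps = np.array([supra[km.labels_ == i].mean(axis=0) for i in range(k)])
```

Counting each cluster's occurrences per task condition then gives the task-modulation statistics the poster reports.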
These were then analysed in terms of their polarity (positive or negative) and number of occurrences during each task.<br \/>\nResults:<br \/>\nOur siMap correlated significantly with the spatial pattern obtained from a PPI analysis (r=0.56, p&lt;0.001), which showed a significant increase in FC between the PCC and the right V5 during &#8216;fun&#8217; scenes (height threshold T=3.8, p&lt;0.001; rightV5, MNI x=39, y=-67, z=-2, cluster k=185, pfwe&lt;0.001).<br \/>\nOur dynamic analysis revealed 6 re-occurring patterns of coactivation (Fig.2A), 3 of which were significantly modulated by task (Fig.2C). Fig 2B shows the time-locked occurrence of each CAP and their consistency across all subjects over time.<br \/>\nConclusions:<br \/>\nOur novel method reveals dynamic building blocks of task-modulated FC which cannot be captured by stationary methods such as PPI analysis, thus providing a more accurate picture of brain activity during task performance \u2013 here, in a natural setting (movie watching). 
Moreover, this cutting-edge approach successfully identifies dynamic task-dependent patterns using only 30% of the available data, dramatically reducing the computation time for large datasets.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\u3053\u306e\u767a\u8868\u306f\uff0c\u5f93\u6765\u306ePPI\u89e3\u6790\u3067\u306f\u691c\u51fa\u3067\u304d\u306a\u304b\u3063\u305f\u52d5\u7684\u306a\u8133\u6d3b\u52d5\u3092\u89e3\u6790\u53ef\u80fd\u306aPPI-CAPs\u306b\u3064\u3044\u3066\u767a\u8868\u3092\u884c\u3063\u3066\u3044\u305f\uff0e\u81ea\u8eab\u306e\u767a\u8868\u306bPPI\u89e3\u6790\u3092\u7528\u3044\u3066\u3044\u305f\u305f\u3081\uff0c\u975e\u5e38\u306b\u8208\u5473\u3092\u6301\u3063\u305f\u304c\uff0c\u305d\u308c\u3068\u540c\u6642\u306bPPI\u306b\u3064\u3044\u3066\u306e\u7814\u7a76\u306f\u5148\u306b\u9032\u3093\u3067\u3044\u308b\u3053\u3068\u3092\u5b9f\u611f\u3057\u305f\uff0e\u89e3\u6790\u65b9\u6cd5\u306f\u96e3\u3057\u304b\u3063\u305f\u304c\uff0c\u30dd\u30b9\u30bf\u30fc\u304c\u56f3\u3067\u8aac\u660e\u3055\u308c\u3066\u3044\u305f\u305f\u3081\uff0c\u304b\u306a\u308a\u7406\u89e3\u3057\u3084\u3059\u3044\u30dd\u30b9\u30bf\u30fc\u3068\u306a\u3063\u3066\u3044\u305f\u306e\u3067\uff0c\u3042\u308b\u7a0b\u5ea6\u7406\u89e3\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u305f\uff0e\u5185\u5bb9\u3060\u3051\u3067\u306a\u304f\u30dd\u30b9\u30bf\u30fc\u4f5c\u88fd\u306b\u3064\u3044\u3066\u3082\u53c2\u8003\u306b\u3067\u304d\u308b\u90e8\u5206\u304c\u591a\u304b\u3063\u305f\u306e\u3067\u975e\u5e38\u306b\u5370\u8c61\u306b\u6b8b\u308b\u767a\u8868\u3060\u3063\u305f\uff0e\u76f4\u63a5\u767a\u8868\u3092\u805e\u304f\u6642\u9593\u304c\u5c11\u306a\u304f\uff0c\u8a73\u3057\u304f\u672c\u4eba\u306b\u805e\u304f\u3053\u3068\u306f\u3067\u304d\u306a\u304b\u3063\u305f\u306e\u3067\uff0c\u3082\u3063\u3068\u8a73\u3057\u3044\u8a71\u3092\u805e\u304d\u305f\u304b\u3063\u305f\u3068\u601d\u3046\uff0e<br \/>\n&nbsp;<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">\u767a\u8868\u30bf\u30a4\u30c8\u30eb\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 
\uff1aA generative model for inferring whole-brain effective connectivity<br \/>\n\u8457\u8005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Stefan Fr\u00e4ssle<br \/>\n\u30bb\u30c3\u30b7\u30e7\u30f3\u540d\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a Oral session<br \/>\nAbstract\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \uff1a<br \/>\nIntroduction:<br \/>\nDeveloping whole-brain models that infer the effective (directed) connectivity among neuronal populations from neuroimaging data represents a central challenge for computational neuroscience. Dynamic causal models (DCMs; Friston et al., 2003) of functional magnetic resonance imaging (fMRI) data have been used frequently for inferring effective connectivity, but are presently restricted to small graphs (up to 10 regions) to keep model inversion feasible. Here, we introduce regression DCM (rDCM; Fr\u00e4ssle et al., 2017, Fr\u00e4ssle et al., under review) as a novel variant of DCM for fMRI that enables whole-brain effective connectivity analyses.<br \/>\nMethods:<br \/>\nIn brief, rDCM converts the numerically costly estimation of coupling parameters in differential equations of a linear DCM in the time domain into an efficiently solvable Bayesian linear regression in the frequency domain. This necessitates several modifications to the original DCM implementation, including: (i) translation from time to frequency domain by exploiting the differential property of the Fourier transform, (ii) linearization of the hemodynamic forward model, (iii) mean-field approximation (across regions), and (iv) use of a Gamma prior on noise precision. These changes allow us to derive a highly efficient variational Bayesian update scheme. 
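The core trick described above, turning the time-domain differential equation into a linear regression in the frequency domain via the differential property of the Fourier transform, can be sketched on a toy linear system. This is an illustrative reconstruction, not the rDCM implementation: it omits the hemodynamic model and sparsity priors, and uses plain least squares in place of the variational Bayesian scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, dt = 4, 20000, 0.1
A_true = np.array([[-0.5, 0.2, 0.0, 0.0],
                   [0.0, -0.5, 0.3, 0.0],
                   [0.0, 0.0, -0.5, 0.2],
                   [0.1, 0.0, 0.0, -0.5]])

# simulate dx/dt = A x + noise with a simple Euler scheme
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (A_true @ x[t] + rng.normal(scale=0.5, size=n))

# Fourier transform: differentiation becomes multiplication by i*omega
# (here the exact discrete-shift operator, which tends to i*omega as dt -> 0),
# so each region's derivative spectrum is linear in all regions' spectra
X = np.fft.rfft(x, axis=0)
omega = 2 * np.pi * np.fft.rfftfreq(T, d=dt)
lam = (np.exp(1j * omega * dt) - 1) / dt
dX = lam[:, None] * X

# stack real and imaginary parts and solve a per-region linear regression
design = np.vstack([X.real, X.imag])
target = np.vstack([dX.real, dX.imag])
B, *_ = np.linalg.lstsq(design, target, rcond=None)
A_hat = B.T        # row i: estimated couplings driving region i; close to A_true
```

The regression recovers the coupling matrix approximately; rDCM replaces the least-squares step with a Bayesian scheme that also prunes connections.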
Additionally, by incorporating sparsity constraints, rDCM does not require any a priori assumptions about the network&#8217;s connectivity structure but prunes fully (all-to-all) connected networks as part of model inversion.<br \/>\nResults:<br \/>\nFirst, we used simulations to test how accurately rDCM could recover the known network architecture (i.e., the connections present in the data-generating model) for large networks with 66 regions. We mapped sensitivity and specificity for various settings of the signal-to-noise ratio of the fMRI data and the a priori assumptions about the sparseness of the network (Fig. 1). These simulations suggest that rDCM is a suitable tool for inferring sparse effective connectivity patterns. In previous work, we already demonstrated face validity of the approach with respect to parameter recovery and model selection (Fr\u00e4ssle et al., 2017).<br \/>\nWe then applied rDCM to several empirical fMRI datasets. In particular, we showed that it is feasible to infer effective connection strengths from fMRI data using a network with more than 100 regions and 10,000 connections. Sparse rDCM was applied to fMRI data from a motor paradigm (visually paced fist closings) (Fig. 2A). Model inversion resulted in a sparse graph with biologically plausible connectivity (Fig. 2B, left) and driving input patterns (Fig. 2B, right): we observed pronounced driving inputs to and functional integration among motor regions (precentral, SMA, cerebellum), visual regions (cuneus, occipital), regions associated with the somatosensory and proprioceptive aspects of the task which are essential for visuomotor integration (postcentral, parietal), and frontal regions engaging in top-down control. The inferred effective connectivity pattern and its sparseness become most apparent when visualized as a connectogram (Fig. 2C) or projected onto the whole-brain volume (Fig. 
2D).<br \/>\nNotably, inversion of this whole-brain model was computationally highly efficient and took only 10 minutes on standard hardware.<br \/>\nConclusions:<br \/>\nRegression DCM allows one to infer effective connectivity, with connection-specific estimates, in whole-brain networks from fMRI data. Importantly, in contrast to functional connectivity measures, effective connectivity discloses the directionality of functional couplings. We anticipate that rDCM will find useful application in connectomics (e.g., enabling graph theoretical approaches to effective connectivity; Bullmore &amp; Sporns, 2009) and could offer tremendous opportunities for clinical applications, such as whole-brain phenotyping of patients (Stephan et al., 2015).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This talk presented rDCM, a regression-based variant of DCM that makes whole-brain DCM possible, something infeasible with conventional DCM. The modelling converts the numerically costly estimation of coupling parameters in the differential equations of a time-domain linear DCM into a Bayesian linear regression. The material was rather advanced and there was much I could not fully follow, but with effective connectivity attracting growing attention, and having heard of DCM before without knowing it in detail, I found the talk very interesting. I would like to try incorporating the approach into my own research.<br \/>\n&nbsp;<br \/>\n&nbsp;<br \/>\nReferences<br \/>\n1) OHBM2018 Annual meeting,<br \/>\nhttps:\/\/www.humanbrainmapping.org\/i4a\/pages\/index.cfm?pageID=3821<br \/>\n<strong>Conference participation report<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"147\"><strong>Reporter<\/strong><\/td>\n<td width=\"373\">\u77f3\u7530\u3000\u7fd4\u4e5f<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Title (Japanese)<\/strong><\/td>\n<td width=\"373\">\u30b0\u30e9\u30d5\u7406\u8ad6\u30e1\u30c8\u30ea\u30af\u30b9\u3092\u7528\u3044\u305f\u8907\u6570\u8133\u72b6\u614b\u306e\u6a5f\u80fd\u7684\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306e\u9055\u3044\u306e\u62bd\u51fa<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Title (English)<\/strong><\/td>\n<td width=\"373\">Extracting functional network differences<br \/>\nfrom multiple brain states based on graph theory metrics<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Authors<\/strong><\/td>\n<td width=\"373\">\u77f3\u7530\u7fd4\u4e5f\uff0c\u65e5\u548c\u609f\uff0c\u5ee3\u5b89\u3000\u77e5\u4e4b<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Organizer<\/strong><\/td>\n<td width=\"373\">OHBM<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Conference<\/strong><\/td>\n<td width=\"373\">OHBM2018<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Venue<\/strong><\/td>\n<td 
width=\"373\">Suntec Singapore<\/td>\n<\/tr>\n<tr>\n<td width=\"147\"><strong>Dates<\/strong><\/td>\n<td width=\"373\">2018\/06\/17-2018\/06\/21<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<br \/>\n&nbsp;<\/p>\n<ol>\n<li>Conference details<\/li>\n<\/ol>\n<p>From 2018\/06\/17 to 2018\/06\/21, I attended OHBM2018, held at the Suntec Singapore convention centre. OHBM is a leading organization for understanding the human brain through neuroimaging, and its annual meeting serves as an educational forum for exchanging the latest and most innovative research on modalities for mapping neural activity.<br \/>\nI attended the full programme. From our laboratory, Prof. Hiroyasu, Prof. Hiwa, Miyoshi, Nakamura, Nishizawa, Fujiwara, Sugino, and Otsuka also participated.<\/p>\n<ol start=\"2\">\n<li>Research presentation\n<ul>\n<li>Presentation overview<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>I took part in the poster session. Over the three days of the 19th, 20th, and 21st, I answered questions in front of my poster for a total of three hours. The abstract of the presentation is given below.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Introduction:<br \/>\nConventional functional connectivity analyses have revealed the existence of several characteristic networks like the DMN [1]. Most of these studies have analyzed changes in the connections depending on diverse cognitive states, treating these functional networks as ROIs. However, in these methods, the actual network structure formed with the other brain regions that are not included in the ROIs gets overlooked. To address this issue, we have proposed a method of extracting significant features that explain the functional network differences among different cognitive states of humans through multivariate statistical analysis. The effectiveness of the proposed method was verified on the basis of fMRI data collected during the N-back task.<br \/>\nMethods:<br \/>\nFrom the regional graph theoretical metric data, which was obtained from the functional network in multiple cognitive states, the proposed method extracts the minimum regions and their graph metrics essential for emphasizing the difference among the cognitive states. Here, the brain activities during the N-back tasks (N = 1, 2, 3) of 13 healthy adults (age 22 \u00b1 0.9 years) were measured as 3 cognitive conditions with different working memory loads and were used to test the proposed method. First, based on the AAL, the regional average BOLD signal was extracted and the ROI-wise correlation coefficient matrix was calculated for each of the 3 conditions. Next, the graph theory metrics degree centrality, clustering coefficient, and betweenness centrality were extracted for each matrix. 
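The three regional metrics named above can be computed from a thresholded correlation matrix with networkx. This is my own minimal sketch on a made-up 5-node matrix, not the study's code or data; the 116-region AAL matrix is reduced to 5 nodes and the threshold of 0.5 is an illustrative choice.

```python
import numpy as np
import networkx as nx

# Toy stand-in for an ROI-wise correlation matrix (5 nodes for illustration).
C = np.array([[1.0, 0.8, 0.1, 0.7, 0.2],
              [0.8, 1.0, 0.6, 0.1, 0.3],
              [0.1, 0.6, 1.0, 0.2, 0.9],
              [0.7, 0.1, 0.2, 1.0, 0.4],
              [0.2, 0.3, 0.9, 0.4, 1.0]])

# Binarise: keep edges whose |r| exceeds a threshold (diagonal excluded).
thr = 0.5
adj = (np.abs(C) > thr) & ~np.eye(len(C), dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

# The three regional metrics used in the study, one value per node.
degree = nx.degree_centrality(G)
clustering = nx.clustering(G)
betweenness = nx.betweenness_centrality(G)
print(degree, clustering, betweenness)
```

Each metric is a dict over nodes, so stacking the three per-region vectors for all 116 AAL regions gives the 348-dimensional feature vector used later in the analysis.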
In order to identify the brain regions and the graph theoretical metrics that explain the differences among the 3 groups, the 3 groups, each described by 348-dimensional data (3 graph theoretical metrics for each of 116 regions), were discriminated on a one-dimensional axis through multiple discriminant analysis (MDA). Since each element of the discriminant vector (eigenvector) derived by MDA corresponds to a graph theoretical metric of one region, the larger the element value, the more significant that feature is in representing the difference among the 3 conditions. Moreover, to eliminate the feature quantities that did not contribute to the discrimination of the 3 conditions and to obtain a combination of the minimum feature vectors, dimensionality reduction was applied by a stepwise method. Here, we assumed that the network characteristics varied in the order of the 1, 2, and 3-back conditions. Therefore, the positions of the 3 conditions on the 1-D discriminant axis were forced to be correlated with the relationship of the working memory load amounts. Finally, the statistical significance of the separation of the 3 groups was tested using a permutation test.<br \/>\nResults:<br \/>\nA total of 16 feature quantities were selected by the stepwise method, and the 3 groups were clearly discriminated with a discrimination error of 0.0217 by MDA (permutation p &lt; 0.001). Fig. 1 shows the element values of the eigenvectors derived by MDA. The graph metric of a brain region with a positive eigenvector element value decreased as the working memory load increased, while that with a negative value increased. Furthermore, as shown in Fig. 2, in the neural circuit consisting of the prefrontal cortex and the basal ganglia related to working memory [2,3], an increase in the working memory load reduces the network centrality of the prefrontal cortex and increases that of the basal ganglia. 
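The permutation test for group separation can be sketched as follows. The data and the separation statistic here are my own illustrative choices (hypothetical 1-D discriminant scores for three conditions, 13 subjects each to match the study's group size), not the study's actual discriminant output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D discriminant scores for three conditions (1-, 2-, 3-back).
scores = np.concatenate([rng.normal(0.0, 1, 13),
                         rng.normal(1.5, 1, 13),
                         rng.normal(3.0, 1, 13)])
labels = np.repeat([0, 1, 2], 13)

def separation(s, g):
    # Between-group variance of the group means: larger = better separated.
    return np.var([s[g == k].mean() for k in range(3)])

observed = separation(scores, labels)

# Permutation test: shuffle the condition labels to build the null
# distribution of the statistic, then compare the observed value to it.
n_perm = 2000
null = np.array([separation(scores, rng.permutation(labels))
                 for _ in range(n_perm)])
p = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(p)
```

The "+1" in numerator and denominator is the standard correction that keeps the permutation p-value strictly positive.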
These results suggested that the nodal centrality of the functional network shifts from the prefrontal cortex to the basal ganglia when storing additional information.<br \/>\nConclusions:<br \/>\nIn this study, the significant regional characteristics explaining the functional network differences among different cognitive states were extracted by MDA and a stepwise method using fMRI data during the N-back task. As a result, we could explain the differences among the 3 states with different working memory loads by the graph metrics of 16 regions. Therefore, the effectiveness of the proposed method was demonstrated on the N-back task.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<ul>\n<li>Questions and answers<\/li>\n<\/ul>\n<p>I received the following questions during the presentation.<br \/>\n<strong>Question 1<\/strong><br \/>\nWhich atlas do you use?<br \/>\n<strong>Question 2<\/strong><br \/>\nWhy do you use graph theory?<br \/>\nCould you not analyse the edges directly?<br \/>\n<strong>Question 3<\/strong><br \/>\nWhy not use PCA?<br \/>\n<strong>Question 4<\/strong><br \/>\nHow do you perform the classification?<br \/>\n<strong>Question 5<\/strong><br \/>\nThere are other graph theoretical features; why did you choose these three?<br \/>\n<strong>Question 6<\/strong><br \/>\nWhat are the four states?<br \/>\n<strong>\u00a0<\/strong><br \/>\n<strong>\u00a0<\/strong><\/p>\n<ul>\n<li>Impressions<\/li>\n<\/ul>\n<p>Having attended last year&#8217;s OHBM as well, I was able to present without being nervous. I still struggled with my limited listening, speaking, and reading skills, but I was glad that I could handle more on my own than the year before. The research trends in the oral sessions differed greatly from last year, and I learned what is attracting attention now. I want to make use of this experience in my future research.<\/p>\n<ol start=\"3\">\n<li>Attended presentations<\/li>\n<\/ol>\n<p>At this conference, I attended the following 5 presentations.<br \/>\n&nbsp;<br \/>\nNo1<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\uff1a<br \/>\n1189\u3000A brain signature predictive of autism<br \/>\nAuthors\uff1aSebastian Urchs, Christian Dansereau, Angela Tam, Gleb Bezgin, John Lewis, Alan 
Evans, Pierre Bellec<br \/>\nSession\uff1aposter<br \/>\nAbstract\uff1a<br \/>\nIntroduction:<br \/>\nAutism spectrum disorder (ASD) is a heterogeneous developmental disorder, characterized by impairments of social interaction and rigid behavioural patterns [1] with an estimated prevalence greater than 1% of children [2]. Efforts to establish a descriptive signature of the underlying changes in functional and structural brain organization have been hindered by the considerable heterogeneity of the disorder [3,4]. Here we propose to train a model to predict ASD diagnosis not for the entire ASD population but for a subset of individuals with a common signature of brain alterations that are highly predictive of an ASD diagnosis.<br \/>\nMethods:<br \/>\nData were taken from the multi-site ABIDE 1 dataset [6]. ASD patients and controls were matched on motion and age within each site [7]. The final, quality controlled sample included 370 male subjects (182 ASD, 188 TDC, mean [SD] age 16.93 [7.18]) from 10 sites. Structural images were preprocessed and cortical thickness (CT) estimated with the CIVET pipeline [8]. Functional data were preprocessed with the NIAK pipeline [9]. Individual seed based functional connectivity (FC) was estimated for 20 networks [10]. Linear effects of age and site were regressed from CT estimates, and of age, site and motion from FC estimates.<br \/>\nFor each modality we extracted 5 features [11]. A feature is the average FC or CT in a homogeneous subgroup of the data. Individual similarity with the extracted subfeatures is the spatial correlation, or subtype weight. 
The normalized subtype weights of 5 CT subtypes, 5 * 20 FC subtypes, and individual Age, Brain volume, and mean FD scores were used as features to train the HPS model.<br \/>\nThe HPS model has two stages: First, we identified subjects that were consistently correctly classified by a support vector machine on 1000 random subsamples of the data. Second, we trained a regularized logistic regression [12] to predict only these easy cases.<br \/>\nModel performance on new data was estimated through 10-fold cross-validation. Lastly we trained the HPS model on the full data set to investigate the features with non-zero weights.<br \/>\nResults:<br \/>\nAcross subsamples, 40% of ASD patients were either most of the time (&gt;90%) or almost never (&lt;10%) correctly classified. The remaining individuals had intermediate hit probabilities that followed a bimodal distribution with extrema at 0 and 1. The cross-validated specificity (SPC) and sensitivity (SEN) of the second stage was 96.81% and 18.68% respectively. In our almost ideally balanced sample, this translates to a precision (PPV) of 85%. 40 out of the 108 features were assigned non-zero weights. The strongest weights were assigned to FC features based on seed networks in the task control network and medial prefrontal cortex (A, and B in figure 1). We compared ASD individuals with and without the HPS signature on a number of metrics (age, motion, brain volume, FIQ, ADOS Gotham severity) but found no significant differences between groups. Our model provides a prediction of ASD diagnosis with high specificity, and non-trivial sensitivity. The precision of our prediction (SPC 96.81, SEN 18.68, PPV 85.4) is higher than that of three comparable published models (avg SPC 80.16, SEN 79.27, PPV 79.9) [12\u201314]. Such high confidence in a positive test combined with homogeneous brain abnormalities will open up new avenues for targeted interventions. 
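As I understand the two-stage HPS idea, it can be sketched as follows on synthetic data. All sizes, features, and the "signature" subgroup below are my own assumptions for illustration, not ABIDE data or the authors' code.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: rows 0-49 are "signature" patients (clearly separable
# on feature 0), rows 50-99 patients without the signature, 100-199 controls.
n = 200
X = rng.normal(size=(n, 10))
y = np.zeros(n, dtype=int)
y[:100] = 1
X[:50, 0] += 3.0

# Stage 1: estimate each subject's "hit probability" -- how often it is
# correctly classified by an SVM trained on random subsamples.
hits = np.zeros(n)
tries = np.zeros(n)
for _ in range(100):
    train = rng.choice(n, size=n // 2, replace=False)
    test = np.setdiff1d(np.arange(n), train)
    clf = LinearSVC(dual=False).fit(X[train], y[train])
    hits[test] += clf.predict(X[test]) == y[test]
    tries[test] += 1
hit_rate = hits / tries

# Stage 2: train a sparse logistic regression to predict only the "easy"
# patients, i.e. those almost always (>90%) classified correctly.
easy = ((hit_rate > 0.9) & (y == 1)).astype(int)
stage2 = LogisticRegression(penalty="l1", solver="liblinear").fit(X, easy)
print(hit_rate[:50].mean(), hit_rate[50:100].mean())
```

The L1 penalty in the second stage mirrors the paper's regularized logistic regression, which is what leaves most features with zero weight.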
We argue that this highly predictive signature (HPS) may represent a distinct subtype of ASD and that the individuals who express it may share similarities in trajectory and treatment response.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>A study that uses machine learning (an SVM and the HPS model) to extract the brain network structure predictive of autism spectrum disorder. Twenty seed-based functional networks and 5 cortical thickness subtypes were computed and used as training features.<br \/>\nThe conclusion was that the HPS is useful for classification. I intend to look up what exactly the HPS is.<br \/>\n&nbsp;<br \/>\nNo2<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\uff1a2276 A Graph Theoretical Approach for Classifying Autism Using Support Vector Machines<br \/>\nAuthors\uff1aAmirali Kazeminejad, Roberto Sotero Diaz<br \/>\nSession\uff1aposter<br \/>\nAbstract\uff1a<br \/>\nIntroduction:<br \/>\nAutism Spectrum Disorder (ASD) is a neurodevelopmental disorder which manifests in early childhood and persists into adulthood. While there is no global cure for ASD, early diagnosis of autistic individuals improves their quality of life (Fernell et al., 2013). 
Automatic diagnosis algorithms with accuracies above 90% based on machine learning have already been proposed for Alzheimer&#8217;s Disease (AD) patients (Chen et al., 2011). However, similar techniques, when applied to ASD, have resulted in much lower accuracies. Recent studies have found topological differences between ASD and normal brains which can be quantified by graph theory (Zeng et al., 2017). Here we present a new feature extraction pipeline using graph theoretical metrics derived from resting state fMRI that improves the accuracy of ASD classification in children by more than 15% when compared to state of the art methods (Abraham et al., 2017). To the best of our knowledge, this method has never been applied to ASD diagnosis.<br \/>\nMethods:<br \/>\nResting state fMRI data from 104 subjects (Control= 59, Male=79, Age = 8.79\u00b10.85 years) was obtained from the Autism Brain Imaging Data Exchange dataset (Cameron et al., 2013b). The data was preprocessed using the Configurable Pipeline for the Analysis of Connectomes (Cameron et al., 2013a) and ROI specific time series of fMRI activity were extracted for each of the 116 AAL atlas brain regions (Tzourio-Mazoyer et al., 2002). For graph analysis, we used the GraphVar toolbox in MATLAB(Kruschwitz et al., 2015). A correlation matrix was computed for each subject by calculating the Pearson correlation coefficient between individual brain regions. This resulted in a 116 by 116 functional connectivity matrix. Each area was treated as a graph node and the correlation matrix was considered as the between node (edge) weights. Finally, the weights were subjected to a relative thresholding process that only kept the 20% of the highest weights and disregarded the rest. This resulted in a binary graph comprising the strongest connections in the subject&#8217;s brain. 12 different graph theoretical measures, listed in Table 1, were computed from this graph, resulting in 817 different features. 
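The relative (proportional) thresholding step described above, keeping only the strongest 20% of weights, can be sketched as follows. A random symmetric matrix stands in for a real 116x116 connectivity matrix; this is my own illustration, not the poster's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlation matrix standing in for the 116x116 AAL connectivity matrix.
n = 116
M = rng.uniform(-1, 1, size=(n, n))
C = (M + M.T) / 2
np.fill_diagonal(C, 1.0)

# Relative thresholding: keep only the top 20% of off-diagonal weights,
# discard the rest -> binary adjacency matrix (graph edges).
iu = np.triu_indices(n, k=1)
w = C[iu]
cut = np.quantile(w, 0.80)          # 80th percentile = top 20% kept
adj = np.zeros_like(C, dtype=int)
keep = w >= cut
adj[iu[0][keep], iu[1][keep]] = 1
adj = adj + adj.T                   # symmetrise

density = adj.sum() / (n * (n - 1))
print(round(density, 2))
```

Proportional thresholding fixes the edge density across subjects, so group differences in graph metrics cannot be driven simply by differences in overall connectivity strength.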
Equations for each measure can be found in (Rubinov and Sporns, 2010). A sequential selection algorithm was used to individually feed these features to a linear support vector machine (SVM) in 6 consecutive iterations. In each iteration, all features were individually added to previously selected features and the one which best classified the groups was kept for use in the next iteration. Additionally, we conducted a permutation test with 40000 permutations to investigate the statistical significance of the difference in global graph metrics between groups.<br \/>\nResults:<br \/>\nUsing the described method, we achieved a classification accuracy of 84% between normal and ASD subjects which is significantly higher than previous methods applied to this problem. Figure 2 highlights some of the model details. Upon further investigation, it became evident that all brain areas associated with the extracted features have been shown to undergo changes in ASD (Ha et al., 2015). Furthermore, our permutation tests revealed statistically significant decreases (p&lt;0.05) in global small-world propensity and clustering coefficient.<br \/>\nConclusions:<br \/>\nWe have shown that graph theoretical metrics based classification significantly outperforms any previously tried method on the ABIDE dataset. Further investigations and optimizations using different correlation techniques or graph metrics may provide further insight into the forte of this approach for automatic diagnosis of ASD and push the performance closer to AD standards. These methods can also provide further insight into how ASD works. 
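The sequential (greedy forward) selection wrapped around a linear SVM, as described in the methods above, can be sketched like this. The data are synthetic with two informative features; the poster's 817 graph features and 6 iterations are reduced here for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic features: only columns 0 and 3 carry class information,
# mimicking a few informative graph metrics among many irrelevant ones.
n = 200
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 8))
X[y == 1, 0] += 2.0
X[y == 1, 3] -= 2.0

# Greedy forward selection: at each step add the single feature that
# maximises cross-validated SVM accuracy.
selected = []
for _ in range(2):
    best_f, best_acc = None, -1.0
    for f in range(X.shape[1]):
        if f in selected:
            continue
        cols = selected + [f]
        acc = cross_val_score(LinearSVC(dual=False), X[:, cols], y, cv=5).mean()
        if acc > best_acc:
            best_f, best_acc = f, acc
    selected.append(best_f)

print(sorted(selected), round(best_acc, 2))
```

Evaluating candidates by cross-validated accuracy (rather than training accuracy) is what keeps this wrapper from simply selecting noise features.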
The observed decrease in global clustering and small-world propensity for ASD patients may suggest that information processing in ASD patients&#8217; brains is more decentralized compared to healthy controls, which encourages further investigation.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This study classified autism spectrum disorder with an SVM. A total of 817 local and global graph theoretical features were used, with variable selection by a stepwise method, and statistical significance was finally confirmed with a permutation test. I was surprised at how similar it is to my own previous study.<br \/>\n&nbsp;<br \/>\nNo3<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\uff1a1144 Altered functional brain networks in Alzheimer\u2019s disease: Does the choice of graph metric matter?<br \/>\nAuthors\uff1aPriya Aggarwal, Suresh Joel, Radhika Madhavan<br \/>\nSession\uff1aposter<br \/>\nAbstract\uff1a<br \/>\nIntroduction:<br \/>\nResting-state functional magnetic resonance imaging (rs-fMRI) is a promising biomarker for measuring connectivity of the brain in patients with Alzheimer&#8217;s disease (AD). AD is characterized by structural and functional connectivity loss resulting in cognitive decline [1,2]. Many graph metrics have been shown to be associated with changes in brain connectivity in disease [3]. However, few studies have identified graph metrics that significantly contribute to AD staging and can be used as a biomarker for neurodegenerative disease progression. To identify graph metrics relevant to AD progression, we developed an approach which models brain functional connectivity using multiple local and global graph measures, and summarizes the contribution of each measure within a classification framework.<br \/>\nMethods:<br \/>\nData used in the preparation of this article were obtained from the Alzheimer&#8217;s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early AD. In this study, we used rs-fMRI data from 37 healthy controls (HC), 35 late MCI (LMCI) and 34 AD patients. For each subject, the first 5 volumes of the functional images were discarded. rs-fMRI data were preprocessed using motion correction, rigid registration to the T1-weighted image, non-rigid registration to the MNI atlas, nuisance removal using aCompCor [4], spatial smoothing using a Gaussian filter (6-mm FWHM) and band-pass filtering [0.01-0.1 Hz] using custom-built software. Functional connectivity matrices were generated for each subject by parcellating the brain into 90 nodes [5], and constructing networks from the edges between nodes representing the correlation coefficient of rs-fMRI time courses. The top 10% of connections were used to generate binary connectivity matrices. Ten local and 7 global graph theoretical measures (Fig.1) were used to generate a total of 907 features (90*10+7=907). To reliably down-select a set of graph metrics for classifying AD patients, we used Fisher-score based feature selection [6]. A linear support vector machine (SVM) was used as the prediction model for classification. 
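The Fisher-score feature ranking mentioned above can be sketched as follows. The per-feature score sum_k n_k (mu_k - mu)^2 \/ sum_k n_k sigma_k^2 is the standard definition, but the data here are synthetic stand-ins for the 907 graph features, with group sizes loosely matching the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature table: 3 groups (HC, LMCI, AD analogues), 6 features,
# of which only feature 2 differs between groups.
groups = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 6))
X[:, 2] += groups * 1.5              # group-dependent mean shift

def fisher_score(X, y):
    # Ratio of between-class scatter to within-class scatter per feature.
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for k in classes:
        Xk = X[y == k]
        num += len(Xk) * (Xk.mean(axis=0) - mu) ** 2
        den += len(Xk) * Xk.var(axis=0)
    return num / den

scores = fisher_score(X, groups)
print(scores.argmax())
```

Keeping only the top-scoring features before fitting the SVM is the "down-selection" step the abstract refers to; the score is a filter, computed without ever training a classifier.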
The best subset of features, identified based on a 10-fold cross-validation, was used to validate performance.<br \/>\nResults:<br \/>\nUsing sub-selected graph features, we obtained an accuracy of 88% for classifying AD and HC, 83% for classifying LMCI and HC, and 82% for classifying AD and LMCI (10% binary threshold). Selected graph features were ranked based on their contribution to classification accuracy using Fisher score. Hippocampus, cerebellum and inferior frontal regions were most discriminative for classifying AD, MCI and HC (Table 1). Subgraph centrality had maximal contribution for the classification of AD from LMCI and HC (Table 2). For classification of HC from LMCI, local efficiency showed maximal accuracy (Table 2). Global graph metrics had minimal contribution for AD classification.<br \/>\nConclusions:<br \/>\nWe used neuroimaging data from the ADNI database to classify patients with AD, LMCI and HC. Using relevant graph theoretical metrics, we were able to classify between HC, LMCI and AD with &gt;82% accuracy. Hippocampus, cerebellum and inferior frontal regions maximally contributed to the classification. A comparison of most relevant features showed that the subgraph centrality had the highest sensitivity in classifying AD from LMCI and HC, whereas local efficiency showed the highest sensitivity for classifying HC from LMCI. 
Global metrics are insensitive to changes caused by AD, while local measures of efficiency and clique-iness contribute to AD staging, indicating that AD affects local functional network connectivity.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>A study examining whether the choice of graph theoretical features matters for diagnosing Alzheimer&#8217;s disease. Ten local and 7 global metrics were computed, feature selection was performed with a Fisher-score based method, and classification used a linear SVM; roughly 100-200 features were selected. As a result, it was suggested that global features do not contribute to the diagnosis.<br \/>\n&nbsp;<br \/>\nNo4<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title\uff1a2277 Sex classification by resting state brain connectivity<br \/>\nAuthors\uff1aSusanne Weis, Kaustubh Patil, Felix Hoffstaedter, Alessandra Nostro, B. T. Thomas Yeo, Simon Eickhoff<br \/>\nSession\uff1aposter<br \/>\nAbstract\uff1a<br \/>\nIntroduction:<br \/>\nCognitive sex differences have repeatedly been demonstrated in behaviour and task-related functional magnetic resonance imaging (fMRI). 
Additionally, more recent fMRI research has revealed sex differences in functional brain connectivity, even in the absence of specific cognitive tasks (e.g. Weis et al., 2017).<br \/>\nThe present study adopts a machine learning approach to address the generalizability of previous findings on sex differences in resting state connectivity to independent samples. Furthermore, we aimed to delineate regionally specific brain networks underlying successful classification of novel subjects&#8217; sex to help understand a possible sexual dimorphism of the resting state connectome.<br \/>\nMethods:<br \/>\nTwo mutually exclusive samples of unrelated subjects were constructed from data provided by the Human Connectome Project (HCP S1200 release, Van Essen et al., 2012). Sample 1 contained 434 subjects (217 males, age range: 22-37, mean age: 28.6 years); sample 2 comprised 310 subjects (155 males, age range: 22-36, mean age: 28.5 years). Within each sample, males and females were matched for age, twin-status and education. Resting state data comprised 1200 functional volumes per subject, acquired on a Siemens Skyra 3T scanner (TR=720ms). Data was realigned and normalized using an established SPM-based pipeline and denoised using FSL-FIX (Salimi-Khorshidi et al., 2014). Individual resting state connectomes were created based on 400 ROIs from a novel whole-cortex parcellation (Schaefer et al., 2017).<br \/>\nLinear SVM (LibSVM toolbox, Chang and Ling, 2011) was employed to train a model for classification of the subject&#8217;s sex from their connectome, including nested optimization of the cost parameter. Within sample accuracy was determined using 10-fold cross-validation. 
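The within-sample versus across-sample evaluation scheme described above can be sketched with scikit-learn. The "connectome" features and the group effect below are my own synthetic assumptions; only the sample sizes (400 and 300) and the nested tuning of the SVM cost parameter mirror the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, GridSearchCV

rng = np.random.default_rng(0)

def make_sample(n):
    # Synthetic "connectome" features with an assumed weak group effect
    # on 20 of 50 dimensions.
    y = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, 50))
    X[y == 1, :20] += 0.5
    return X, y

X1, y1 = make_sample(400)
X2, y2 = make_sample(300)

# Within-sample: 10-fold CV with nested grid search over the cost parameter C.
svm = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1.0]}, cv=3)
within = cross_val_score(svm, X1, y1, cv=10).mean()

# Across-sample: fit on all of sample 1, test on the held-out sample 2.
across = svm.fit(X1, y1).score(X2, y2)
print(round(within, 2), round(across, 2))
```

Nesting the grid search inside each cross-validation fold keeps the cost-parameter tuning from leaking test information into the accuracy estimate, and testing on a disjoint second sample checks that the model generalizes beyond one cohort.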
Across-sample classification performance was determined by fitting the model on one sample and testing it on the other.<br \/>\nIn addition to whole-connectome based classification, we also investigated the capability of each individual ROI&#8217;s connectivity profile to differentiate male and female subjects. These ROI based analyses were performed separately for each sample and then conservatively characterized by the minimum 10-fold cross-validation accuracy across both samples.<br \/>\nResults:<br \/>\nFor whole brain connectivity profiles, 10-fold cross-validation within sample 1 achieved a mean accuracy of 79.3%, while for sample 2 the mean accuracy was 78.8%.<br \/>\nAcross-sample classification achieved an accuracy of 81.4%, indicating a benefit of the larger training set as no fold had to be left out. Across all settings, the functional connectome thus allowed us to predict the sex of a new subject with roughly 80% accuracy.<br \/>\nThe ROI based analysis (Figure 1) identified regions whose connectivity profile with the rest of the brain best allowed for the differentiation of males from females across both samples. Across the whole brain, the highest accuracies were observed for the left temporo-parietal junction, (particularly left) inferior frontal gyrus, cingulate regions, as well as in bilateral temporal poles and inferior and medial temporal lobes. As shown in Figure 2, classification accuracies for the top ROIs were only marginally lower than for the whole-connectome analyses.<br \/>\nConclusions:<br \/>\nBoth within- and between-sample cross-validation revealed reliable classification of previously unseen subjects&#8217; sex from resting state connectivity profiles, indicating a robust sexual dimorphism of the resting state connectome. 
Even though all brain regions provided classification accuracies above chance, the analyses revealed a regional differentiation of predictive power that appears most pronounced in higher-level cognitive regions, specifically brain areas involved in meta-cognition, motivation and learning. These findings might help explain why sex differences are commonly found not in behavioural performance per se, but rather in the cognitive strategies employed by men and women to achieve the same goals.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This study used a linear SVM to classify sex from resting-state functional connectivity. The authors also examined classification accuracy for each individual ROI on its own. It was interesting that, by mapping each ROI&#8217;s classification accuracy onto the brain, they visualized the regions that drive the classification.<br \/>\n&nbsp;<br \/>\nNo. 5<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"529\">Title: 2293 Classification of Major Depressive Disorder via Functional Connectivity and Effective Connectivity<br \/>\nAuthors: Junhai Xu, Junyan Wang, Xiangfei Geng, Yonggang Shi<br \/>\nSession: poster<br \/>\nAbstract:<br \/>\nIntroduction:<br \/>\nThe diagnosis of Major Depressive Disorder 
(MDD) is based on a person&#8217;s mental status examinations and reported experiences. Functional connectivity (FC) represents the correlation between the time series of anatomically separated brain regions and has been used to investigate dysfunction in many mental disorders. Abnormal FC in the resting-state networks of MDD patients has been used as a biomarker for MDD diagnosis (Dorum et al., 2017). Effective connectivity (EC) is a more complex and informative measure for examining dynamic changes, as it describes the causal influences that neural units exert over one another. FC has achieved good performance in exploring the abnormalities between patients and healthy controls (HC), showing its potential as a diagnostic biomarker in many studies. It remains unknown whether EC can serve as an efficient biomarker for MDD diagnosis that does not rely on self-reported symptoms.<br \/>\nMethods:<br \/>\nParticipants<br \/>\n24 patients with MDD (16 females and 8 males) and 24 HC (16 females and 8 males) participated in this study. The patients met DSM-IV-TR criteria for major depressive disorder without comorbidity and had a minimum illness duration of &gt; 3 months.<br \/>\nData acquisition<br \/>\nFunctional and structural images were obtained using a 3T Siemens TIM Trio. One 8-minute resting-state scan and a high-resolution T1-weighted scan were acquired for each participant.<br \/>\nData processing<br \/>\nAll fMRI data were preprocessed with SPM12. A ROI-to-ROI functional connectivity analysis was performed separately with two whole-brain templates, the Automated Anatomical Labeling (AAL) template and the Brainnetome template, using the CONN toolbox. Four resting-state networks (DMN, DAN, FPN and SN) were used in the EC analysis. A seed-to-voxel FC analysis was performed using the CONN toolbox to identify the main regions of the four resting-state networks. 
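As an illustration of how an ROI-to-ROI connectivity profile becomes a feature vector for classification, here is a sketch under the assumption of plain Pearson correlation between ROI time series; the study itself produced these features with the CONN toolbox, and the data below are synthetic.

```python
# Sketch (assumption, not the authors' pipeline): vectorizing an
# ROI-to-ROI functional connectivity matrix into classifier features.
import numpy as np

def fc_features(timeseries: np.ndarray) -> np.ndarray:
    """timeseries: (n_timepoints, n_rois) array -> Fisher z-transformed
    vector of the upper triangle of the ROI correlation matrix."""
    corr = np.corrcoef(timeseries.T)          # (n_rois, n_rois) Pearson r
    iu = np.triu_indices_from(corr, k=1)      # each ROI pair once, no diagonal
    return np.arctanh(corr[iu])               # Fisher z-transform

rng = np.random.default_rng(0)
ts = rng.standard_normal((240, 116))          # e.g. 240 volumes, 116 AAL ROIs
feats = fc_features(ts)
print(feats.shape)                            # (6670,) = 116*115/2 ROI pairs
```

With a finer parcellation such as the Brainnetome atlas, the same construction yields proportionally more connection features, which is why the feature counts reported below differ between the two templates.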
The spDCM analysis was performed using DCM12 in SPM12.<br \/>\nFour supervised learning classifiers (linear SVM, nonlinear SVM, LR and KNN) were used in the classification stage. A leave-one-out cross-validation (LOOCV) procedure was used to avoid overfitting and to use as many samples as possible in the training step. A permutation test was conducted to estimate the statistical significance of the observed classification accuracy.<br \/>\nResults:<br \/>\nThree classification procedures based on the different templates were performed with each of the four classifiers. Figure 1 shows the whole workflow of the three classification procedures. The linear SVM classifier achieved the best performance among the four classifiers (linear SVM, nonlinear SVM, KNN, and LR) in all three classification tasks, while the accuracies of the other classifiers were inconsistent. The best performance for the AAL classification was observed when 950 functional connections were chosen as features (accuracy: 87.50%; f1: 87.50%; AUC: 0.91), and 6650 functional connections were included as features for the Brainnetome classification (accuracy: 89.36%; f1: 90.90%; AUC: 0.92). The Brainnetome classification achieved the highest recall and f1 (recall: 96.15%; f1: 94.44%) among the three tasks, revealing good performance for disease diagnosis. Permutation tests were performed for all classifiers in the three classification procedures, and the p values of all classifiers were below 0.0001 (p &lt; 0.0001), indicating that the accuracies of all classifiers were significantly higher than the chance level (50% accuracy). The detailed results of the three classification procedures are shown in Figure 2.<br \/>\nConclusions:<br \/>\nIn this study, both functional connectivity and effective connectivity measures were used as features for the classification analysis. 
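The LOOCV-plus-permutation-test evaluation described in the Methods can be sketched as follows. This is a hedged illustration, not the authors' code: scikit-learn is assumed, the data are synthetic, and only 100 permutations are run here (far fewer than would be needed to resolve p &lt; 0.0001).

```python
# Sketch (assumption: scikit-learn, synthetic data): leave-one-out
# cross-validation with a permutation test on the observed accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, permutation_test_score

rng = np.random.default_rng(0)
X = rng.standard_normal((48, 50))         # toy stand-in: 24 MDD + 24 HC
y = np.array([0] * 24 + [1] * 24)         # 0 = HC, 1 = MDD

clf = SVC(kernel="linear", C=1.0)
# LOOCV trains on all subjects but one in each fold; the permutation test
# refits on label-shuffled data to estimate the chance-level distribution.
score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, cv=LeaveOneOut(), n_permutations=100, random_state=0)
print(f"LOOCV accuracy: {score:.3f}, permutation p-value: {pvalue:.3f}")
```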
We found that both functional and effective connectivity show diagnostic potential for MDD, and that effective connectivity may be the more efficient measure of the two.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>This study examined whether effective connectivity is useful for diagnosing major depressive disorder, for which functional connectivity is already considered a useful diagnostic biomarker. The authors set the DMN, DAN, FPN and SN as ROIs and attempted classification with four classifiers. Of the three classification procedures tested, a linear SVM using the Brainnetome template performed best. The poster suggested the usefulness of effective connectivity.<br \/>\n&nbsp;<br \/>\n&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>2018\/6\/17-2018\/6\/21\u306e\u65e5\u7a0b\u3067\u3001\u30b7\u30f3\u30ac\u30dd\u30fc\u30eb\u3067\u958b\u50ac\u3055\u308c\u305f\u3000The 24th Annual Meeting of the Organization for Human Brain Mapping\u3000\u306b\u3066\u4e0b\u8a18\u306e &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/is.doshisha.ac.jp\/news\/?p=5213\" class=\"more-link\"><span class=\"screen-reader-text\">&#8220;\u3010\u901f\u5831\u3011The 24th Annual Meeting of the Organization for Human Brain Mapping&#8221; \u306e<\/span>\u7d9a\u304d\u3092\u8aad\u3080<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10,3],"tags":[],"class_list":["post-5213","post","type-post","status-publish","format-standard","hentry","category-10","category-3"],"_links":{"self":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/posts\/5213","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5213"}],"version-history":[{"count":0,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=\/wp\/v2\/posts\/5213\/revisions"}],"wp:attachment":[{"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5213"}],"wp:term":[{"taxonomy":"category","
embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5213"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/is.doshisha.ac.jp\/news\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5213"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}