<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Xiangyang He | Hai Lin</title>
    <link>/home/lin/authors/xiangyang-he/</link>
      <atom:link href="/home/lin/authors/xiangyang-he/index.xml" rel="self" type="application/rss+xml" />
    <description>Xiangyang He</description>
    <generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Fri, 16 Sep 2022 13:29:31 +0800</lastBuildDate>
    <image>
      <url>/home/lin/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url>
      <title>Xiangyang He</title>
      <link>/home/lin/authors/xiangyang-he/</link>
    </image>
    
    <item>
      <title>DeepNFT: Towards Precise Neurofibrillary Tangle Detection via Improving Multi-scale Feature Fusion and Adversary</title>
      <link>/home/lin/project/yankaijiang-dtp/</link>
      <pubDate>Fri, 16 Sep 2022 13:29:31 +0800</pubDate>
      <guid>/home/lin/project/yankaijiang-dtp/</guid>
      <description>&lt;p&gt;Detecting neurofibrillary tangles is an important procedure in assessing the intensity and distribution pattern of hippocampal tau pathology, which are the principal clinical phenotypes associated with Alzheimer’s disease. Existing deep-learning-based detectors still face a critical obstacle: the difficulty of detecting extremely small objects in high-resolution images. In this paper, we propose a deep learning framework, named DeepNFT, which combines the multilevel feature aggregation pyramid network (MFAPN) and the adversarial feature generation module (AFGM) to acquire precise detection results with significantly reduced false positives. To prove its universality and robustness, DeepNFT has been validated on two datasets. Experiments show the significant performance gain of our proposed approach over state-of-the-art detectors. An ablation study shows that our network components improve the performance of various backbones and detectors.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Graph convolutional network-based semi-supervised feature classification of volumes</title>
      <link>/home/lin/project/xiangyanghe-gcn/</link>
      <pubDate>Thu, 15 Sep 2022 16:01:45 +0800</pubDate>
      <guid>/home/lin/project/xiangyanghe-gcn/</guid>
      <description>&lt;p&gt;Feature classification has always been one of the research hotspots in scientific visualization. However, conventional interactive feature classification methods rely on prior knowledge and typically require trial and error, whereas feature classification based on data mining is generally based on local features; therefore, obtaining good results with traditional methods is difficult. In this paper, we first map a volume to the super-voxel graph using a 3D extension of the simple linear iterative clustering algorithm and then construct a graph convolutional neural network to implement node classification in a semi-supervised way, i.e., with a small number of user-labeled super-voxels. We transform the feature classification of a volume into the classification task of nodes of a super-voxel graph, which is a novel approach and broadens the application scope of graph neural networks to volumes. Experiments on different volumes have demonstrated the strong learning ability and reasoning ability of the proposed method.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Voxel2vec</title>
      <link>/home/lin/project/voxel2vec/</link>
      <pubDate>Tue, 12 Jul 2022 16:33:15 +0800</pubDate>
      <guid>/home/lin/project/voxel2vec/</guid>
      <description>&lt;p&gt;Relationships in scientific data are intricate and complex, such as the numerical and spatial distribution relations of features in univariate data, the relations of scalar-value combinations in multivariate data, and the association of volumes in time-varying and ensemble data. This paper presents voxel2vec, a novel unsupervised representation learning model, to learn distributed representations of scalar values in a low-dimensional vector space. The basic assumption is that if two scalar values or scalar-value combinations have similar contexts, they usually have high similarity in terms of features. By representing scalar values as symbols, voxel2vec learns the similarity of scalar values in the context of spatial distribution, which then allows us to explore the overall association between volumes through transfer prediction. We demonstrate the usefulness and effectiveness of voxel2vec by comparing it with the isosurface similarity map of univariate data and by applying the learned distributed representations to feature classification for multivariate data and association analysis for time-varying and ensemble data.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Graph convolutional network-based semi-supervised feature classification of volumes</title>
      <link>/home/lin/publication/dblp-journalsjvis-he-ytdl-22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsjvis-he-ytdl-22/</guid>
      <description></description>
    </item>
    
    <item>
      <title>ScalarGCN: scalar-value association analysis of volumes based on graph convolutional network</title>
      <link>/home/lin/publication/dblp-journalsjvis-he-tycl-22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsjvis-he-tycl-22/</guid>
      <description></description>
    </item>
    
    <item>
      <title>voxel2vec: A Natural Language Processing Approach to Learning Distributed Representations for Scientific Data</title>
      <link>/home/lin/publication/dblp-journalstvcg-he-22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalstvcg-he-22/</guid>
      <description></description>
    </item>
    
    <item>
      <title>DeepNFT: Towards Precise Neurofibrillary Tangle Detection via Improving Multi-scale Feature Fusion and Adversary</title>
      <link>/home/lin/publication/dblp-confbibm-jiang-zlhhztl-21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-confbibm-jiang-zlhhztl-21/</guid>
      <description></description>
    </item>
    
    <item>
      <title>IsoExplorer: an isosurface-driven framework for 3D shape analysis of biomedical volume data</title>
      <link>/home/lin/publication/dblp-journalsjvis-dai-thl-21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsjvis-dai-thl-21/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Multiple GPU parallel strategy of beam tracing for propagation prediction in large-scale scenes</title>
      <link>/home/lin/publication/lin-2019-multiple/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/lin-2019-multiple/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Multivariate spatial data visualization: a survey</title>
      <link>/home/lin/publication/dblp-journalsjvis-he-twl-19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsjvis-he-twl-19/</guid>
      <description></description>
    </item>
    
    <item>
      <title>A co-analysis framework for exploring multivariate scientific data</title>
      <link>/home/lin/publication/dblp-journalsvi-he-twl-18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsvi-he-twl-18/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Biclusters Based Visual Exploration of Multivariate Scientific Data</title>
      <link>/home/lin/publication/dblp-confscivis-he-tw-018/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-confscivis-he-tw-018/</guid>
      <description></description>
    </item>
    
    <item>
      <title>GPU parallel acceleration of beam tracing for propagation prediction in urban environments</title>
      <link>/home/lin/publication/he-2018-gpu/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/he-2018-gpu/</guid>
      <description></description>
    </item>
    
  </channel>
</rss>
