<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Jiahong Qian | Hai Lin</title>
    <link>/home/lin/authors/jiahong-qian/</link>
      <atom:link href="/home/lin/authors/jiahong-qian/index.xml" rel="self" type="application/rss+xml" />
    <description>Jiahong Qian</description>
    <generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Fri, 16 Sep 2022 13:29:56 +0800</lastBuildDate>
    <image>
      <url>/home/lin/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url>
      <title>Jiahong Qian</title>
      <link>/home/lin/authors/jiahong-qian/</link>
    </image>
    
    <item>
      <title>ALA-Net: Adaptive Lesion-Aware Attention Network for 3D Colorectal Tumor Segmentation</title>
      <link>/home/lin/project/yankaijiang-aal/</link>
      <pubDate>Fri, 16 Sep 2022 13:29:56 +0800</pubDate>
      <guid>/home/lin/project/yankaijiang-aal/</guid>
<description>&lt;p&gt;Accurate and reliable segmentation of colorectal tumors and surrounding colorectal tissues on 3D magnetic resonance images is critically important for preoperative prediction, staging, and radiotherapy. Previous works simply combine multilevel features without aggregating representative semantic information and without compensating for the loss of spatial information caused by down-sampling. Consequently, they are vulnerable to noise from complex backgrounds and suffer from misclassification and target-incompleteness failures. In this paper, we address these limitations with a novel adaptive lesion-aware attention network (ALA-Net), which explicitly integrates useful contextual information with spatial details and captures richer feature dependencies through 3D attention mechanisms. The model comprises two parallel encoding paths. One is designed to explore global contextual features and enlarge the receptive field using a recurrent strategy. The other captures sharper object boundaries and the details of small objects that are lost in repeated down-sampling layers. Our lesion-aware attention module adaptively captures long-range semantic dependencies and highlights the most discriminative features, improving semantic consistency and completeness. Furthermore, we introduce a prediction aggregation module to combine multiscale feature maps and further filter out irrelevant information for precise voxel-wise prediction. Experimental results show that ALA-Net outperforms state-of-the-art methods and generalizes well to other 3D medical image segmentation tasks, providing multiple benefits in terms of target completeness, reduction of false positives, and accurate detection of ambiguous lesion regions.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>LRVRG: a local region-based variational region growing algorithm for fast mandible segmentation from CBCT images</title>
      <link>/home/lin/project/yankaijiang-lal/</link>
      <pubDate>Fri, 16 Sep 2022 13:29:38 +0800</pubDate>
      <guid>/home/lin/project/yankaijiang-lal/</guid>
<description>&lt;p&gt;This paper proposes a local region-based variational region growing algorithm that integrates local region information and a shape prior to segment the mandible accurately. First, we select initial seeds in the CBCT image and then compute the candidate point sets and the local region energy function of each point. If a point reduces the energy, it is added to the foreground region. Through multiple iterations, the mandible segmentation of the slice is obtained. Second, the segmentation result of the previous slice is adopted as the shape prior for the next slice until all of the slices in the CBCT volume are segmented. Finally, the mandible model is reconstructed with the Marching Cubes algorithm.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>ALA-Net: Adaptive Lesion-Aware Attention Network for 3D Colorectal Tumor Segmentation</title>
      <link>/home/lin/publication/dblp-journalstmi-jiang-xfqlztsl-21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalstmi-jiang-xfqlztsl-21/</guid>
      <description></description>
    </item>
    
    <item>
      <title>An automatic tooth reconstruction method based on multimodal data</title>
      <link>/home/lin/publication/dblp-journalsjvis-qian-lgtll-21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsjvis-qian-lgtll-21/</guid>
      <description></description>
    </item>
    
    <item>
      <title>CephaNN: A Multi-Head Attention Network for Cephalometric Landmark Detection</title>
      <link>/home/lin/publication/dblp-journalsaccess-qian-lctll-20/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-journalsaccess-qian-lctll-20/</guid>
      <description></description>
    </item>
    
    <item>
      <title>An automatic algorithm for repairing dental models based on contours</title>
      <link>/home/lin/publication/qian-2019-automatic/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/qian-2019-automatic/</guid>
      <description></description>
    </item>
    
    <item>
      <title>CephaNet: An Improved Faster R-CNN for Cephalometric Landmark Detection</title>
      <link>/home/lin/publication/dblp-confisbi-qian-ctll-19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>/home/lin/publication/dblp-confisbi-qian-ctll-19/</guid>
      <description></description>
    </item>
    
  </channel>
</rss>
