<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Medical Imaging | Hai Lin</title>
    <link>/home/lin/tag/medical-imaging/</link>
      <atom:link href="/home/lin/tag/medical-imaging/index.xml" rel="self" type="application/rss+xml" />
    <description>Medical Imaging</description>
    <generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Fri, 16 Sep 2022 13:30:29 +0800</lastBuildDate>
    <image>
      <url>/home/lin/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url>
      <title>Medical Imaging</title>
      <link>/home/lin/tag/medical-imaging/</link>
    </image>
    
    <item>
      <title>Deep learning for accurate diagnosis of liver tumor based on magnetic resonance imaging and clinical data</title>
      <link>/home/lin/project/shihuizhen-dlf/</link>
      <pubDate>Fri, 16 Sep 2022 13:30:29 +0800</pubDate>
      <guid>/home/lin/project/shihuizhen-dlf/</guid>
      <description>&lt;h4 id=&#34;background&#34;&gt;Background&lt;/h4&gt;
&lt;p&gt;Early-stage diagnosis and treatment can improve survival rates of liver cancer patients. Dynamic contrast-enhanced MRI provides the most comprehensive information for differential diagnosis of liver tumors. However, MRI diagnosis is affected by subjective experience, so deep learning may supply a new diagnostic strategy. We used convolutional neural networks (CNNs) to develop a deep learning system (DLS) to classify liver tumors based on enhanced MR images, unenhanced MR images, and clinical data including text and laboratory test results.&lt;/p&gt;
&lt;h4 id=&#34;methods&#34;&gt;Methods&lt;/h4&gt;
&lt;p&gt;Using data from 1,210 patients with liver tumors (N = 31,608 images), we trained CNNs to obtain seven-way classifiers, binary classifiers, and three-way malignancy classifiers (Models A-G). The models were validated in an external, independent extended cohort of 201 patients (N = 6,816 images). The area under the receiver operating characteristic (ROC) curve (AUC) was compared across the different models. We also compared the sensitivity and specificity of the models with the performance of three experienced radiologists.&lt;/p&gt;
&lt;h4 id=&#34;results&#34;&gt;Results&lt;/h4&gt;
&lt;p&gt;Deep learning achieved performance on par with three experienced radiologists in classifying liver tumors into seven categories. Using only unenhanced images, the CNN performed well in distinguishing malignant from benign liver tumors (AUC, 0.946; 95% CI 0.914–0.979 vs. 0.951; 0.919–0.982, P = 0.664). A new CNN combining unenhanced images with clinical data greatly improved the performance of classifying malignancies as hepatocellular carcinoma (AUC, 0.985; 95% CI 0.960–1.000), metastatic tumors (0.998; 0.989–1.000), and other primary malignancies (0.963; 0.896–1.000), and the agreement with pathology was 91.9%. These models mined diagnostic information in unenhanced images and clinical data with deep neural networks, in contrast to previous methods that relied on enhanced images. The sensitivity and specificity for almost every category in these models reached the same high level as those of the three experienced radiologists.&lt;/p&gt;
&lt;h4 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h4&gt;
&lt;p&gt;Trained with data acquired under various conditions, a DLS integrating these models could serve as an accurate and time-saving assisted-diagnosis strategy for liver tumors in clinical settings, even in the absence of contrast agents. The DLS therefore has the potential to avoid contrast-related side effects and to reduce the economic costs associated with current standard MRI inspection practices for liver tumor patients.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>ALA-Net: Adaptive Lesion-Aware Attention Network for 3D Colorectal Tumor Segmentation</title>
      <link>/home/lin/project/yankaijiang-aal/</link>
      <pubDate>Fri, 16 Sep 2022 13:29:56 +0800</pubDate>
      <guid>/home/lin/project/yankaijiang-aal/</guid>
      <description>&lt;p&gt;Accurate and reliable segmentation of colorectal tumors and surrounding colorectal tissues on 3D magnetic resonance images has critical importance in preoperative prediction, staging, and radiotherapy. Previous works simply combine multilevel features without aggregating representative semantic information and without compensating for the loss of spatial information caused by down-sampling. Therefore, they are vulnerable to noise from complex backgrounds and suffer from misclassification and target incompleteness-related failures. In this paper, we address these limitations with a novel adaptive lesion-aware attention network (ALA-Net) which explicitly integrates useful contextual information with spatial details and captures richer feature dependencies based on 3D attention mechanisms. The model comprises two parallel encoding paths. One of these is designed to explore global contextual features and enlarge the receptive field using a recurrent strategy. The other captures sharper object boundaries and the details of small objects that are lost in repeated down-sampling layers. Our lesion-aware attention module adaptively captures long-range semantic dependencies and highlights the most discriminative features, improving semantic consistency and completeness. Furthermore, we introduce a prediction aggregation module to combine multiscale feature maps and to further filter out irrelevant information for precise voxel-wise prediction. Experimental results show that ALA-Net outperforms state-of-the-art methods and inherently generalizes well to other 3D medical images segmentation tasks, providing multiple benefits in terms of target completeness, reduction of false positives, and accurate detection of ambiguous lesion regions.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>LRVRG: a local region-based variational region growing algorithm for fast mandible segmentation from CBCT images</title>
      <link>/home/lin/project/yankaijiang-lal/</link>
      <pubDate>Fri, 16 Sep 2022 13:29:38 +0800</pubDate>
      <guid>/home/lin/project/yankaijiang-lal/</guid>
      <description>&lt;p&gt;This paper proposes a local region-based variational region growing algorithm, which integrates local region and shape prior to segment the mandible accurately. Firstly, we select initial seeds in the CBCT image and then calculate candidate point sets and the local region energy function of each point. If a point reduces the energy, it is selected to be a pixel of the foreground region. By multiple iterations, the mandible segmentation of the slice can be obtained. Secondly, the segmented result of the previous slice is adopted as the shape prior to the next slice until all of the slices in CBCT are segmented. At last, the final mandible model is reconstructed by the Marching Cubes algorithm.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>DeepNFT: Towards Precise Neurofibrillary Tangle Detection via Improving Multi-scale Feature Fusion and Adversary</title>
      <link>/home/lin/project/yankaijiang-dtp/</link>
      <pubDate>Fri, 16 Sep 2022 13:29:31 +0800</pubDate>
      <guid>/home/lin/project/yankaijiang-dtp/</guid>
      <description>&lt;p&gt;Detecting neurofibrillary tangles is an important procedure in the assessment of the intensity and distribution pattern of hippocampal tau pathology, which are the principal clinical phenotypes associated with Alzheimer’s disease. Existing deep learning based detectors still face a critical obstacle: the difficulty in detecting extremely small objects in high resolution images. In this paper, we propose a deep learning framework, named DeepNFT, which combines the multilevel feature aggregation pyramid network (MFAPN) and the adversarial feature generation module (AFGM) to acquire precise detection results with significantly reduced false positives. To prove its universality and robustness, DeepNFT has been validated on two datasets. Experiments show the significant performance gain of our proposed approach over state-of-the-art detectors. Ablation study shows our network components improve the performance of various backbones and detectors.&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>
