Selected Publications (selected papers and talks can be downloaded from the By Years, My Favorite, and Talks pages)


2009

A Novel Interface for Interactive Exploration of DTI Fibers (Project webpage)
Wei Chen, Zi'ang Ding, Song Zhang, Anna MacKay-Brandt, Stephen Correia, Huamin Qu, John Allen Crow, David F. Tate, Zhicheng Yan, Qunsheng Peng

IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.

Visual exploration is essential to the visualization and analysis of densely sampled 3D DTI fibers in biological specimens, due to the high geometric, spatial, and anatomical complexity of fiber tracts. Previous methods for DTI fiber visualization use zooming, color-mapping, selection, and abstraction to deliver the characteristics of the fibers. However, these schemes mainly focus on optimizing visualization in the 3D space, where clutter and occlusion make grasping even a few thousand fibers difficult. This paper introduces a novel interaction method that augments the 3D visualization with a 2D representation containing a low-dimensional embedding of the DTI fibers. This embedding preserves the relationships between the fibers and removes the visual clutter that is inherent in 3D renderings of the fibers. The new interface allows the user to manipulate the DTI fibers as both 3D curves and 2D embedded points and to easily compare or validate results in both domains. The implementation of the framework is GPU-based to achieve real-time interaction. The framework was applied to several tasks, and the results show that our method reduces the user's workload in recognizing 3D DTI fibers and permits quick and accurate DTI fiber selection.
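The embedding idea can be sketched in a few lines: measure pairwise distances between fibers, then project each fiber to a 2D point so that point distances approximate fiber distances. This is an illustrative sketch only; the symmetric mean closest-point distance and classical MDS used here are common choices in the fiber-clustering literature, not necessarily the paper's exact metric or embedding.

```python
import numpy as np

def fiber_distance(f1, f2):
    """Symmetric mean closest-point distance between two polylines
    (arrays of shape (n, 3)) -- a common fiber similarity measure."""
    d12 = np.mean([np.min(np.linalg.norm(f2 - p, axis=1)) for p in f1])
    d21 = np.mean([np.min(np.linalg.norm(f1 - p, axis=1)) for p in f2])
    return 0.5 * (d12 + d21)

def mds_embed(fibers, dim=2):
    """Classical MDS: embed fibers as low-dimensional points whose
    pairwise distances approximate the pairwise fiber distances."""
    n = len(fibers)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = fiber_distance(fibers[i], fibers[j])
    # Double-center the squared distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # top eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Nearby fibers then land on nearby 2D points, so a 2D brush over the embedding selects an anatomically coherent bundle without fighting 3D occlusion.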

pdf video bibtex slides sourcecode project

Volume Illustration of Muscle from Diffusion Tensor Images
Wei Chen, Zhicheng Yan, Song Zhang, John Allen Crow, David S. Ebert, R. McLaughlin, K. Mullins, R. Cooper, Zi'ang Ding, Jun Liao

IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.

Medical illustration has demonstrated its effectiveness in depicting salient anatomical features while hiding irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example-based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.
pdf video bibtex slides

Structuring Feature Space: A Non-Parametric Method for Volumetric Transfer Function Generation
Ross Maciejewski, Insoo Woo, Wei Chen, David S. Ebert

IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.

The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation.

We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provide a context in which the user can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial transfer function generation serves as a reasonable base for volumetric rendering, reducing the trial-and-error overhead typically found in transfer function design.
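The density-estimation step can be sketched as follows: place a Gaussian kernel at each (value, gradient magnitude) sample, evaluate the summed density on a grid over the 2D feature space, and quantize the density into bins that the user could then color and grow or shrink. The grid size, bandwidth, and quantile binning below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def kde_2d(samples, grid_size=64, bandwidth=0.05):
    """Gaussian kernel density estimate over a normalized 2D feature
    space (e.g. value vs. gradient magnitude), evaluated on a grid.
    `samples` is an (n, 2) array with coordinates in [0, 1]."""
    xs = np.linspace(0, 1, grid_size)
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Sum of isotropic Gaussian kernels centered at each sample.
    diff = grid[:, None, :] - samples[None, :, :]
    sq = np.sum(diff ** 2, axis=2)
    dens = np.exp(-sq / (2 * bandwidth ** 2)).sum(axis=1)
    return dens.reshape(grid_size, grid_size) / (
        len(samples) * 2 * np.pi * bandwidth ** 2)

def density_bins(density, n_bins=4):
    """Quantize the density field into bins; each bin is a candidate
    group the user can color and interactively grow or shrink."""
    levels = np.quantile(density[density > 0],
                         np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(density, levels)
```

High-density bins correspond to material cores; growing a bin outward walks toward the lower-density boundary regions between materials.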
pdf video bibtex slides

Perception-Based Transparency Optimization for Direct Volume Rendering
Ming-Yuen Chan, Yingcai Wu, Wai-Ho Mak, Wei Chen, Huamin Qu

IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.
(Honorable Mention)

The semi-transparent nature of direct volume rendered images is useful for depicting layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment of opacity and other rendering parameters. Moreover, the visual quality of the layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method.
pdf video bibtex slides

Context-Aware Volume Modeling of Skeletal Muscles
Zhicheng Yan, Wei Chen, Aidong Lu, David S. Ebert

Journal of Computer Graphics Forum (Special Issue of EuroVis 2009), 28(3), Germany, June 2009.

This paper presents an interactive volume modeling method that constructs skeletal muscles from an existing volumetric dataset. Our approach provides users with an intuitive modeling interface and produces compelling results that conform to the characteristic anatomy in the input volume. The algorithmic core of our method is an intuitive anatomy classification approach, suited to accommodate spatial constraints on the muscle volume. The presented work is useful in illustrative visualization, volumetric information fusion and volume illustration that involve muscle modeling, where the spatial context should be faithfully preserved.

pdf video bibtex slides

 

Bivariate Transfer Functions on Unstructured Grids
Yuyan Song, Wei Chen, Ross Maciejewski, Kelly Gaither, David S. Ebert

Journal of Computer Graphics Forum (Special Issue of EuroVis 2009), 28(3), Germany, June 2009.


Multi-dimensional transfer functions are commonly used in rectilinear volume renderings to effectively portray materials, material boundaries and even subtle variations along boundaries. However, most unstructured grid rendering algorithms only employ one-dimensional transfer functions. This paper proposes a novel pre-integrated Projected Tetrahedra (PT) rendering technique that applies bivariate transfer functions on unstructured grids. For each type of bivariate transfer function, an analytical form that pre-integrates the contribution of a ray segment in one tetrahedron is derived, and can be precomputed as a lookup table to compute the color and opacity in a projected tetrahedron on-the-fly. Further, we show how to approximate the integral using the pre-integration method for faster unstructured grid rendering. We demonstrate the advantages of our approach with a variety of examples and comparisons with one-dimensional transfer functions.
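The pre-integration idea can be illustrated in the simpler 1D case: build a lookup table indexed by the front and back scalar values of a ray segment, storing the opacity integrated along the segment, so rendering becomes a single table fetch per tetrahedron face pair. The numerical compositing and table resolution below are illustrative; the paper derives analytical forms for each bivariate transfer function type rather than sampling numerically.

```python
import numpy as np

def preintegration_table(tf_alpha, length, n=64):
    """Pre-integrated lookup table for a 1D transfer function:
    entry (i, j) holds the opacity of a ray segment of fixed length
    whose scalar varies linearly from s_f = i/(n-1) to s_b = j/(n-1)."""
    table = np.zeros((n, n))
    s = np.linspace(0, 1, n)
    for i in range(n):
        for j in range(n):
            # Sample the scalar linearly along the segment and
            # composite the per-sample opacity numerically.
            samples = np.linspace(s[i], s[j], 16)
            alpha = tf_alpha(samples)        # opacity per unit length
            d = length / 16
            table[i, j] = 1 - np.prod(1 - np.clip(alpha * d, 0, 1))
    return table
```

At render time a projected tetrahedron looks up `table[s_front, s_back]` instead of integrating the transfer function per fragment, which is what makes high-frequency transfer functions affordable on unstructured grids.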


pdf video bibtex slides


Laplacian Lines for Real-Time Shape Illustration
Long Zhang, Ying He, Xuexiang Xie, Wei Chen

Proceedings of ACM Symposium on Interactive 3D Graphics and Games (I3D), March 2009.


This paper presents a novel object-space line drawing algorithm that can depict shape with view-dependent feature lines in real time. Strongly inspired by the Laplacian-of-Gaussian (LoG) edge detector in image processing, we define Laplacian lines as the zero-crossing points of the Laplacian of the surface illumination. Compared to other view-dependent features, Laplacian lines are computationally efficient because the most expensive computations can be pre-processed. Thus, Laplacian lines are very promising for interactively illustrating large-scale models.
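The LoG analogy can be made concrete in the image domain: smooth the illumination with a Gaussian, apply a discrete Laplacian, and mark pixels where the result changes sign. This is the 2D image-space analogue only, written to show the zero-crossing criterion; the paper works in object space on the mesh surface, with the heavy terms precomputed per vertex.

```python
import numpy as np

def log_zero_crossings(illum, sigma=1.0):
    """Zero crossings of the Laplacian of a Gaussian-smoothed
    illumination image (the image-space analogue of Laplacian lines).
    Note: the Laplacian stencil wraps at the image border."""
    # Separable Gaussian blur via direct 1D convolution.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    sm = np.apply_along_axis(lambda row: np.convolve(row, g, mode="same"), 1, illum)
    sm = np.apply_along_axis(lambda col: np.convolve(col, g, mode="same"), 0, sm)
    # 5-point discrete Laplacian.
    lap = (np.roll(sm, 1, 0) + np.roll(sm, -1, 0) +
           np.roll(sm, 1, 1) + np.roll(sm, -1, 1) - 4 * sm)
    # A pixel lies on a feature line where the Laplacian changes sign.
    sign = lap > 0
    return (sign != np.roll(sign, 1, 0)) | (sign != np.roll(sign, 1, 1))
```

Because the Laplacian is linear, the expensive geometric part of the object-space version can be baked into per-vertex coefficients, leaving only the view-dependent illumination evaluation per frame.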


pdf video bibtex slides

Stippling by Example
SungYe Kim, Ross Maciejewski, Tobias Isenberg, William M. Andrews, Wei Chen, Mario Costa Sousa, David S. Ebert
Proceedings of the 7th International Symposium on Non-Photorealistic Animation and Rendering (NPAR), 2009, pp. xx-xx.


In this work, we focus on stippling as an artistic style and discuss our technique for capturing and reproducing stipple features unique to an individual artist. We employ a texture synthesis algorithm based on the gray-level co-occurrence matrix (GLCM) of a texture field. This algorithm uses a texture similarity metric to generate stipple textures that are perceptually similar to input samples, allowing us to better capture and reproduce stipple distributions. First, we extract example stipple textures representing various tones in order to create an approximate tone map used by the artist. Second, we extract the stipple marks and distributions from the extracted example textures, generating both a lookup table of stipple marks and a texture representing the stipple distribution. Third, we use the distribution of stipples to synthesize similar distributions with slight variations using a numerical measure of the error between the synthesized texture and the example texture as the basis for replication. Finally, we apply the synthesized stipple distribution to a 2D grayscale image and place stipple marks onto the distribution, thereby creating a stippled image that is statistically similar to images created by the example artist.
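The GLCM statistic at the heart of the similarity metric is straightforward to compute: count how often gray level i co-occurs with gray level j at a fixed pixel offset, then normalize. The quantization level, offset, and L2 comparison below are illustrative assumptions; the paper's similarity measure and synthesis loop are more elaborate.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix: normalized counts of how often
    gray level i occurs at offset (dx, dy) from gray level j, for an
    image with values in [0, 1]."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_similarity(a, b):
    """Negative L2 distance between two GLCMs -- one simple texture
    similarity metric (the paper's exact measure may differ)."""
    return -np.linalg.norm(glcm(a) - glcm(b))
```

During synthesis, candidate stipple placements that keep the synthesized texture's GLCM close to the example's GLCM are preferred, which is what preserves the artist's characteristic point spacing.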

pdf video bibtex slides

 

Shape Context Preserving Deformation of 2D Anatomical Illustrations (Project Webpage)
Wei Chen, Xiao Liang, Ross Maciejewski, David S. Ebert

Journal of Computer Graphics Forum, 28(1), January 2009, pp. 114-126.


In this paper, we present a novel 2D shape-context-preserving image manipulation approach that constructs and manipulates a 2D mesh with a new differential mesh editing algorithm. We introduce a novel shape context descriptor and integrate it into the deformation framework, facilitating shape-preserving deformation for 2D anatomical illustrations. Our new scheme utilizes an analogy-based shape transfer technique in order to learn shape styles from reference images. Experimental results show that visually plausible deformations can be quickly generated from an existing example at interactive frame rates. An experienced artist has evaluated our approach, and the feedback is quite encouraging.
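A shape context descriptor in its classic form is a log-polar histogram of where all other contour points sit relative to a reference point. The sketch below shows that construction; the bin counts and radial normalization are illustrative choices, and the paper's descriptor and its integration into the deformation energy may differ in detail.

```python
import numpy as np

def shape_context(points, idx, n_r=3, n_theta=8):
    """Log-polar histogram of the positions of all other contour
    points relative to point `idx` (the classic shape context)."""
    p = points[idx]
    rel = np.delete(points, idx, axis=0) - p
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])          # in [-pi, pi]
    # Log-spaced radial bins, scaled by the mean point distance.
    r_edges = np.logspace(-1, 0, n_r + 1) * r.mean() * 2
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    for rb, tb in zip(r_bin, t_bin):
        hist[rb, tb] += 1
    return hist / hist.sum()
```

Penalizing the change of each point's descriptor during mesh editing is what keeps a deformed illustration recognizably the same shape rather than merely locally smooth.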

pdf video bibtex

 

Visualizing Diffusion Tensor Imaging Data with Merging Ellipsoids
Wei Chen, Song Zhang, Steve Correia, David F. Tate

Proceedings of IEEE Pacific Visualization Symposium 2009, Beijing, China, April 2009, pp. 145-152.


Diffusion tensor fields reveal the underlying anatomical structures in biological tissues, such as neural fibers in the brain. Most current methods for visualizing the diffusion tensor field can be categorized into two classes: integral curves and glyphs. Integral curves are continuous and represent the underlying fiber structures, but are prone to integration error and loss of local information. Glyphs are useful for representing local tensor information, but do not convey the connectivity in the anatomical structures well. We introduce a simple yet effective visualization technique that extends the streamball method in flow visualization to tensor ellipsoids. Each tensor ellipsoid represents a local tensor, and either blends with neighboring tensors or breaks away from them depending on their orientations and anisotropies. The resulting visualization shows the connectivity information in the underlying anatomy while characterizing the local tensors in detail.
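The per-voxel building blocks are standard tensor quantities: the ellipsoid glyph comes from the eigendecomposition of the 3x3 diffusion tensor, and anisotropy measures such as FA drive the blend-or-break decision. The `should_merge` alignment test below is an illustrative stand-in; the paper's streamball-style blending rule is more involved.

```python
import numpy as np

def tensor_ellipsoid(D):
    """Axes of the ellipsoid glyph for a 3x3 symmetric diffusion
    tensor: eigenvectors give the orientation, eigenvalues the radii.
    Returned with the principal (largest-eigenvalue) axis first."""
    w, v = np.linalg.eigh(D)           # ascending eigenvalues
    return w[::-1], v[:, ::-1]

def fractional_anisotropy(D):
    """FA in [0, 1]: 0 for isotropic diffusion, approaching 1 for
    strongly linear diffusion."""
    w = np.linalg.eigvalsh(D)
    m = w.mean()
    return np.sqrt(1.5 * np.sum((w - m) ** 2) / np.sum(w ** 2))

def should_merge(D1, D2, align_thresh=0.9):
    """Blend two neighboring glyphs when their principal diffusion
    directions are closely aligned (illustrative criterion only)."""
    _, v1 = tensor_ellipsoid(D1)
    _, v2 = tensor_ellipsoid(D2)
    return abs(v1[:, 0] @ v2[:, 0]) > align_thresh
```

Chains of aligned, high-FA ellipsoids blend into tube-like structures that read as fibers, while misaligned or isotropic neighbors stay as separate glyphs, which is how the method conveys connectivity without integrating streamlines.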

pdf video bibtex slides