Selected Publications
(Selected papers and talks can be downloaded from the My Favorite, By Years, and Talks webpages. Copyright is reserved by the publishers.)
Real-Time Shape Illustration Using Laplacian Lines.
Long Zhang, Ying He, Jiazhi Xia, Xuexiang Xie, Wei Chen.
In IEEE Transactions on Visualization and Computer Graphics, 2010.
This paper presents a novel object-space line drawing algorithm that can depict shapes with view-dependent feature lines
in real time. Strongly inspired by the Laplacian-of-Gaussian (LoG) edge detector in image processing, we define Laplacian lines as the
zero-crossing points of the Laplacian of the surface illumination. Compared to other view-dependent feature lines, Laplacian lines are
computationally efficient because most expensive computations can be pre-processed. We further extend Laplacian lines to volumetric
data and develop the algorithm to compute volumetric Laplacian lines without iso-surface extraction. We apply the proposed Laplacian
lines to a wide range of real-world models and demonstrate that Laplacian lines are more efficient than existing
computer-generated feature lines and can be used in interactive graphics applications.
pdf video bibtex slides
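The zero-crossing idea behind Laplacian lines can be pictured with its image-space ancestor. The sketch below is not the paper's object-space algorithm (which works on mesh illumination); it applies a Laplacian-of-Gaussian to a synthetic shaded image and marks sign changes between adjacent samples. The shading model and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic "illumination" over a grid: a headlight-shaded sphere patch.
y, x = np.mgrid[-1:1:256j, -1:1:256j]
z = np.sqrt(np.clip(1.0 - (x**2 + y**2), 0.0, None))  # sphere height field
illum = np.clip(z, 0.0, 1.0)                          # Lambertian-like shading

# Laplacian-of-Gaussian of the illumination field.
log = gaussian_laplace(illum, sigma=2.0)

# Zero crossings: sign changes between horizontally/vertically adjacent samples.
zc = ((np.sign(log[:, :-1]) * np.sign(log[:, 1:])) < 0)[:-1, :] \
   | ((np.sign(log[:-1, :]) * np.sign(log[1:, :])) < 0)[:, :-1]
print("zero-crossing pixels:", int(zc.sum()))
```

The detected pixels trace the silhouette-like transitions in the shading, which is the intuition the object-space definition carries over to surfaces.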
|
Digital Storytelling: Automatic Animation for Time-Varying Data Visualization
Li Yu, Aidong Lu, William Ribarsky, Wei Chen
To appear in Computer Graphics Forum (Special Issue of Pacific Graphics 2010)
This paper presents a digital storytelling approach that generates automatic animations for time-varying data
visualization. Our approach simulates the composition and transition of storytelling techniques and synthesizes
animations to describe various event features. Specifically, we analyze information related to a given event and
abstract it as an event graph, which represents data features as nodes and event relationships as links. This
graph embeds a tree-like hierarchical structure which encodes data features at different scales. Next, narrative
structures are built by exploring starting nodes and suitable search strategies in this graph. Different stages of
narrative structures are considered in our automatic rendering parameter decision process to generate animations
as digital stories. We integrate this animation generation approach into an interactive exploration process of
time-varying data, so that more comprehensive information can be provided in a timely fashion. We demonstrate with a
storm surge application that our approach allows semantic visualization of time-varying data and easy animation
generation for users without special knowledge about the underlying visualization techniques.
pdf video bibtex slides
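As a rough sketch of the event-graph idea, the fragment below builds a tiny hypothetical graph (data features as nodes, event relationships as links) and derives one narrative order by breadth-first traversal from a chosen starting node. The node names are invented for illustration, and BFS stands in for the paper's exploration of starting nodes and search strategies.

```python
from collections import deque

# Hypothetical event graph for a storm-surge-like scenario:
# data features as nodes, event relationships as directed links.
event_graph = {
    "landfall":   ["surge_peak", "evacuation"],
    "surge_peak": ["flooding"],
    "evacuation": [],
    "flooding":   ["recovery"],
    "recovery":   [],
}

def narrative_order(graph, start):
    """Breadth-first traversal from a chosen starting node; the visit
    order stands in for one possible narrative structure."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(narrative_order(event_graph, "landfall"))
```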
|
Motion Track: Visualizing Motion Variation of Human Motion Data
Yueqi Hu, Shuangyuan Wu, Shihong Xia, Jinghua Fu, Wei Chen
In Proceedings of IEEE Pacific Visualization Symposium, March 2010, Taipei (Cover Image)
This paper proposes a novel visualization approach, which can depict
the variations between different human motion data. This is
achieved by representing the time dimension of each animation sequence
with a sequential curve in a locality-preserving reference 2D
space, called the motion track representation. The principal advantage
of this representation over standard representations of motion
capture data - generally either a keyframed timeline or a 2D motion
map in its entirety - is that it maps the motion differences along
the time dimension into parallel perceptible spatial dimensions but
at the same time captures the primary content of the source data.
Latent semantic differences that are difficult to distinguish visually
can be clearly displayed, supporting effective summarization, clustering,
comparison, and analysis of a motion database.
pdf video bibtex slides
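One way to picture the motion track representation: project every frame of a sequence into a shared 2D reference space and connect consecutive projections into a curve. The sketch below uses plain PCA on synthetic joint-angle data as a stand-in for the paper's locality-preserving embedding; the data and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motion sequence: 200 frames of 30 joint-angle channels,
# a smooth trajectory plus noise standing in for real capture data.
t = np.linspace(0, 2 * np.pi, 200)
basis = rng.normal(size=(2, 30))
frames = np.column_stack([np.sin(t), np.cos(t)]) @ basis
frames += 0.01 * rng.normal(size=frames.shape)

# Project every frame into a common 2D reference space (PCA via SVD);
# connecting consecutive projections yields the sequential "track" curve.
centered = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
track = centered @ vt[:2].T        # (200, 2) points, ordered by time

print(track.shape)
```

Comparing two sequences then amounts to comparing their curves in the same 2D space, which is the perceptible spatial mapping of time the abstract describes.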
|
Volume Exploration using Elliptical Gaussian Functions
Yunhai Wang, Wei Chen, Guihua Shang, Xuebin Chi
In Proceedings of IEEE Pacific
Visualization Symposium, March 2010, Taipei (Cover Image)
This paper presents an interactive transfer function design tool
based on ellipsoidal Gaussian transfer functions (ETFs). Our approach
explores volumetric features in the statistical space by modeling
the space using the Gaussian mixture model (GMM) with a
small number of Gaussians to maximize the likelihood of feature
separation. Instant visual feedback is possible by mapping these
Gaussians to ETFs and analytically integrating these ETFs in the
context of the pre-integrated volume rendering process. A suite
of intuitive control widgets is designed to offer automatic transfer
function generation and flexible manipulations, allowing an inexperienced
user to easily explore undiscovered features with several
simple interactions. Our GPU implementation demonstrates interactive
performance and plausible scalability which compare favorably
with existing solutions. The effectiveness of our approach has
been verified on several datasets.
pdf video bibtex slides
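The mapping from a fitted Gaussian to an opacity transfer function can be sketched in a few lines. This is a minimal illustration, not the paper's ETF derivation or its pre-integrated rendering: it evaluates one elliptical Gaussian over a 2D feature plane (e.g., value vs. gradient magnitude); the mean, covariance, and opacity scale are assumed values that would in practice come from the GMM fit.

```python
import numpy as np

def elliptical_gaussian_opacity(features, mean, cov, max_opacity=0.8):
    """Opacity of 2D features under one elliptical Gaussian transfer
    function; mean/cov would come from a fitted GMM component."""
    d = features - mean
    m = np.einsum('...i,ij,...j->...', d, np.linalg.inv(cov), d)  # squared Mahalanobis
    return max_opacity * np.exp(-0.5 * m)

mean = np.array([0.4, 0.2])                     # hypothetical GMM component
cov = np.array([[0.01, 0.0], [0.0, 0.004]])
grid = np.stack(np.meshgrid(np.linspace(0, 1, 64),
                            np.linspace(0, 1, 64), indexing='ij'), axis=-1)
opacity = elliptical_gaussian_opacity(grid, mean, cov)
print(float(opacity.max()))
```

Opacity peaks at the component mean and falls off along the ellipse axes, so each mixture component isolates one feature cluster.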
|
A Novel Interface for Interactive Exploration of DTI Fibers (Project webpage)
Wei Chen, Zi'ang Ding, Song Zhang, Anna MacKay-Brandt, Stephen Correia, Huamin Qu, John Allen Crow, David F. Tate, Zhicheng Yan, Qunsheng Peng
IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.
Visual exploration is essential to the visualization and analysis of densely sampled 3D DTI fibers in biological specimens,
due to the high geometric, spatial, and anatomical complexity of fiber tracts. Previous methods for DTI fiber visualization use zooming,
color-mapping, selection, and abstraction to deliver the characteristics of the fibers. However, these schemes mainly focus on the
optimization of visualization in the 3D space where cluttering and occlusion make grasping even a few thousand fibers difficult. This
paper introduces a novel interaction method that augments the 3D visualization with a 2D representation containing a low-dimensional
embedding of the DTI fibers. This embedding preserves the relationship between the fibers and removes the visual clutter that is
inherent in 3D renderings of the fibers. This new interface allows the user to manipulate the DTI fibers as both 3D curves and 2D
embedded points and easily compare or validate his or her results in both domains. The implementation of the framework is
GPU-based to achieve real-time interaction. The framework was applied to several tasks, and the results show that our method reduces
the user’s workload in recognizing 3D DTI fibers and permits quick and accurate DTI fiber selection.
pdf video bibtex slides sourcecode project
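The 2D panel is a low-dimensional embedding of pairwise fiber similarities. As a sketch of that step only, the fragment below computes a mean closest-point distance between synthetic fiber polylines and embeds them with classical MDS; both the distance measure and the embedding method are stand-ins for whatever the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

def fiber(offset):
    """Synthetic fiber: a 3D polyline with a vertical offset."""
    t = np.linspace(0, 1, 50)
    return np.column_stack([t, np.sin(3 * t) + offset, 0.1 * t])

# Two hypothetical bundles of ten fibers each.
fibers = [fiber(o + 0.02 * rng.normal()) for o in [0] * 10 + [2] * 10]

def fiber_distance(a, b):
    """Symmetric mean closest-point distance between two polylines."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

n = len(fibers)
D = np.array([[fiber_distance(fibers[i], fibers[j]) for j in range(n)]
              for i in range(n)])

# Classical MDS: double-center the squared distances, keep top-2 eigenvectors.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, v = np.linalg.eigh(B)
embedding = v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))
print(embedding.shape)
```

In the embedded plane the two bundles fall into separated point groups, which is what makes 2D selection free of the 3D clutter.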
|
Volume Illustration of Muscle from Diffusion Tensor Images
Wei Chen, Zhicheng Yan, Song Zhang, John Allen Crow, David S. Ebert, R. McLaughlin, K. Mullins, R. Cooper, Zi'ang Ding, Jun Liao
IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.
Medical illustration has demonstrated its effectiveness to depict salient anatomical features while hiding the irrelevant
details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do
not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging
(DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it
into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask
derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to
remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional
example-based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed
scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the
muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig
leg), demonstrating plausible illustration and expressiveness.
pdf video bibtex slides
|
Structuring Feature Space: A Non-Parametric Method for Volumetric Transfer Function Generation
Ross Maciejewski, Insoo Woo, Wei Chen, David S. Ebert
IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.
The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means
of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer
function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value
gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then
assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box,
circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data
values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of
non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation.
We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are
then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to
explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time
steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and
users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work
enables users to effectively explore the structures found within a feature space of the volume and provide a context in which the user
can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of
the transfer function, and we show that the initial transfer function generation serves as a reasonable base for volumetric rendering,
reducing the trial-and-error overhead typically found in transfer function design.
pdf video bibtex slides
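The clustering step can be sketched compactly: smoothing the 2D histogram with a Gaussian kernel is a binned kernel density estimate, and thresholding the density groups voxels of similar features. The fragment below does this on synthetic two-material data; the bandwidth, threshold, and cluster parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(2)

# Hypothetical feature-space samples: two "materials" as clusters in a
# (value, gradient-magnitude) plane.
a = rng.normal([0.3, 0.2], 0.03, size=(5000, 2))
b = rng.normal([0.7, 0.6], 0.05, size=(5000, 2))
hist, _, _ = np.histogram2d(*np.vstack([a, b]).T,
                            bins=128, range=[[0, 1], [0, 1]])

# Non-parametric density estimate: smooth the 2D histogram with a Gaussian
# kernel, then bin voxel groups by thresholding the estimated density.
density = gaussian_filter(hist, sigma=2.0)
regions, n_regions = label(density > 0.25 * density.max())
print("regions found:", n_regions)
```

Each labeled region is a candidate feature to which color and opacity can be assigned, and growing or shrinking the threshold explores the feature boundaries interactively.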
|
Perception-Based Transparency Optimization for Direct Volume Rendering
Ming-Yuen Chan, Yingcai Wu, Wai-Ho Mak, Wei Chen, Huamin Qu
IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2009), Vol. 15(6), 2009.
The semi-transparent nature of direct volume rendered images is useful to depict layered structures in a volume. However,
obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment on opacity and other
rendering parameters. Besides, the visual quality of layers also depends on various perceptual factors. In this paper, we propose
an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We
introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in
the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of
the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the
effectiveness and robustness of our method.
pdf video bibtex slides
|
Context-Aware Volume Modeling of Skeletal Muscles
Zhicheng Yan, Wei Chen, Aidong Lu, David S. Ebert
Journal of Computer Graphics Forum (Special Issue of EuroVis 2009), 28(3), Germany, June 2009.
This paper presents an interactive volume modeling method that constructs skeletal muscles from an existing
volumetric dataset. Our approach provides users with an intuitive modeling interface and produces compelling
results that conform to the characteristic anatomy in the input volume. The algorithmic core of our method is an
intuitive anatomy classification approach, suited to accommodate spatial constraints on the muscle volume. The
presented work is useful in illustrative visualization, volumetric information fusion and volume illustration that
involve muscle modeling, where the spatial context should be faithfully preserved.
pdf video bibtex slides
|
Bivariate Transfer Functions on Unstructured Grids
Yuyan Song, Wei Chen, Ross Maciejewski, Kelly Gaither, David S. Ebert
Journal of Computer Graphics Forum (Special Issue of EuroVis 2009), 28(3), Germany, June 2009.
Multi-dimensional transfer functions are commonly used in rectilinear volume renderings to effectively portray
materials, material boundaries and even subtle variations along boundaries. However, most unstructured grid
rendering algorithms only employ one-dimensional transfer functions. This paper proposes a novel pre-integrated
Projected Tetrahedra (PT) rendering technique that applies bivariate transfer functions on unstructured grids. For
each type of bivariate transfer function, an analytical form that pre-integrates the contribution of a ray segment
in one tetrahedron is derived, and can be precomputed as a lookup table to compute the color and opacity in
a projected tetrahedron on-the-fly. Further, we show how to approximate the integral using the pre-integration
method for faster unstructured grid rendering. We demonstrate the advantages of our approach with a variety of
examples and comparisons with one-dimensional transfer functions.
pdf video bibtex slides
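The lookup-table idea behind pre-integration can be sketched numerically. The paper derives analytical forms; the fragment below instead integrates a hypothetical 1D opacity transfer function numerically over a ray segment whose scalar varies linearly from a front value to a back value, and tabulates the result over (front, back) pairs.

```python
import numpy as np

# Hypothetical 1D transfer function: extinction coefficient per scalar value.
s = np.linspace(0, 1, 256)
tau = 5.0 * np.exp(-((s - 0.5) / 0.1) ** 2)

def preintegrated_alpha(sf, sb, length=1.0, samples=64):
    """Opacity of a ray segment whose scalar varies linearly from the
    front value sf to the back value sb (numerical pre-integration)."""
    u = np.linspace(sf, sb, samples)
    ext = np.interp(u, s, tau)
    return 1.0 - np.exp(-length * ext.mean())

# Precompute a lookup table indexed by (front, back) scalar values; at render
# time each projected tetrahedron fetches its segment's color/opacity from it.
vals = np.linspace(0, 1, 64)
table = np.array([[preintegrated_alpha(f, b) for b in vals] for f in vals])
print(table.shape)
```

A segment passing through the opaque peak (scalar near 0.5) yields much higher opacity than one confined to transparent values, which is exactly the detail per-vertex evaluation would miss.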
|
Laplacian Lines for Real-Time Shape Illustration
Long Zhang, Ying He, Xuexiang Xie, Wei Chen
ACM Interactive 3D Graphics and Games (I3D), March 2009.
This paper presents a novel object-space line drawing algorithm that
can depict shape with view-dependent feature lines in real time.
Strongly inspired by the Laplacian-of-Gaussian (LoG) edge detector
in image processing, we define Laplacian lines as the zero-crossing
points of the Laplacian of the surface illumination. Compared
to other view-dependent feature lines, Laplacian lines are computationally
efficient because most expensive computations can be
pre-processed. Thus, Laplacian lines are very promising for interactively
illustrating large-scale models.
pdf video bibtex slides
|
Stippling by Example
SungYe Kim, Ross Maciejewski, Tobias Isenberg, William M. Andrews, Wei Chen, Mario Costa Sousa, David S. Ebert
Proceedings of the 7th International Symposium on Non-Photorealistic Animation and Rendering (NPAR), 2009.
In this work, we focus on stippling as an artistic style and discuss
our technique for capturing and reproducing stipple features unique
to an individual artist. We employ a texture synthesis algorithm
based on the gray-level co-occurrence matrix (GLCM) of a texture
field. This algorithm uses a texture similarity metric to generate
stipple textures that are perceptually similar to input samples, allowing
us to better capture and reproduce stipple distributions. First,
we extract example stipple textures representing various tones in order
to create an approximate tone map used by the artist. Second,
we extract the stipple marks and distributions from the extracted
example textures, generating both a lookup table of stipple marks
and a texture representing the stipple distribution. Third, we use
the distribution of stipples to synthesize similar distributions with
slight variations using a numerical measure of the error between the
synthesized texture and the example texture as the basis for replication.
Finally, we apply the synthesized stipple distribution to a
2D grayscale image and place stipple marks onto the distribution,
thereby creating a stippled image that is statistically similar to images
created by the example artist.
pdf video bibtex slides
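The gray-level co-occurrence matrix at the core of the similarity metric is simple to compute. The sketch below builds a GLCM for one pixel offset and evaluates a Haralick-style contrast statistic on two synthetic textures; it illustrates the matrix itself, not the paper's full synthesis loop, and the images and parameters are assumptions.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one offset: normalized counts
    of pixel pairs (p, q) where q lies (dy, dx) away from p."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """Haralick contrast: expected squared gray-level difference of pairs."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

rng = np.random.default_rng(3)
noisy = rng.integers(0, 4, size=(64, 64))          # uncorrelated "stipples"
smooth = np.repeat(np.repeat(rng.integers(0, 4, size=(16, 16)), 4, 0), 4, 1)

print(contrast(glcm(noisy)), contrast(glcm(smooth)))
```

The uncorrelated field scores much higher contrast than the blocky one, so matching such statistics steers a synthesized stipple distribution toward the example's spatial character.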
|
Visualizing Diffusion Tensor Imaging Data with Merging Ellipsoids
Wei Chen, Song Zhang, Stephen Correia, David F. Tate
Proceedings of IEEE Pacific Visualization Symposium 2009, Beijing, China, April 2009, pp. 145-152.
Diffusion tensor fields reveal the underlying anatomical structures in biological tissues such as neural fibers in the brain. Most current methods for visualizing the diffusion tensor field can be categorized into two classes: integral curves and glyphs. Integral curves are continuous and represent the underlying fiber structures, but are prone to integration error and loss of local information. Glyphs are useful for representing local tensor information, but do not convey the connectivity in the anatomical structures well. We introduce a simple yet effective visualization technique that extends the streamball method in flow visualization to tensor ellipsoids. Each tensor ellipsoid represents a local tensor, and either blends with neighboring tensors or breaks away from them depending on their orientations and anisotropies. The resulting visualization shows the connectivity information in the underlying anatomy while characterizing the local tensors in detail.
pdf video bibtex slides
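A minimal sketch of the blend-or-break decision: each tensor's principal direction and fractional anisotropy come from its eigendecomposition, and neighbors blend when their directions align and both are anisotropic. The thresholds and the exact criterion are illustrative assumptions, not the paper's.

```python
import numpy as np

def should_blend(t1, t2, angle_thresh=0.9, fa_thresh=0.2):
    """Blend two neighboring tensor ellipsoids when their principal
    directions align and both are sufficiently anisotropic.
    Thresholds are illustrative, not taken from the paper."""
    def principal_and_fa(t):
        w, v = np.linalg.eigh(t)                    # ascending eigenvalues
        fa = np.sqrt(1.5 * ((w - w.mean()) ** 2).sum() / (w ** 2).sum())
        return v[:, -1], fa                         # major axis, anisotropy
    d1, fa1 = principal_and_fa(t1)
    d2, fa2 = principal_and_fa(t2)
    return abs(d1 @ d2) > angle_thresh and min(fa1, fa2) > fa_thresh

linear = np.diag([1.0, 0.2, 0.2])                   # strongly linear tensor
rotated = np.diag([0.2, 0.2, 1.0])                  # linear but orthogonal
print(should_blend(linear, linear), should_blend(linear, rotated))
```

Aligned anisotropic neighbors merge into a continuous tube-like shape, while misaligned ones stay as separate glyphs, giving both connectivity and local detail.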
|
Shape Context Preserving Deformation of 2D Anatomical Illustrations (Project Webpage)
Wei Chen, Xiao Liang, Ross Maciejewski, David S.Ebert
Journal of Computer Graphics Forum, 28(1), January 2009, pp. 114-126
In this paper we present a novel 2D shape-context-preserving image manipulation approach which constructs and manipulates a 2D mesh with a new differential mesh editing algorithm. We introduce a novel shape context descriptor and integrate it into the deformation framework, facilitating shape-preserving deformation for 2D anatomical illustrations. Our new scheme utilizes an analogy-based shape transfer technique in order to learn shape styles from reference images. Experimental results show that visually plausible deformation can be quickly generated from an existing example at interactive frame rates. An experienced artist has evaluated our approach and his feedback is quite encouraging.
pdf video bibtex
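To make the descriptor concrete, the sketch below computes the classic log-polar shape context (in the style of Belongie et al.) for one sample point on an outline: a histogram of where all other points fall, binned by log-radius and angle. The paper introduces its own variant, so treat this as a generic stand-in with assumed bin counts.

```python
import numpy as np

def shape_context(points, index, r_bins=5, theta_bins=12):
    """Log-polar histogram of the positions of all other sample points
    relative to points[index] -- a basic shape context descriptor."""
    d = np.delete(points, index, axis=0) - points[index]
    r = np.log1p(np.hypot(d[:, 0], d[:, 1]))        # log-radius
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    hist, _, _ = np.histogram2d(
        r, theta,
        bins=[r_bins, theta_bins],
        range=[[0, r.max() + 1e-9], [0, 2 * np.pi]])
    return hist / hist.sum()

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])    # sampled outline
desc = shape_context(circle, 0)
print(desc.shape)
```

Because the histogram summarizes each point's relative context, penalizing changes to it during mesh editing keeps the deformed outline recognizably the same shape.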
|
Abstractive Representation and Exploration of Hierarchically Clustered Diffusion Tensor Fiber Tracts
Wei Chen, Song Zhang, Stephen Correia, David S. Ebert
Journal of Computer Graphics Forum (Proceedings of EuroVis 2008), 27(3), May 2008, pp. 1071-1078.
Diffusion tensor imaging (DTI) has been used to generate fibrous structures in both brain white matter and muscles. Fiber clustering groups the DTI fibers into spatially and anatomically related tracts. As an increasing number of fiber clustering methods have been developed recently, it is important to display, compare, and explore the clustering results efficiently and effectively. In this paper, we present an anatomical visualization technique that reduces the geometric complexity of the fiber tracts and emphasizes the high-level structures. Beginning with a volumetric diffusion tensor image, we first construct a hierarchical clustering representation of the fiber bundles. These bundles are then reformulated into 3D multi-valued volume data. We then build a set of geometric hulls and principal fibers to approximate the shape and orientation of each fiber bundle. By simultaneously visualizing the geometric hulls, individual fibers, and other data sets such as fractional anisotropy, the overall shape of the fiber tracts is highlighted while preserving the fibrous details. A rater with expert knowledge of white matter structure has evaluated the resulting interactive illustration and confirmed the improvement over straightforward DTI fiber tract visualization.
pdf video bibtex slides
|
Shape-aware Volume Illustration
Wei Chen, Aidong Lu, David S.Ebert
Journal of Computer Graphics Forum (Proceedings of Eurographics 2007), 26(3), Czech Republic, September 2007, pp. 705-714
We introduce a novel volume illustration technique for regularly sampled volume datasets. The fundamental difference between previous volume illustration algorithms and ours is that our results are shape-aware, as they depend not only on the rendering styles, but also on the shape styles. We propose a new data structure that is derived from the input volume and consists of a distance volume and a segmentation volume. The distance volume is used to reconstruct a continuous field around the object boundary, facilitating smooth illustrations of boundaries and silhouettes. The segmentation volume allows us to abstract or remove distracting details and noise, and apply different rendering styles to different objects and components. We also demonstrate how to modify the shape of illustrated objects using a new 2D curve analogy technique. This provides an interactive method for learning shape variations from 2D hand-painted illustrations by drawing several lines. Our experiments on several volume datasets demonstrate that the proposed approach can achieve visually appealing and shape-aware illustrations. The feedback from medical illustrators is quite encouraging.
pdf video bibtex slides
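The distance-volume component can be sketched directly: given a segmentation, a signed distance to the object boundary gives a continuous field whose zero level set is the surface. The fragment below builds one for a hypothetical spherical segmentation using a Euclidean distance transform; the object and resolution are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical segmentation volume: a solid sphere labeled 1, background 0.
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
inside = (x**2 + y**2 + z**2) < 10**2

# Signed distance to the boundary: positive outside the object, negative inside.
dist = distance_transform_edt(~inside) - distance_transform_edt(inside)

# The continuous field near the zero level set is what supports smooth
# boundary and silhouette illustration.
print(float(dist[16, 16, 16]), float(dist[0, 0, 0]))
```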
|
Easy matting: A Stroke Based Approach for Continuous Image Matting
Yu Guan, Wei Chen, Xiao Liang, Zi'ang Ding, Qunsheng Peng
Journal of Computer Graphics Forum (Proceedings of Eurographics 2006), 25(3):567-576
We propose an iterative energy minimization framework for interactive image matting. Our approach is easy in the sense that it is fast and requires only a few user-specified strokes for marking the foreground and background. Beginning with the known region, we model the unknown region as a Markov Random Field (MRF) and formulate its energy in each iteration as the combination of one data term and one smoothness term. By automatically adjusting the weights of both terms during the iterations, a first-order continuous and feature-preserving result is rapidly obtained within several iterations. The energy optimization can be further performed in selected local regions for refined results. We demonstrate that our energy-driven scheme can be extended to video matting, with which the spatio-temporal smoothness is faithfully preserved. We show that the proposed approach outperforms previous methods in terms of both quality and performance on quite challenging examples.
pdf video bibtex slides
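The data-plus-smoothness iteration can be caricatured on a toy 1D strip of unknown pixels between known background and foreground. This is not the paper's MRF formulation: the per-pixel alpha estimate, the update rule, and the weight schedule below are all invented to show only the mechanism of shifting weight between the two terms across iterations.

```python
import numpy as np

# Toy 1D "unknown region" between known background (alpha=0) and known
# foreground (alpha=1). The data term pulls each pixel toward a local
# per-pixel alpha estimate; the smoothness term pulls toward neighbors.
n = 32
data_est = np.clip(np.linspace(-0.2, 1.2, n), 0, 1)   # hypothetical estimates
alpha = np.full(n, 0.5)
alpha[0], alpha[-1] = 0.0, 1.0                        # known boundary pixels

for it in range(200):
    w_data = min(1.0, 0.1 + 0.02 * it)   # weight shifts toward data over time
    w_smooth = 1.0 - w_data + 0.1
    neighbor_mean = 0.5 * (alpha[:-2] + alpha[2:])
    alpha[1:-1] = (w_data * data_est[1:-1] + w_smooth * neighbor_mean) \
                  / (w_data + w_smooth)

print(alpha.min(), alpha.max())
```

Early iterations favor smoothness (fast propagation from the known region); later ones favor the data term, sharpening the matte toward the per-pixel evidence.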
|
Hardware-Accelerated Adaptive EWA Volume Splatting (Project Webpage)
Wei Chen, Liu Ren, Matthias Zwicker and Hanspeter Pfister
Proceedings of IEEE Visualization 2004, October 2004, Austin, USA, pp. 67-74
We present a hardware-accelerated adaptive EWA (elliptical weighted average) volume splatting algorithm. EWA splatting combines a Gaussian reconstruction kernel with a low-pass image filter for high image quality without aliasing artifacts or excessive blurring. We introduce a novel adaptive filtering scheme to reduce the computational cost of EWA splatting. We show how this algorithm can be efficiently implemented on modern graphics processing units (GPUs). Our implementation includes interactive classification and fast lighting. To accelerate the rendering we store splat geometry and 3D volume data locally in GPU memory. We present results for several rectilinear volume datasets that demonstrate the high image quality and interactive rendering speed of our method.
pdf video bibtex slides
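The core EWA identity is that convolving two Gaussians adds their covariances: the reconstruction kernel is warped to screen space by the local affine Jacobian, then the low-pass filter's covariance is added. A minimal sketch, with an assumed Jacobian and kernel (the adaptive scheme in the paper decides when either term can be dropped):

```python
import numpy as np

def ewa_screen_covariance(V, J, low_pass_var=1.0):
    """EWA resampling filter: warp the Gaussian reconstruction kernel's
    covariance V into screen space with the local affine Jacobian J,
    then add the low-pass (antialiasing) filter's covariance."""
    return J @ V @ J.T + low_pass_var * np.eye(2)

V = np.eye(3) * 0.25                       # isotropic voxel reconstruction kernel
J = np.array([[2.0, 0.0, 0.0],             # hypothetical 3D->2D projection Jacobian
              [0.0, 0.5, 0.1]])
cov = ewa_screen_covariance(V, J)
print(cov)
```

When the warped kernel is tiny (heavy minification), the low-pass term dominates and the splat degenerates to the screen-space filter, which is the observation the adaptive scheme exploits to save computation.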
|
Real-time Voxelization for Complex Polygonal Models
Zhao Dong, Wei Chen, Hujun Bao, Hongxin Zhang and Qunsheng Peng
Proceedings of Pacific Graphics 2004, October 2004, Seoul, Korea, pp. 73-78
In this paper we present a real-time voxelization algorithm implemented on modern programmable graphics hardware. The algorithm first converts a geometric model into a discrete voxel space. The resultant voxels are then encoded as 2D textures and stored in three intermediate sheet buffers, called directional sheet buffers, according to the orientation of the boundary surface. These buffers are finally synthesized into one worksheet. The whole pipeline traverses the geometric model only once and is accomplished entirely on the GPU (graphics processing unit), yielding a real-time and configurable voxelization engine. The algorithm is simple to implement and can be added easily to diverse applications such as volume-based modelling, transparent rendering and collision detection.
pdf video bibtex slides
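The routing of surface fragments into the three directional sheet buffers comes down to classifying each boundary normal by its dominant axis, so that no triangle is viewed edge-on in its buffer. A CPU-side sketch of just that classification (the paper performs the whole pipeline on the GPU; the sample normals are made up):

```python
import numpy as np

def dominant_axis(normal):
    """Classify a surface normal by its dominant axis (0=x, 1=y, 2=z);
    each class is rasterized into one of three directional sheet buffers
    so every boundary triangle covers at least one voxel column."""
    return int(np.argmax(np.abs(normal)))

normals = np.array([[0.9, 0.1, 0.2],
                    [0.1, 0.8, 0.1],
                    [0.2, 0.1, 0.95]])
print([dominant_axis(nrm) for nrm in normals])
```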
|