Lecture Series No. 26: When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations
Title: When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations
Time: May 9, 2023, 14:00
Venue: Room 402, Mengminwei Building, Zijingang Campus, Zhejiang University
Speaker: Dr. Zhutian Chen
Host: Prof. Yingcai Wu
Abstract: We live in a dynamic world that produces a growing volume of accessible data. Visualizing this data within its physical context can aid situational awareness, improve decision-making, enhance daily activities like driving and watching sports, and even save lives in tasks such as performing surgery or navigating hazardous environments. Augmented Reality (AR) offers a unique opportunity to achieve this contextualization of data by overlaying digital content onto the physical world. However, visualizing data in its physical context using AR devices (e.g., headsets or smartphones) is challenging for users due to the complexities involved in creating and accurately placing the visualizations within the physical world. These challenges can be even more pronounced in dynamic scenarios with temporal constraints. In this talk, I will introduce a novel approach, which uses sports video streams as a testbed and proxy for dynamic scenes, to explore the design, implementation, and evaluation of AR visualization systems that enable users to efficiently visualize data in dynamic scenes. I will first present three systems that allow users to visualize data in sports videos through touch, natural language, and gaze interactions, and then discuss how these interaction techniques can be generalized to other AR scenarios. The designs of these systems collectively form a unified framework that serves as a preliminary solution for helping users visualize data in dynamic scenes using AR. I will next share my latest progress in using Virtual Reality (VR) simulations as a more advanced testbed than videos for AR visualization research. Finally, building on my framework and testbeds, I will describe my long-term vision and roadmap for using AR visualizations to help our world become more connected, accessible, and efficient.
Speaker Bio: Zhutian Chen is a Postdoctoral Fellow in the Visual Computing Group at Harvard University. His research lies at the intersection of Data Visualization, Human-Computer Interaction, and Augmented Reality, with a focus on advancing human-data interaction in everyday activities. His research has been published as full papers in top venues such as IEEE VIS, ACM CHI, and TVCG, and has received one Best Paper Award at ACM CHI and three Best Paper nominations at IEEE VIS, the premier venue in data visualization. Before joining Harvard, he was a Postdoctoral Researcher in the Design Lab at UC San Diego. Zhutian received his Ph.D. in Computer Science and Engineering from the Hong Kong University of Science and Technology.


[Posted: 2023-07-12 12:31]
Address: 866 Yuhangtang Road, Hangzhou, Zhejiang, China (310058)
Copyright © State Key Lab of CAD&CG, Zhejiang University 浙ICP备05074421