I am interested in computer graphics/rendering, computer vision and perception, and future computing environments.
I received my Ph.D. from the College of Computing at Georgia Tech after performing research in the Computational Perception Lab.
My email address is mail@myfirstnamemylastname.com.
Cameras and Vision: How can mobile games use computer vision to enable new game experiences? (GDC Mobile 2006)
Vision-based adaptive viewing: Vision-based mobile user interfaces permit one-handed operation of complex tasks such as media gallery browsing.
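Not the published system, just a minimal sketch of the general idea, assuming OpenCV and a phone-style camera at index 0: estimate apparent camera motion with sparse optical flow and map the median horizontal displacement to a gallery scroll offset, so tilting or panning the device browses the gallery with one hand.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    scroll = 0.0  # hypothetical one-dimensional gallery scroll position

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track sparse corners between frames to estimate how the device moved.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=8)
        if pts is not None:
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = status.ravel() == 1
            if good.any():
                flow = (nxt[good] - pts[good]).reshape(-1, 2)
                # Median horizontal displacement stands in for how far the user panned;
                # the gain of 0.5 is arbitrary.
                scroll -= 0.5 * float(np.median(flow[:, 0]))
                print(f"gallery scroll position: {scroll:.1f}")
        prev_gray = gray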
Mobile vision-based UIs: Computer vision-based tracking can be used to create richer user interfaces than with buttons alone. (Video here)
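As a toy illustration of tracking-driven input (not the interface shown in the video), the camera can follow a brightly coloured physical marker and use its centroid as a cursor; the HSV range below is an uncalibrated guess for a green marker.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    # Rough HSV range for a green marker; a real system would calibrate this.
    lower, upper = np.array([40, 80, 80]), np.array([80, 255, 255])

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), lower, upper)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            # The marker centroid acts as a cursor position the UI can follow.
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print(f"cursor at ({cx:.0f}, {cy:.0f})")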
Example-based processing (Thesis): Example-based processing can be used to create imagery using machine learning.
Exemplar-based surface texture: Novice users can easily create photorealistic renderings of 3D models from a digital photograph of a source material.
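A greatly simplified sketch of the exemplar idea, not the published technique: tile randomly chosen patches from a source photograph into a larger texture map that could then be applied to a 3D model. The filename and sizes below are hypothetical, and the photo is assumed to be larger than the patch size.

    import cv2
    import numpy as np

    def synthesize(exemplar, out_size=512, patch=48):
        # Fill the output texture with random patches copied from the exemplar.
        h, w = exemplar.shape[:2]
        out = np.zeros((out_size, out_size, 3), exemplar.dtype)
        rng = np.random.default_rng(0)
        for y in range(0, out_size, patch):
            for x in range(0, out_size, patch):
                sy = int(rng.integers(0, h - patch))
                sx = int(rng.integers(0, w - patch))
                ph, pw = min(patch, out_size - y), min(patch, out_size - x)
                out[y:y + ph, x:x + pw] = exemplar[sy:sy + ph, sx:sx + pw]
        return out

    texture = synthesize(cv2.imread("source_material.jpg"))  # hypothetical filename
    cv2.imwrite("synthesized_texture.png", texture)

The real work handles seams and structure far better; this only conveys how a single photograph can serve as the exemplar.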
Learning video processing by example: This algorithm approximates the output of an arbitrary video processing algorithm from a pair of input and output exemplars, learning the mapping between them to model the processing that has taken place.
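A toy version of the setup, assuming the mapping is approximated by a linear regression from each pixel's local grayscale neighbourhood in the input exemplar to the corresponding output pixel; the thesis algorithm is considerably more elaborate than a linear fit.

    import numpy as np

    def patches(img, k=2):
        # Stack each pixel's (2k+1)x(2k+1) neighbourhood as one feature row.
        pad = np.pad(img, k, mode="edge")
        rows = []
        for dy in range(2 * k + 1):
            for dx in range(2 * k + 1):
                rows.append(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]].ravel())
        return np.stack(rows, axis=1).astype(np.float64)

    def learn_mapping(example_in, example_out, k=2):
        # Fit the neighbourhood-to-pixel map on the exemplar pair.
        X = patches(example_in, k)
        y = example_out.ravel().astype(np.float64)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def apply_mapping(w, frame, k=2):
        # Reuse the learned map on any new frame of the same kind.
        return (patches(frame, k) @ w).reshape(frame.shape).clip(0, 255)

Fitting once on the exemplar pair and reusing the learned mapping on every subsequent grayscale frame is what makes the approximation cheap to apply.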
Fine-scale skin structure rendering: Rendering the fine structure present in skin adds significantly to its photo-realism.
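To illustrate why the fine structure matters (this is not the published technique), one can perturb surface normals with a high-frequency height field and shade with a simple Lambertian model; the same patch with unperturbed normals would shade to a flat constant.

    import cv2
    import numpy as np

    # High-frequency height field standing in for fine-scale skin structure.
    height = np.random.default_rng(1).normal(0.0, 1.0, (256, 256)).astype(np.float32)
    height = cv2.GaussianBlur(height, (0, 0), 1.5) * 0.05

    # Finite-difference gradients give a perturbed normal per pixel.
    gy, gx = np.gradient(height)
    normals = np.dstack([-gx, -gy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    light = np.array([0.3, 0.3, 0.9])
    light /= np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, 1.0)  # Lambertian term
    cv2.imwrite("fine_scale_patch.png", (shading * 255).astype(np.uint8))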
Eye tracking: Detecting eyes in real time, at low computational cost, is a fundamental first step towards more attentive user interfaces.
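A minimal real-time eye detection loop using OpenCV's stock Haar cascades; this is a generic detector, not the specific tracking method described above.

    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Searching for eyes only inside detected faces keeps the per-frame cost low.
            roi = gray[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.imshow("eyes", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break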
Head tracking: Natural interaction with computers requires awareness of the user's head position and orientation. Head tracking can also be used to add special effects to video streams.
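As a simplified stand-in for head tracking, the sketch below estimates head position (and a rough proxy for distance from the detection size) each frame; recovering full orientation would additionally need facial landmarks and a pose solver such as cv2.solvePnP, which is omitted here.

    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            # Assume the largest detection is the user sitting in front of the camera.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            cx, cy = x + w / 2, y + h / 2
            print(f"head centre ({cx:.0f}, {cy:.0f}), apparent size {w}px")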