Vivek Pradeep, Ph.D. Student in Biomedical Engineering
Address: Doheny Vision Research Center
1355 San Pablo Street, Suite 100
Los Angeles, CA 90033
Phone: (323) 442-6765
Fax: (323) 442-6755
Vivek Pradeep received his bachelor's degree in Electronics and Communications Engineering from the Indian Institute of Technology, Roorkee in 2005. He was awarded the department gold medal for the best B.Tech project in his batch, for work on a collision avoidance system for busy road intersections. He joined the Ph.D. program in Biomedical Engineering at USC in Fall 2005, and is currently working at the BMES-ERC and the Institute for Robotics and Intelligent Systems (IRIS) in the Department of Computer Science. His research interests include:
1) Developing efficient algorithms for robot vision that enable intelligent interactions with the environment such as autonomous navigation, target pursuit and active exploration.
2) Investigating image formation and perception at the biological level and engineering biologically-inspired vision systems.
3) Robust and stable techniques for recovering structure from motion and multiple-view geometry.
4) Parallel processing using Graphics Processing Units (GPUs) for improved performance of computer vision algorithms and optimization for embedded applications.
5) Conducting psychophysics experiments for analyzing human visual behavior and human-computer interaction.
His current project is focused on vision-based Simultaneous Localization and Mapping (vSLAM) algorithms for developing mobility aids for the visually impaired. Due to surgical and technological limitations, the retinal prosthesis is presently able to provide vision only in the central 20-degree field of view. However, peripheral vision is necessary for autonomous navigation. An algorithm that estimates the position of stationary or moving obstacles in the peripheral field of view and cues subjects toward them is proposed. A head-mounted stereo vision sensor acquires 3D point clouds of the surrounding environment, and the SLAM system continuously updates the computed map and camera trajectory. An interpretation layer detects and classifies obstacles, curbs and step-downs. Mobility studies with normally sighted and visually impaired subjects are currently in progress to test the efficacy and applicability of the system. Furthermore, an investigation into the best cueing modality (tactile, auditory, or the prosthesis itself) is underway.
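The cueing idea described above can be illustrated with a minimal sketch. This is not the actual vSLAM implementation; the function names, the +/-10-degree central-field threshold, the 5-meter cueing range, and the nearest-obstacle rule are all illustrative assumptions:

```python
import math

# Illustrative sketch (not the actual system). A 3D point is (x, y, z)
# in the camera frame: x right, y down, z forward. Points outside the
# central 20-degree field of view are treated as peripheral obstacles,
# and the subject is cued toward the nearest one.

CENTRAL_HALF_ANGLE_DEG = 10.0   # central 20-degree FOV -> +/-10 degrees
MAX_RANGE_M = 5.0               # hypothetical cueing range

def horizontal_angle_deg(point):
    """Angle of the point from the forward (z) axis, in degrees."""
    x, _, z = point
    return math.degrees(math.atan2(x, z))

def peripheral_cue(points):
    """Return ('left' | 'right', distance) for the nearest peripheral
    obstacle within range, or None if no cue is needed."""
    best = None
    for p in points:
        angle = horizontal_angle_deg(p)
        if abs(angle) <= CENTRAL_HALF_ANGLE_DEG:
            continue  # within the prosthesis field of view; no cue needed
        dist = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
        if dist > MAX_RANGE_M:
            continue  # too far away to matter for mobility
        if best is None or dist < best[1]:
            best = ('left' if angle < 0 else 'right', dist)
    return best
```

For example, a point at (1.5, 0.0, 2.0) lies about 37 degrees to the right of the camera axis at 2.5 m, so `peripheral_cue` would return a rightward cue; a point straight ahead at (0.0, 0.0, 3.0) falls inside the central field and produces no cue.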