Workshop on Augmented and Mixed Reality: Speakers, Moderators and Panelists

Ronald Azuma

Ronald Azuma is a Principal Engineer and Research Manager in Intel Labs, in a group that creates the new computational imaging and display systems needed to enable novel media experiences in Augmented and Virtual Reality and other usages. He is known as a pioneer in AR and is generally credited with defining the term “Augmented Reality.” He built the world’s first working AR system and wrote the single most referenced publication in the field of AR, which was listed as one of “50 influential papers” from the entire history of MIT Press journal publications. He serves on the Steering Committee for the IEEE International Symposium on Mixed and Augmented Reality (ISMAR). He received a B.S. in EECS from UC Berkeley and an M.S. and Ph.D. in CS from UNC Chapel Hill. In 2016, he became an IEEE Fellow.


Selim BenHimane

Selim BenHimane is an Intel Principal Engineer leading the SLAM and Dense Reconstruction team in the Perceptual Computing Group. His team develops spatial localization and mapping software for AR/VR headsets as well as for mobile robots. Prior to Intel, he was the Head of Research at metaio, one of the leading Augmented Reality companies, which was acquired by Apple in 2015. Selim received a PhD in 2006 from the Ecole des Mines de Paris after three years of research on real-time visual tracking and servoing at INRIA Sophia Antipolis, France. He received the prize for the Best French PhD in Applied and Innovative Research from the Federation of High Technology Associations (ASTI). During his postdoc, he was a research associate leading the Computer Vision group in the CAMPAR team at the Technical University of Munich, Germany. Selim has authored or co-authored over 25 patents and over 50 scientific publications in journals and international conference proceedings.


Ian Bratt

Ian Bratt is a Distinguished Engineer at ARM, where he works as a member of the Architecture and Technology Group. Ian spent five years as an architect working on the ARM Mali family of GPUs, during a fast-growing period that culminated in ARM partners shipping over 1B Mali GPUs in 2016. Prior to ARM, Ian worked at the pioneering multicore startup Tilera. Ian has worked on CPUs, GPUs, memory systems, and SoC architecture. He holds an S.M. from MIT and has 23 granted US patents.


Gary Bradski

Gary Bradski, PhD is an entrepreneur, engineer, and researcher in computer vision and machine learning. He founded and still runs the most popular computer vision library in the world, OpenCV (http://opencv.org/). He organized the computer vision team for Stanley, the autonomous car that won the $2M DARPA Grand Challenge and in turn kicked off the autonomous driving industry; Stanley now resides in the Smithsonian Air and Space Museum. Gary served as a visiting Professor in the Stanford University Computer Science Department for seven years, where he co-founded the Stanford AI Robot (STAIR) project, the forerunner of the Robot Operating System (ROS) and the PR2 robot developed at Willow Garage, where he also served as Senior Scientist and manager. He founded Industrial Perception Inc., which was sold to Google in 2013, and helped develop VideoSurf, which was sold to Microsoft in 2011. He has a long list of patents and publications, has written two textbooks, and sits on the boards and advisory boards of several Silicon Valley companies. Currently, he is Co-founder and CTO of Arraiy.com, founded in 2016 and located in Palo Alto.


Jim Dunphy

Jim Dunphy is an Optical Hardware Engineering Lead on the Google Daydream Hardware team, overseeing the development of display and optics architectures for AR. He was a technical lead and contributed to the development of Google Glass while working at Google X. Prior to Google, he developed broad expertise in optics, electronics, displays, manufacturing, and test equipment through the development and commercialization of a wide range of electronics and optics technologies, including sensors, MEMS displays, photovoltaics, and printed electronics, at various tech companies in Silicon Valley. He holds a Ph.D. in Physics from UC Berkeley and a B.S. from MIT.


Meron Gribetz

A neuroscientist by training, Meron Gribetz spent years studying how the human mind and body work. A flash of insight at a New York bar – where he watched people both fail and succeed in communicating with each other using their mobile devices – inspired a desire to build a new kind of “natural machine” that could better connect people to each other and the world around them.
Building on his research in both computer science and neuroscience at Columbia University, he developed the basic tenets that now underpin Meta’s Neural Interface design philosophy. Today, as its founder, he leads Meta, the first company to deliver augmented reality (AR) designed around the way people are built to experience the world.


Hong Hua

Hong Hua, Fellow of SPIE, is a Professor with the College of Optical Sciences (OSC), The University of Arizona. She has over 20 years of experience researching and developing head-mounted display technologies for virtual and augmented reality applications and investigating visual perceptual issues related to the use of head-mounted displays. Dr. Hua is the Principal Investigator of the 3D Visualization and Imaging Systems Laboratory (3DVIS Lab); her current research interests include head-mounted displays, light field displays and imaging, optical engineering, medical imaging, augmented reality, and virtual reality.


Michael Kass

Michael Kass is a Senior Principal Engineer in the New Technology Group at Intel with particular expertise in computer vision, computer graphics, and augmented and virtual reality. In 2006, he received a Scientific and Technical Academy Award for his work on cloth simulation. In 2009, he received the ACM SIGGRAPH Computer Graphics Achievement Award “for his extensive and significant contributions to computer graphics, ranging from image processing to animation to modeling, and in particular for his use of optimization for physical simulation and image segmentation.” His seminal computer vision paper, “Snakes: Active contour models,” has been cited over twenty thousand times. He holds 25 patents. Before joining Intel, he was a senior research scientist at Pixar Animation Studios and a Distinguished Fellow at Magic Leap. He holds a B.A. from Princeton University (independent concentration in Artificial Intelligence), an M.S. in Computer Science from MIT, and a Ph.D. in Electrical Engineering from Stanford.


Bernard Kress

Bernard Kress is the Partner Optical Architect on the HoloLens team at Microsoft. His group focuses on next-generation architectures for AR and MR. Prior to Microsoft, he was the principal optical architect at Google Glass, and before that he was CTO of a micro-optics cleanroom fab in San Jose and of SBG Labs (DigiLens). His main interests are in the fields of holography, diffractive optics, and micro- and nano-optics. He has authored four books on micro-optics and co-authored numerous books on diffractive optics, parity-time symmetry optics, nanophotonics, and optics for AR. He was named an SPIE Fellow in 2013 and elected to the SPIE Board of Directors in 2016. He is chair of the upcoming SPIE EDO17 conference in Munich, dedicated to optical technologies for AR and MR.


Douglas Lanman, PhD

Douglas Lanman is a Research Scientist at Oculus, where he leads investigations into computational displays and imaging systems. His prior research has focused on head-mounted displays, glasses-free 3D displays, light field cameras, and active illumination for 3D reconstruction and interaction. He received a B.S. in Applied Physics with Honors from Caltech in 2002 and M.S. and Ph.D. degrees in Electrical Engineering from Brown University in 2006 and 2010, respectively. He was a Senior Research Scientist at NVIDIA Research from 2012 to 2014, a Postdoctoral Associate at the MIT Media Lab from 2010 to 2012, and an Assistant Research Staff Member at MIT Lincoln Laboratory from 2002 to 2005.


Johnny Lee

Johnny Lee is an Engineering Director on the Daydream team, a division focused on creating hardware and software for immersive VR/AR experiences. He leads the Tango program, which focuses on mobile hardware and software technologies for real-time motion tracking and environment mapping. Previously, he helped Google X explore new projects as a Rapid Evaluator and was a core algorithms contributor to the original Xbox Kinect. His YouTube videos demonstrating Wii Remote hacks have surpassed 15 million views, and his TED talk became one of the most popular TED talk videos. In 2008, he received his PhD in Human-Computer Interaction from Carnegie Mellon University, and he has been recognized in MIT Technology Review’s TR35.


David Luebke

David Luebke is the Vice President of Graphics Research at NVIDIA, where he helped found NVIDIA Research in 2006 after eight years on the faculty of the University of Virginia. Luebke received his Ph.D. under Fred Brooks at the University of North Carolina in 1998. His principal research interests are virtual and augmented reality, ray tracing, and real-time rendering. Luebke is an IEEE Fellow and his honors include the NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career PI awards, and the ACM Symposium on Interactive 3D Graphics “Test of Time Award”. Dr. Luebke has co-authored a book, a SIGGRAPH Electronic Theater piece, a major museum exhibit visited by over 110,000 people, and dozens of papers, articles, chapters, and patents.


Richard Newcombe

Richard Newcombe is Research Lead of Machine Perception at Facebook’s Oculus Research and an affiliate assistant professor at the University of Washington. His research group at Facebook is developing a new generation of machine perception devices and infrastructure to enable always-on contextualized AI and social teleportation, built upon real-time mapping and tracking technology. He received his PhD from Imperial College London, completed a postdoctoral fellowship at the University of Washington, and went on to co-found Surreal Vision Ltd., which was sold to Facebook in 2015. His research introduced the dense SLAM paradigm demonstrated in KinectFusion and DynamicFusion, which have influenced a generation of real-time dense reconstruction systems used in the emerging fields of AR/VR and robotics.


Dr. Liang Peng

Dr. Liang Peng is a Senior Director of Technical Planning & Strategy in the IC Lab of the Huawei US R&D Center. He has worked in the semiconductor industry for 19 years, with a primary focus on computer architecture for GPUs, video, and memory. He worked at Intel, NVIDIA, and Rambus, among others, before joining Huawei. His pioneering work on programmable pixel shaders in GPU pipeline design helped pave the way for the high-density, massively parallel computing of modern GPUs. Liang received a BS in Astrophysics from Peking University and a PhD from the Program of Computer Graphics at Cornell University. He is a member of ACM SIGGRAPH, IEEE, and CASPA.


Marc Pollefeys

Marc Pollefeys is Director of Science, leading a team of scientists and engineers that develops advanced perception capabilities for HoloLens. He is also a Professor of Computer Science at ETH Zurich. In his PhD work at KU Leuven in Belgium in the late 90s, he was the first to develop a software pipeline to automatically turn photographs into detailed 3D models. More recent projects include real-time 3D scanning with mobile devices, 3D reconstruction of cities, self-driving cars and autonomous drones, as well as combining 3D reconstruction with semantic scene understanding. He has co-founded several start-ups and is a Fellow of the IEEE.


Kari Pulli

Kari Pulli is CTO at Meta. Before joining Meta, Kari was CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs. He was VP of Computational Imaging at Light, and before that he led research teams at NVIDIA Research (Senior Director) and at Nokia Research (Nokia Fellow) on Computational Photography, Computer Vision, and Augmented Reality. He headed Nokia’s graphics technology, contributed to many Khronos and JCP mobile graphics and media standards, and wrote a book on mobile 3D graphics. Kari holds CS degrees from the University of Minnesota (BSc), the University of Oulu (MSc, Lic. Tech.), and the University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, the University of Oulu, and MIT.


Gerhard Reitmayr

Gerhard is a Senior Director of Technology at Qualcomm Corporate R&D, where he leads a team working on localization and reconstruction systems for VR/AR headsets. Before joining Qualcomm, Gerhard was a postdoc at the Cambridge University Engineering Department, developing real-time computer vision systems for AR, and then an assistant professor for Augmented Reality at Graz University of Technology, leading research projects on mobile tracking and reconstruction, visual coherence and visualization in AR, and industrial applications of AR. Gerhard holds a Master’s degree in engineering mathematics (2000) and a PhD in computer science (2004) from the Vienna University of Technology. He has co-authored over 100 peer-reviewed publications, served twice as program chair for the International Symposium on Mixed and Augmented Reality (ISMAR), and has served as a member of the ISMAR steering committee.


Dr. Chris Rowen

Dr. Chris Rowen is the founder and CEO of Cognite Ventures. Chris is a well-known Silicon Valley entrepreneur and technologist. He served as CTO of Cadence’s IP Group, where he and his team developed new processor and memory technologies for advanced applications in mobile, automotive, infrastructure, deep-learning, and IoT systems. Chris joined Cadence after its acquisition of Tensilica, the company he founded to develop extensible processors. He led Tensilica, as CEO and later CTO, as it became one of the leading embedded processor architectures, with more than 225 chip and system company licensees who together ship more than 4 billion cores per year. Before founding Tensilica in 1997, he was VP and GM of the Design Reuse Group at Synopsys. Chris was also a pioneer in developing RISC architecture and helped found MIPS Computer Systems, where he was VP of Microprocessor Development. He holds an MSEE and PhD in electrical engineering from Stanford and a BA in physics from Harvard. He holds more than 40 US and international patents. He was named an IEEE Fellow in 2015 for his work in the development of microprocessor technology. He started Cognite Ventures in 2016 to develop, advise, and invest in new entrepreneurial ventures, especially around cognitive computing.
www.cogniteventures.com


Renato F. Salas-Moreno

Renato is CEO and Co-Founder of Vtrus Inc., a Seattle startup enabling a future where devices can socially collaborate, perceive, and learn like humans using cloud-based spatial AI. He received his PhD from the Robotic Vision group at Imperial College London. His research interests involve the development of real-time computer vision algorithms for navigation and semantic scene understanding. While at Imperial, he started Surreal Vision Ltd. (acquired by Facebook/Oculus VR) in an effort to bring cutting-edge camera mapping and tracking technology to VR/AR. Renato is a 2015 MIT Innovator Under 35 and an avid student pilot.


Hugo Swart

Hugo Swart serves as senior director of product management for Qualcomm Technologies, Inc. He is responsible for overseeing Qualcomm’s consumer electronics business, including go-to-market, product positioning and marketing, and P&L for Snapdragon platforms across adjacent markets such as virtual and augmented reality, drones, robotics, and TVs.
Swart joined Qualcomm in 2003 as a technical marketing manager in charge of promoting wireless data technologies to operators worldwide. He has held roles of increasing responsibility since that time and has led several successful projects from idea inception to customer adoption in the wireless space, as well as novel business models for content delivery.
Prior to joining Qualcomm, Swart served as sales engineer for Lucent Technologies and Telecom Italia.
Swart received his bachelor’s (1999) and Master of Science (2004) degrees in electrical engineering from the University of Campinas, Brazil. In addition, he received a Master of Business Administration (2008) from San Diego State University. 


Edward Tang

Edward Tang is the founder of Avegant and oversees its technical direction, from its groundbreaking Retinal Imaging Technology used in the award-winning Glyph to new developments in light field mixed reality displays. Mr. Tang has extensive optical, electrical engineering, and biomedical experience garnered from years of research and product development on MEMS technology at Tang Engineering and the University of Michigan. He holds a degree in Electrical Engineering from the University of Michigan.


Gordon Wetzstein

Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He is the leader of the Stanford Computational Imaging Lab, an interdisciplinary research group focused on advancing imaging, microscopy, and display systems. At the intersection of computer graphics, machine vision, optics, scientific computing, and perception, Prof. Wetzstein’s research has a wide range of applications in next-generation consumer electronics, scientific imaging, human-computer interaction, remote sensing, and many other areas. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist in the Camera Culture Group at the MIT Media Lab. He received a Ph.D. in Computer Science from the University of British Columbia in 2011 and, before that, graduated with Honors from the Bauhaus-Universität Weimar, Germany. His doctoral dissertation focused on computational light modulation for image acquisition and display and won the Alain Fournier Ph.D. Dissertation Annual Award. He organized the IEEE 2012 and 2013 International Workshops on Computational Cameras and Displays as well as the 2017 International Conference on Computational Photography, founded displayblocks.org as a forum for sharing computational display design instructions with the DIY community, and has presented a number of courses on Computational Displays and Computational Photography at ACM SIGGRAPH. Gordon is the recipient of an NSF CAREER Award; he won Best Paper Awards at the International Conference on Computational Photography (ICCP) in 2011 and 2014, as well as a Laval Virtual Award in 2005.