Deeptech

ARIA is Australia's first and only AR glasses maker.

Led by our blind colleagues, we develop bleeding-edge technologies to solve the hard problems experienced by those living with vision disability. To do this, we work with a lot of smart, mission-driven people to get the job done.

The technology developed at ARIA is an interesting mix of scientific research translation, rapid experimentation and productization with blind users.

We’re open to trying out new ideas and technologies, and when we can’t find something we need, we go build it. ARIA invests in developing new and emerging technologies that support our mission.

We support over 20 researchers from leading universities, including UTS and the University of Sydney, working on novel solutions: real-time, low-power approaches to SLAM; sensor systems such as MEMS lidar, millimeter-wave radar, and event cameras; and novel approaches to high-bandwidth, very low-power processing, including novel ASIC designs, neuromorphic processing, geometric computation, and high-efficiency on-device generative AI.

Investment
$9.6M

in research translation

Cooperative Research Centres Projects (CRC-P)

Gotta have a cool lab

While we lean toward the practical, the intuitive, and rapid & scrappy idea validation, much of what we do is tied to human perception and human experience. That rarely lends itself to clean experimentation and statistical power when you’re trying to fine-tune a system, or to tease out a subtle problem without breaking it through reductive experimentation. Reality, and the humans in it, are complex. ARIA’s benefit is accurate and effective augmented perception in the middle of all that complexity; and at the end of the day, ARIA is being developed as a medical device, so devising the appropriate trials and consistent data collection is a must.

A blind participant testing out ARIA smartglasses in the Human Augmentation Lab.

A huge challenge is effectively simulating reality, and measuring people inside such a simulation is just as hard. This is where the HAL (Human Augmentation Lab) comes in. A fair description of it is a "poor man’s" version of Star Trek’s Holodeck, coupled with high-precision physical, bio and neuro state tracking. With this set-up, we can create repeatable experiments and capture a ground-truth baseline against which we can measure ARIA’s performance and the performance of the people in the experiment. This allows us to make incremental and quantifiable improvements to the technology.

Some of the equipment at the HAL includes 36 motion-tracking cameras, a state-of-the-art 64-channel spatial audio simulator, lighting controls, EEG and EMG tracking, galvanic skin response (GSR), and heart rate tracking.
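To make the ground-truth comparison concrete, here is a minimal sketch (in Python, with illustrative names and synthetic data, not ARIA’s actual pipeline) of one way a headset’s estimated head trajectory could be scored against motion-capture ground truth, using a simple root-mean-square absolute trajectory error after translation-only alignment:

import numpy as np

def absolute_trajectory_error(estimated, ground_truth):
    """Root-mean-square error between estimated and ground-truth positions
    (both N x 3 arrays, in metres), after removing the constant offset
    between the two coordinate frames."""
    assert estimated.shape == ground_truth.shape
    # Translation-only alignment: centre both trajectories on their centroids.
    est = estimated - estimated.mean(axis=0)
    gt = ground_truth - ground_truth.mean(axis=0)
    errors = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Synthetic example: a smooth "ground truth" path from the motion-capture
# system versus a noisy estimate from the headset's own tracking.
t = np.linspace(0, 10, 500)
ground_truth = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
estimated = ground_truth + np.random.normal(scale=0.02, size=ground_truth.shape)
print(f"ATE (RMSE): {absolute_trajectory_error(estimated, ground_truth):.3f} m")

In practice a full evaluation would also align rotation and scale (for example with a Umeyama/Procrustes fit) and report per-axis and orientation errors, but the principle is the same: the motion-capture system provides the baseline, and any tracking improvement shows up as a smaller error against it.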

A blind participant wearing an ARIA smartglasses prototype.
A blind participant testing out an ARIA smartglasses prototype.
Research Collaborations

Chief Investigators

ARIA currently supports over 20 researchers through the University of Technology Sydney, the University of Sydney, and Swinburne University. These research collaborations focus on multidisciplinary research translation, including spatial audio engine design, psychoacoustics, sensory augmentation and integration, cognitive workload monitoring and adaptation, neuroscience, ophthalmic clinical practice, orthoptic practice, orientation and mobility, health economics, health data science, sensor simulation, high-efficiency machine vision pipelines, vision classification, visual odometry and SLAM, and novel high-efficiency sensor and processing systems including event cameras and neuromorphic processors.

Professor CT Lin

Chief Investigator for Human Perception
Distinguished Professor, School of Computer Science, University of Technology Sydney; Core Member, Centre for Artificial Intelligence; Director, Computational Intelligence and Brain Computer Interface Centre (CIBCI), FEIT, UTS; Co-Director, Centre for AI (CAI), FEIT, UTS. Dr Chin-Teng Lin received his B.S. degree from National Chiao-Tung University (NCTU), Taiwan in 1986, and his Master’s and Ph.D. degrees in electrical engineering from Purdue University, USA in 1989 and 1992, respectively. He is currently a Distinguished Professor in the Faculty of Engineering and Information Technology and Co-Director of the Centre for Artificial Intelligence at the University of Technology Sydney, Australia.

Dr Donald Dansereau

Chief Investigator for Sensor Design and Low-level Vision
Senior Lecturer, School of Aerospace, Mechanical and Mechatronic Engineering, University of Sydney. Dr Donald Dansereau is a senior lecturer in the School of Aerospace, Mechanical and Mechatronic Engineering, and the Perception Theme Lead for the Sydney Institute for Robotics and Intelligent Systems. His work explores how new imaging devices can help robots see and do, encompassing the design, fabrication, and deployment of new imaging technologies. In 2004, he completed an MSc at the University of Calgary, receiving the Governor General’s Gold Medal for his pioneering work in light field processing.

Dr Vincent Nguyen

Chief Investigator for Human-Centric Research and Design
Lecturer, Orthoptics, University of Technology Sydney. Dr Vincent Nguyen graduated with Honours as an orthoptist in 1993 and with a Master of Applied Science in 1996. Vincent was awarded a PhD (University of Sydney) in 2003 for his work in visual psychophysics on image perception between the two eyes, known as binocular rivalry. He then completed a two-year postdoctoral fellowship at the Centre for Vision Research, York University, Toronto, Canada, working with Professor Ian Howard on human depth perception. On returning to the University of Sydney, Dr Nguyen worked collaboratively on the organisation of receptive fields of retinal ganglion cells using patch-clamp techniques. In clinical practice, Dr Nguyen was appointed by NSW Health as a Clinical Electrophysiologist in 2007 and later at the School of Optometry and Vision Sciences, UNSW. In 2012, Dr Nguyen joined Vision Australia to work as an orthoptist assisting people with low vision. Dr Nguyen joined the Orthoptic discipline in the Graduate School of Health at UTS to fulfil his lifelong journey to study and teach the biological underpinnings of vision.

Dr Viorela Ila

Chief Investigator for Spatial Registration of Semantic and Context Information, Algorithmic Efficiency & Compressed Representations
Senior Lecturer, School of Aerospace, Mechanical and Mechatronic Engineering, and the Centre for Robotics and Intelligent Systems, The University of Sydney. Dr Viorela Ila’s research interests span from robot vision to advanced techniques for simultaneous localization and mapping (SLAM) and 3D reconstruction, based on cutting-edge computational tools such as graphical models, modern optimization methods and information theory.

Dr Teresa Vidal Calleja

Chief Investigator for Multi-sensor integration and Spatial SLAM
Senior Lecturer, School of Mechanical and Mechatronic Engineering, University of Technology Sydney; Core Member, Centre for Autonomous Systems (CAS). Dr Teresa Vidal-Calleja received her BSc in Mechanical Engineering from the National Autonomous University of Mexico (UNAM), Mexico City, her MSc in Electrical Engineering (Mechatronics option) from CINVESTAV-IPN, Mexico City, and her PhD in Automatic Control, Computer Vision and Robotics from the Technical University of Catalonia (UPC), Barcelona, Spain, in 2007.

Professor Craig Jin

Chief Investigator for Auditory Sensory Augmentation
Associate Professor, School of Electrical and Information Engineering, University of Sydney. Craig Jin is recognised worldwide as a leader in the recording, generation, and perception of spatial audio. In 2005, he was awarded a QEII Fellowship to pursue this research. Dr Jin’s research investigates signal processing related to spatial audio, as well as models of the human auditory system related to spatial audio perception and auditory scene analysis. Dr Jin’s most significant contribution to the field of audio engineering is the invention of a spatial hearing aid, for which he was recognised nationally. The spatial hearing aid restores the ability of hearing aid users to localise sounds and to follow a conversation amongst a number of competing conversations – skills that are lost with conventional hearing aids.