Implementing Eye Tracking and Augmented Reality at the UNH Vision Lab
In summer 2025, I had the opportunity to conduct research in the University of New Hampshire Vision Lab through the Research Experience and Apprenticeship Program (REAP). This lab uses a virtual reality headset to create experiments that test various psychological phenomena, such as responses to stimuli and depth perception. As a computer science major, my role was to enhance experiments in the Vision Lab by writing code for two new features: eye tracking and augmented reality. Eye-tracking data shows us where an individual focuses their attention during an experiment, and augmented reality creates immersive simulations by projecting 3D models over a real environment within a headset.
Virtual reality is an incredibly powerful tool in psychological research because it allows us to create controlled environments and collect biometric data, such as eye gaze. This information gives us deeper insights into how humans evaluate their surroundings to make decisions, which in turn helps us to understand the human brain. Augmented reality is also becoming increasingly popular in research. Projecting 3D objects over a real environment gives us the advantage of creating experiments that don’t require physical objects, while still allowing the user to feel as though they are inside a natural setting. Augmented reality is especially useful in fields that require people to analyze complex systems and models. For example, an engineer might use augmented reality to view a 3D model of a piece of machinery, while a medical student may use augmented reality to practice surgery in a realistic setting. By bringing this emerging technology to the university, we are able to unlock many new and exciting possibilities for experiments.
In January, I began working in the Vision Lab under the mentorship of Professor Ömer Dağlar Tanrikulu, the director of the lab. Throughout the spring semester, I assisted graduate student Katrín Aspelund through the INCO 590: Student Research Experience course. Katrín was working on a virtual reality project that tests ensemble size averaging in a supermarket environment. Ensemble size averaging is the process of looking at a group of objects and visually estimating the average size of the group. The objective of Katrín’s experiment was to test how accurately a participant can differentiate between the average volumes of flour packs rendered at different distances within a virtual supermarket.
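To make the participant’s task concrete, here is a small, purely illustrative Python sketch of the comparison being tested. The group sizes and volume ranges are made up for the example and are not the experiment’s actual values.

```python
import random

def average_volume(volumes):
    """Return the mean volume of a group of flour packs (arbitrary units)."""
    return sum(volumes) / len(volumes)

# Hypothetical volumes for two groups of flour packs rendered at different
# distances; the real experiment generates its stimuli inside Vizard.
near_group = [random.uniform(0.8, 1.2) for _ in range(8)]
far_group = [random.uniform(0.8, 1.2) for _ in range(8)]

# The participant's task, in effect: judge which group has the larger mean volume.
print("Near group average volume:", round(average_volume(near_group), 3))
print("Far group average volume: ", round(average_volume(far_group), 3))
print("Larger average:", "near" if average_volume(near_group) > average_volume(far_group) else "far")
```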
Adding eye tracking to this experiment would reveal which specific flour packs each participant fixates on to determine which group has the larger average volume. To incorporate eye-tracking functionality, Katrín and I began studying the fundamentals of SightLab, an eye-tracking software, over the course of the spring semester. I also learned how to use Vizard, a virtual reality development platform that we use to code experiments for our lab’s headset, the HP Omnicept Reverb G2.
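To give a flavor of what coding an experiment in Vizard looks like, here is a minimal sketch using plain Vizard calls rather than SightLab’s own setup. The model file and the box “flour packs” are stand-ins, not the lab’s actual assets.

```python
# A minimal Vizard scene, roughly the kind of script our experiments build on.
import viz
import vizshape

viz.setMultiSample(4)   # smooth edges in the rendered view
viz.go()                # open the graphics window / start the headset view

# Stand-in environment; the supermarket experiment loads its own models instead.
ground = viz.addChild('ground.osgb')

# Two simple boxes standing in for flour packs on a shelf.
pack_a = vizshape.addBox(size=[0.20, 0.30, 0.10])
pack_a.setPosition([-0.3, 1.2, 2.0])
pack_b = vizshape.addBox(size=[0.25, 0.35, 0.12])
pack_b.setPosition([0.3, 1.2, 2.0])
```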
Developing Virtual Reality Experiments with SightLab
During the spring semester, I applied for a REAP grant so that I could continue my Vision Lab research with Professor Ömer Dağlar Tanrikulu over the summer. By May, I had a thorough understanding of SightLab, and I was ready to start applying it to projects. In the first couple weeks of my REAP project, I configured the HP Omnicept Reverb G2 to run eye-tracking experiments. This involved updating Vizard, importing new software packages, and changing the settings of our virtual reality headset to allow for eye-tracking collection. I had to contact the creator of SightLab to resolve some of the issues I ran into, but by the end of the second week, eye tracking had been successfully enabled inside our headset.
During the third and fourth weeks, I investigated how to integrate eye tracking into Katrín’s supermarket experiment. Considering that I was working with someone else’s code, I dedicated time to properly understand the flow of the program. Due to the structure of the original experiment, I discovered that I would need to rewrite the code so that it would be compatible with SightLab. Before working on the large supermarket project, I created a series of small-scale projects to test the headset’s eye-tracking capabilities. This taught me how to set up my own SightLab experiment, and I documented this process so that other students can do the same in the future.
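As a rough illustration of the idea behind those small-scale tests, the sketch below checks, at a fixed sampling rate, which object the viewer is pointed at. SightLab records true eye gaze from the headset for us; here the head’s forward direction stands in for gaze, which is a deliberate simplification, and the target object is hypothetical.

```python
# Plain-Vizard illustration of "what is the participant looking at?"
import viz
import vizshape
import vizact

viz.go()
target = vizshape.addBox(size=[0.2, 0.3, 0.1])   # stand-in flour pack
target.setPosition([0, 1.6, 2.0])

def sample_gaze():
    eye = viz.MainView.getPosition()
    forward = viz.MainView.getMatrix().getForward()
    # Cast a 10 m line from the viewpoint along the viewing direction.
    end = [eye[i] + forward[i] * 10 for i in range(3)]
    info = viz.intersect(eye, end)
    if info.valid:
        print('Looking at', info.object, 'at point', info.point)

# Sample roughly 60 times per second during the trial.
vizact.ontimer(1/60.0, sample_gaze)
```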
By week five, our lab’s new augmented reality headset, the Vive Focus Vision, had arrived. Temporarily shelving the supermarket experiment, I spent the next few weeks configuring the new headset. During this time, I studied how to write and execute code for augmented reality experiments in SightLab. Our lab also acquired a video passthrough cable that connects the headset to the computer and displays the participant’s activity within the augmented reality environment on screen. This allows researchers to monitor the participant’s actions within the headset on the lab’s computer during an experiment. Just like I did with eye tracking, I created small experiments to test augmented reality, and I documented these procedures.
During week seven, it was time to start creating the new version of the supermarket experiment. This new version uses eye tracking and augmented reality to project 3D flour packs onto a table in the lab. By carefully rewriting the project’s code to be compatible with SightLab, I was able to replicate the supermarket experiment in augmented reality.
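As a sketch of the kind of layout code this version needs, the snippet below places two groups of box-shaped “flour packs” at table height. The coordinates, sizes, and group counts are illustrative rather than the experiment’s real parameters, and the passthrough configuration itself is not shown.

```python
# Lay out two groups of virtual flour packs resting on a (real) table surface.
import random
import viz
import vizshape

viz.go()

TABLE_HEIGHT = 0.75  # meters; assumed height of the lab table

def add_pack_group(center_x, depth, n=4):
    """Add n box 'flour packs' with randomly varied sizes, spaced in a row."""
    packs = []
    for i in range(n):
        w = random.uniform(0.15, 0.25)
        h = random.uniform(0.25, 0.35)
        d = random.uniform(0.08, 0.12)
        pack = vizshape.addBox(size=[w, h, d])
        # Center the row on center_x and rest each pack on the table top.
        pack.setPosition([center_x + (i - (n - 1) / 2) * 0.3, TABLE_HEIGHT + h / 2, depth])
        packs.append(pack)
    return packs

left_group = add_pack_group(center_x=-0.8, depth=1.5)
right_group = add_pack_group(center_x=0.8, depth=1.5)
```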
For the remaining weeks, I resolved the lingering technical errors in the experiment, and I began drafting a new experiment about retinal and perceived sizes for the lab to continue in the fall. Before wrapping up the supermarket experiment, I wrote comprehensive documentation. I compiled all my notes from the summer into a readable, user-friendly guide that explains how to set up eye-tracking and augmented reality experiments specifically for the UNH Vision Lab. This guide will help future students create their own experiments much faster.
Research Outcomes
Being able to leverage eye tracking has greatly improved experiments at the UNH Vision Lab. Before, the lab could only collect data about a user’s head position throughout an experiment. Now, we have complete gaze data that shows us where a participant is looking over the course of a trial. Additionally, being able to run augmented reality experiments is another crucial gain for the Vision Lab. Augmented reality allows us to combine virtual objects with the real world, thereby expanding the possibilities for future projects.
REAP was an incredible experience for both my professional and personal development. This research has taught me how to manage large code projects, as well as how to design my own experiment. Working with SightLab reinforced my knowledge of code libraries and modular programming. Moreover, it taught me how to use structures that I had not yet been exposed to in my coursework, such as coroutines and callback functions. Working in the Vision Lab also taught me a great deal about cognitive science and the visual system. Understanding the human brain is a complex and continuous task, and I love that I can use my technical knowledge to help with this effort.
When looking for academic research opportunities, I encourage undergraduates to keep an open mind and be willing to branch out. I have always loved the idea of combining my technical knowledge with other areas of interest, and REAP was a fantastic way to achieve my goals.
I would like to express my appreciation to my mentor, Professor Ömer Dağlar Tanrikulu, for his guidance and support during this project. I would also like to thank Katrín Aspelund, the graduate student with whom I collaborated, and Shannon Fleming, the Vision Lab manager this past summer. I am grateful for the wealth of psychology knowledge that I learned from everyone during this experience. I also thank the Hamel Center for Undergraduate Research, as well as Mr. Dana Hamel, for funding my REAP grant. My research this summer would not have been possible without all of you!
Author and Mentor Bios
Daniel Sixon is a sophomore from Milford, New Hampshire. He is a computer science major and will graduate in spring 2028. He currently works at the UNH InterOperability Lab, where he creates and runs tests that evaluate the performance of wireless devices. Daniel is a member of the Honors Undergraduate Student Committee, and he was a freshman orientation leader for the Honors Outdoor Orientation Trip this year. Outside of school, Daniel is an avid distance runner, and he hosts his own weekly radio show on 91.3 FM WUNH Durham.
Ömer Dağlar Tanrikulu is an assistant professor in the Department of Psychology at the University of New Hampshire. His research focuses on visual perception from a computational perspective. To learn more about his research, please visit: https://www.unhvisionlab.com/
Copyright © 2025, Daniel Sixon