Rosa, Edgar & Elijah
Current technological advances allow automated, high-precision monitoring of behavior in virtual environments. In this project, we developed an automated setup to monitor rodent behavior in a projected virtual visual environment. We trained rats and mice to perform a visually guided spatial sequence. We found that while rats can learn this task, mice have greater difficulty performing it with comparable success.
Figure 1. Schematic of behavioral setup and Bonsai workflow
We used two behavioral chambers (Figure 1), one for mice and one for rats. The underside of the maze floor was designed for rear projection using a mirror and an LCD projector. We then used Bonsai to control the sound, light, reward, and cameras recording rodent behavior. The task required subjects to virtually touch a projected light stimulus in order to receive a water reward.
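The "virtual touch" described above can be reduced to a simple geometric test on the tracked animal position. The sketch below is illustrative only; the function and parameter names (`is_virtual_touch`, `touch_radius`) are assumptions and not taken from the actual Bonsai workflow:

```python
import math

def is_virtual_touch(centroid, stimulus_pos, touch_radius=30.0):
    """Return True when the tracked centroid (x, y), in pixels, falls
    within touch_radius pixels of the projected stimulus center.
    touch_radius is an illustrative value, not the setup's calibration."""
    dx = centroid[0] - stimulus_pos[0]
    dy = centroid[1] - stimulus_pos[1]
    return math.hypot(dx, dy) <= touch_radius

# Example: centroid at (105, 98), stimulus projected at (100, 100)
print(is_virtual_touch((105, 98), (100, 100)))
```

In the real setup this test would be evaluated on every camera frame, with a reward triggered on the first frame it returns True.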
Figure 2. Rat Training Protocol; Associating Light and Reward
Figure 3. Infrared Centroid Tracking and Heat Map
Rats quickly learned to touch a 'virtual' bottom-projected light for rewards. We expanded on this behavioral paradigm, finding that rats are capable of touching a sequence of at least 10 stimuli in order to receive a reward. As trials progressed, subjects showed a decreased latency to receive the reward, indicating a better understanding of the task demands (Figure 4). Computer vision tracking also suggests that they changed their exploration strategies to take the shortest path to the light and subsequent reward (e.g., see the circular paths to the highly occupied reward location in Figure 3). While mice can learn the initial auditory/reward pairings in this task, they did not successfully learn the visual/reward pairings, possibly due to poorer visual discrimination.
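An occupancy heat map like the one in Figure 3 can be built by binning the tracked centroid positions over the arena. The following is a minimal sketch; the arena dimensions and bin count are illustrative assumptions, not the values used in our setup:

```python
import numpy as np

def occupancy_heatmap(centroids, arena_w=640, arena_h=480, bins=32):
    """Bin (x, y) centroid positions into a 2D occupancy histogram,
    normalized so all cells sum to 1 (occupancy probability)."""
    xs = [c[0] for c in centroids]
    ys = [c[1] for c in centroids]
    hist, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, arena_w], [0, arena_h]])
    return hist / hist.sum()

# Toy example with three tracked positions
positions = [(100, 100), (105, 98), (300, 240)]
hm = occupancy_heatmap(positions)
print(hm.shape)
```

Highly occupied cells in such a map correspond to locations the animal visits most, e.g. the reward location in Figure 3.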
Figure 4. Delay to receive reward as a function of trial number