Augmenting Social Reality for Lifelong Learning
This talk is a featured presentation of the National Science Foundation Research Experience for Undergraduates (NSF REU) on Computational Methods for Understanding Music, Media, and Minds. This event is free and open to all faculty, staff, students and community members.
Lunch is sponsored by the Goergen Institute for Data Science. Please register to ensure we order enough food for everyone.
Abstract: Learning is a social endeavor. Social interaction provides special learning resources that raise motivation, knowledge awareness, critical thinking, and conflict resolution. Such learning resources, however, are less accessible to people with underdeveloped social skills, such as students with autism, as well as to students from underrepresented groups, including ethnic minorities, women, and those of low socioeconomic status. Learning is situated in the immediate social reality. It therefore remains challenging to facilitate learning, as thoughts and feelings are hidden between people, tightly mapped to the physical space, and subject to change in response to the spontaneous and complex flow of social interaction.
In this talk, I will describe my research exploring the design space of what I call Augmented Social Reality, which elevates lifelong learning skills for students with diverse abilities and backgrounds through technology-enhanced social cognition and social interaction situated in the immediate physical and social environments. I will focus on two projects. The first augments pretend play through an augmented and tangible user interface to help young children with and without autism develop “theory of mind” in imaginative play. The second, “Sensing Curiosity in Play and Responding”, uses theory- and data-driven approaches to elaborate the fine-grained peer-to-peer interaction dynamics that lead to positive changes in curiosity, and to design a peer-like collaborative embodied conversational agent that fosters curiosity in small-group STEM learning. Through these projects, I will reflect on the interdisciplinary opportunities and future directions of designing an accessible and supportive social reality for better learning, work, and life, to approach complex challenges and changing environments.
Bio: Zhen Bai is an assistant professor co-leading the ROCHCI group in the Department of Computer Science at the University of Rochester. Zhen received her PhD from the Graphics & Interaction Group at the University of Cambridge in 2015, and was a postdoctoral fellow at the Human-Computer Interaction Institute and the Language Technologies Institute at Carnegie Mellon University before joining the University of Rochester.
Wednesday, June 19 at 12:00pm to 1:00pm
Hutchison Hall, Lander Auditorium 140
Hutchison Hall, Rochester, NY