The replication of our world into virtual environments has been on our minds since as early as 1935, the year Stanley G. Weinbaum published “Pygmalion’s Spectacles.” On the one hand, eighty-five years later, we have never been closer to Weinbaum’s vision. Technological progress in head-mounted displays has been impressive and cutting edge. On the other hand, despite this advance, we remain far from Weinbaum’s ultimate vision of an immersive world that not only engages all five of our senses, but also provides us with emotional qualia begotten by the virtual experience. One obstacle that prevents users from experiencing these emotional qualities is the lack of tangible interaction. Hence, there is a growing need to create new haptic technologies that enhance the user’s immersion. Yet, it is not enough to focus only on improving the quality of the mechanical stimulation; it is also crucial to understand how a haptic device can trigger emotions during the interaction.
Mounia Ziat is an Associate Professor at Bentley University. Relying on her multidisciplinary background, Dr. Ziat’s approach to science is holistic; her goal is to better understand perception and human interaction with both natural and artificial environments. For the last twenty years, she has been studying haptic perception, combining engineering, cognitive psychology, human-computer interaction (HCI), and neuroscience to understand all aspects of human touch. From the moment the fingers contact a surface to the time the information reaches the brain, her research focuses on making sense of the sensations that lead to a stable perception of the world. Dr. Ziat holds an Electronic Engineering degree and a Master’s and Ph.D. in Cognitive Science.
Uncertain predictions permeate our daily lives (“will it rain today?”, “how long until my bus shows up?”, “who is most likely to win the next election?”). Fully understanding the uncertainty in such predictions would allow people to make better decisions, yet predictive systems usually communicate uncertainty poorly—or not at all. I will discuss ways to combine knowledge of visualization perception, uncertainty cognition, and task requirements to design visualizations that more effectively communicate uncertainty. I will also discuss ongoing work in systematically characterizing the space of uncertainty visualization designs and in developing ways to communicate (difficult- or impossible-to-quantify) uncertainty in the data analysis process itself. As we push more predictive systems into people’s everyday lives, we must consider carefully how to communicate uncertainty in ways that people can actually use to make informed decisions.
Matthew Kay is an Assistant Professor in Computer Science and Communications Studies at Northwestern University working in human-computer interaction and information visualization. His research areas include uncertainty visualization, personal health informatics, and the design of human-centered tools for data analysis. He is intrigued by domains where complex information, like uncertainty, must be communicated to broad audiences, as in health risks, transit prediction, or weather forecasting. He co-directs the Midwest Uncertainty Collective (http://mucollective.co) and is the author of the tidybayes (https://mjskay.github.io/tidybayes/) and ggdist (https://mjskay.github.io/ggdist/) R packages for visualizing Bayesian model output and uncertainty.
It can be tough to communicate what we want a computer to do on our behalf, regardless of the method: examples, demonstrations, code, etc. It can be especially tough when, halfway through specifying what we think we want, we realize we were wrong and only now know what we want the computer to do. Many existing methods for translating human intent into executable computer programs do not sufficiently support humans in refining their own intent and communicating it to the computer, or in reflecting the computer’s interpretation of that intent back to the human. In this talk, I will describe new interfaces for a particular technology, program synthesis, specifically designed to improve these critical components of the human-machine interaction loop so that humans can more quickly reach their goal: a program that behaves the way they want it to.
Elena Glassman is an Assistant Professor of Computer Science at the Harvard Paulson School of Engineering & Applied Sciences and the Stanley A. Marks & William H. Marks Professor at the Radcliffe Institute for Advanced Study, specializing in human-computer interaction. At MIT, she earned a PhD and MEng in Electrical Engineering and Computer Science and a BS in Electrical Science and Engineering. Before joining Harvard, she was a postdoctoral scholar in Electrical Engineering and Computer Science at the University of California, Berkeley, where she received the Berkeley Institute for Data Science Moore/Sloan Data Science Fellowship.