Mobile and wearable technologies promise opportunity, connection, new experiences, and natural interactions. But what happens when these designs do not fully consider the relationship between people and the devices they use? Wheelchair users, for example, often carry and use multiple mobile computing devices, such as laptops, tablets, and smartphones. Upper-body motor impairments or physically restrictive wheelchair frames may limit their ability to interact with these devices. Designing technology for wheelchair users requires constant negotiation among the user’s needs, technological and functional constraints, and context. My research aims to support a broad range of people with diverse abilities as they interact with the world and the people around them. I use the term Chairables to describe a design approach that leverages the affordances of wheelchairs for mobile interaction. My ongoing research aims to support and empower people with disabilities as they engage in a range of activities, including mobility, social interaction, and competitive sports.
Patrick Carrington is an Assistant Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. from the University of Maryland, Baltimore County, and his research emphasizes the design of systems to support people with diverse abilities. He studies mobile and wearable technology, builds assistive devices, and explores how to create experiences that support empowerment, independence, and improved quality of life. His current projects span topics including access to digital content and media, transportation and mobility, and technologies for athletes with disabilities.
The replication of our world in virtual environments has been on our minds since as early as 1935, the year Stanley G. Weinbaum published “Pygmalion’s Spectacles.” On the one hand, eighty-five years later, we have never been closer to Weinbaum’s vision: progress in head-mounted display technology has been impressive and cutting edge. On the other hand, despite this technological advance, we remain far from Weinbaum’s ultimate vision of an immersive world that not only engages all five of our senses but also provides us with emotional qualia begotten by the virtual experience. One obstacle users face in perceiving such emotional qualities is the lack of tangible interaction. Hence, there is a growing need for new haptic technologies that enhance the user’s immersion. Yet it is not enough to improve only the quality of the mechanical stimulation; it is also crucial to understand how a haptic device can trigger emotions during the interaction.
Mounia Ziat is an Associate Professor at Bentley University. Drawing on her multidisciplinary background, Dr. Ziat takes a holistic approach to science; her goal is to better understand perception and human interaction with natural and artificial environments. For the last twenty years, she has studied haptic perception by combining engineering, cognitive psychology, human-computer interaction (HCI), and neuroscience to understand all aspects of human touch. From the moment fingers contact a surface to the time information reaches the brain, her research focuses on making sense of the sensations that lead to a stable perception of the world. Dr. Ziat holds a degree in Electronic Engineering and a master’s degree and Ph.D. in Cognitive Science.
Uncertain predictions permeate our daily lives (“will it rain today?”, “how long until my bus shows up?”, “who is most likely to win the next election?”). Fully understanding the uncertainty in such predictions would allow people to make better decisions, yet predictive systems usually communicate uncertainty poorly—or not at all. I will discuss ways to combine knowledge of visualization perception, uncertainty cognition, and task requirements to design visualizations that more effectively communicate uncertainty. I will also discuss ongoing work in systematically characterizing the space of uncertainty visualization designs and in developing ways to communicate (difficult- or impossible-to-quantify) uncertainty in the data analysis process itself. As we push more predictive systems into people’s everyday lives, we must consider carefully how to communicate uncertainty in ways that people can actually use to make informed decisions.
Matthew Kay is an Assistant Professor in Computer Science and Communication Studies at Northwestern University working in human-computer interaction and information visualization. His research areas include uncertainty visualization, personal health informatics, and the design of human-centered tools for data analysis. He is intrigued by domains where complex information, like uncertainty, must be communicated to broad audiences, as in health risks, transit prediction, and weather forecasting. He co-directs the Midwest Uncertainty Collective (http://mucollective.co) and is the author of the tidybayes (https://mjskay.github.io/tidybayes/) and ggdist (https://mjskay.github.io/ggdist/) R packages for visualizing Bayesian model output and uncertainty.