BostonCHI Feb 2022, featuring Paul Kahn & Dario Rodighiero
Join Paul and Dario in a special lunchtime seminar on Feb 8th.
What lessons can we learn from the enormous outpouring of online visualizations related to the Covid-19 pandemic?
Please join Paul Kahn and Dario Rodighiero for a presentation of the Covid-19 Online Visualization Collection (COVIC) and the programs developed to view and access the collection. COVIC currently contains more than 10,000 visualizations, a collective response to the pandemic gathered from online sources around the world. We have assembled this collection of animations, interactive visualizations, diagrams, charts, illustrations, and maps; assigned consistent metadata attributes to each item; and published the collection online, along with web-based applications that enable researchers to explore it in an open-ended fashion. The goal of the project is to provide a resource for teaching and research. COVIC defines a problem space in a way that does not predict who will seek solutions or what solutions will be found.
Paul Kahn is a lecturer in the Information Design and Data Visualization program at Northeastern University, where he leads the COVIC project. He was previously active in hypertext research and in agencies in the US and France offering services in information architecture, interface design, and experience design. He now lectures and writes about information design history.
Dario Rodighiero is a postdoc at metaLAB (at) Harvard, currently working at Bibliotheca Hertziana in Rome. He has completed numerous visualization projects combining design, data, and the humanities. A current project is Surprise Machines, a digital installation that shows the Harvard Art Museums’ extensive collection through a choreographic interface capable of capturing body gestures.
Schedule – EST (UTC-5)
Please note: this event is at lunchtime (12 noon EST).
Join Jared in this virtual talk hosted by GBC/ACM and BostonCHI
Much of what holds us back is outdated thinking about what both UX and Agile are. We’re stuck in 2001 when Agile was first conceived, and today’s UX practices were in their infancy. We’ve learned so much in the last two decades. We know how to do better.
Agile and UX can work together. To do that, we need to reframe some misconceptions about how we put them together. We need to replace old, dysfunctional habits with state-of-the-art techniques and processes.
This online event will be held in Zoom.
6:50 – 7:00: Brief introduction
7:00 – 8:30: Jared speaks, then Q&A
About Jared Spool
Jared M. Spool is a Maker of Awesomeness at Center Centre – UIE. Center Centre is the school he started with Leslie Jensen-Inman to create industry-ready User Experience Designers. UIE is Center Centre’s professional development arm, dedicated to understanding what it takes for organizations to produce competitively great products and services.
In the 43 years he’s been in the tech field, he has worked with hundreds of organizations, written two books, published hundreds of articles and podcasts, and toured the world speaking to audiences everywhere. When he can, he does his laundry in Andover, Massachusetts.
We would like robots to be able to adaptively help people in their day-to-day lives, but the state of the art in robot learning is typically either under-informed about the needs and abilities of actual users or is designed and tested in highly controlled environments and interactions that fail to reflect real-world noise and complexity. In our work, we focus on identifying the real-world situations where current human-robot interaction (HRI) and robot learning algorithms fail, and on developing new methods that enable robots to robustly learn to assist non-expert teachers under real-world noise and complexity. This includes using human-centered design to develop more realistic simulated teachers for early algorithm development, incorporating both teacher and environmental reward into state-of-the-art deep reinforcement learning algorithms, finding new ways to model and take advantage of rich-but-noisy human feedback, and designing novel models that enable robot-robot collaboration to improve detection of human attention. Finally, throughout all of this work, we seek to break down the artificial disciplinary divide between service robotics for non-disabled users and assistive robotics for users with disabilities, and to ensure that our robots treat all users as valued partners who are integrated into the social and physical environments in which they live their lives.
Elaine Schaertl Short is the Clare Boothe Luce Assistant Professor of Computer Science at Tufts University. She completed her PhD under the supervision of Prof. Maja Matarić in the Department of Computer Science at the University of Southern California (USC). She received her MS in Computer Science from USC in 2012 and her BS in Computer Science from Yale University in 2010. From 2017 to 2019 she worked as a postdoctoral researcher in the Socially Intelligent Machines Lab at the University of Texas at Austin. At USC, she received numerous awards for her contributions to research, teaching, and service, including being one of very few PhD students to have received all three of the CS department’s Best TA, Best RA, and Service awards.
Elaine’s research seeks to improve the computational foundations of human-robot interaction by designing new algorithms that succeed in contexts where other algorithms’ assumptions frequently fail, such as in child-robot interaction, in minimally-supervised public space deployments, and in assistive interactions. As a disabled faculty member, Elaine is particularly passionate about disability rights in her service work. In addition to having recently joined the new AccessComputing Leadership Corps, she is the Communications Chair and Community Liaison of AccessSIGCHI, an advocacy group that works to increase the accessibility of the 24 SIGCHI conferences.